I'm working with big-data time series and am trying to detect outliers. In my research I've come across a variety of simple methods (e.g. here and here), and I'm trying to understand how they differ from the more basic approach I've been using in the past (quantiles and the IQR). I'm especially wondering about the point of, e.g., Grubbs'/Dixon's tests and Rosner's test, when the recommendation for the former is to validate the detections against boxplots, and Rosner's requires an a priori estimate of the number of outliers, e.g. as obtained from visual inspection of boxplots.
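For reference, this is roughly what I understand Grubbs' test to be checking; a minimal sketch in Python (the function names and the example data are mine, and I'm assuming the standard two-sided critical value based on the t distribution):

```python
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    """G = max |x_i - mean| / sd, the two-sided Grubbs' test statistic."""
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    """Two-sided critical value; flag the most extreme point if G exceeds this."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

x = np.array([2.1, 2.3, 1.9, 2.0, 2.2, 9.7])
print(grubbs_statistic(x) > grubbs_critical(len(x)))  # True -> the extreme point is flagged
```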
I guess I fail to see why I would use any of those tests when I could simply flag points outside the Q1 - 1.5*IQR and Q3 + 1.5*IQR fences?
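Concretely, what I've been doing amounts to something like this (again a minimal sketch; the function name and example data are just for illustration, and I'm assuming numpy's default quantile interpolation):

```python
import numpy as np

def iqr_fences(x, k=1.5):
    """Return (lower, upper) Tukey fences: Q1 - k*IQR and Q3 + k*IQR."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

x = np.array([2.1, 2.3, 1.9, 2.0, 2.2, 9.7])
lo, hi = iqr_fences(x)
print(x[(x < lo) | (x > hi)])  # [9.7]
```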
Any insight is highly appreciated.