Outlier tests for time-series data: difference among methods?

I'm working with big-data time series and am trying to detect outliers. In my research I've come across a variety of simple methods (e.g. here and here), and I'm trying to understand how they differ from the most basic approach I've been using so far (quantiles and the IQR). I'm especially wondering about the point of, e.g., the Grubbs'/Dixon's tests and Rosner's test, when the recommendation for the former is to validate the detections against boxplots, and Rosner's test requires an a priori estimate of the number of outliers, e.g. as obtained from visual inspection of boxplots.

I guess I fail to see why I would use any of those tests when I could merely flag points outside Q1 − 1.5·IQR and Q3 + 1.5·IQR?
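
For concreteness, this is roughly the rule I have in mind (a minimal sketch; the function name, the NumPy usage, and the 1.5 multiplier are just my own illustration of the fence, not taken from any of the linked methods):

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR], i.e. the usual boxplot fence."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.nanpercentile(x, [25, 75])  # quartiles, ignoring NaNs
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

# Toy example: only the 8.0 is flagged
series = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 8.0, 1.05])
print(iqr_outliers(series))
```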

Any insight is highly appreciated.