When I plot data points against time spanning two days, and I set a date locator to put a major tick at minutes 0 and 30 (one major tick every half hour), matplotlib throws an error. Consider this example:
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
datapoints = 3600 * 24 * 2  # 2 days, 1 datapoint per second
data = range(datapoints)  # any data
timestamps = [datetime.fromtimestamp(t) for t in range(datapoints)]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.xaxis.set_major_locator(mdates.MinuteLocator(byminute=[0, 30]))
plt.plot(timestamps, data)
plt.show()
Then I get the following error:
RuntimeError: RRuleLocator estimated to generate 2879 ticks from 1970-01-01 01:00:00+00:00 to 1970-01-03 00:59:59+00:00: exceeds Locator.MAXTICKS * 2 (2000)
2879 ticks is exactly the number of minutes in that timespan, meaning the estimate assumes one tick every minute. However, the locator should yield one tick every 30 minutes (2 ticks per hour over 48 hours = 96 ticks). Why are the estimate and the real value so far apart?
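To make the mismatch concrete, here is the back-of-the-envelope arithmetic as I understand it (this is my reading of the error message, not matplotlib's actual internals): the estimate appears to divide the span by the rule's base frequency and interval, ignoring the byminute filter.

# Hypothetical illustration of the two counts, not matplotlib internals
span_minutes = 2 * 24 * 60              # 2 days expressed in minutes
estimated_ticks = span_minutes // 1     # interval=1 -> one candidate tick per minute (~2880)
actual_ticks = 2 * 24 * 2               # byminute=[0, 30] -> 2 ticks per hour = 96
print(estimated_ticks, actual_ticks)    # 2880 96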
A workaround would be to raise the MAXTICKS value:
locator = mdates.MinuteLocator(byminute=[0,30])
locator.MAXTICKS = 1500
ax.xaxis.set_major_locator(locator)
That works and the graph renders nicely. However, that should not be needed, right? Why is this error occurring in the first place? Am I using the date locator wrongly?
DateLocators do account for the interval setting but not for the ones set by byminute, bysecond, etc. I guess you might say that this is a bug. Let's see if @tcaswell can confirm.
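If it is indeed the byminute filter that the estimate ignores, then expressing the spacing through interval should keep the estimate under MAXTICKS (this is a sketch under that assumption). One caveat: interval-based ticks follow the rule's start point, so they are not guaranteed to land exactly on :00 and :30 for every data range; the raised-MAXTICKS workaround above is the surer way to pin those exact positions.

import matplotlib.dates as mdates

# Sketch of an interval-based alternative, assuming the tick-count estimate
# respects interval as described above. 'ax' is the Axes from the question.
locator = mdates.MinuteLocator(interval=30)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))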