• Hi, thanks for your feedback.

    Yes, the tables are probably quite small compared to some of the other databases out there; we're not pushing SQL Server to its extremes, that's for certain!

    Once the data is inserted, the only queries are SELECT queries that retrieve a single system's data between two of our custom date values (int values representing the day and time as a custom epoch). In other words, we return rows WHERE the datetime column is between X and Y.
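
    Roughly, that query looks like the sketch below (using PDO with the pdo_sqlsrv driver; the table and column names here are simplified placeholders rather than our real schema, and with the current one-table-per-system layout the system filter is implicit in the table name):

        <?php
        // Minimal sketch of the date-range query, assuming PDO with the SQL Server
        // (pdo_sqlsrv) driver. Table/column names (sensor_log, system_id, log_time)
        // are placeholders, not the real schema.
        $pdo = new PDO('sqlsrv:Server=localhost;Database=logging', 'user', 'pass');

        $systemId  = 42;        // which system's chart we're drawing (example value)
        $fromEpoch = 1000000;   // custom int epoch, start of range (example value)
        $toEpoch   = 1000600;   // custom int epoch, end of range (example value)

        $stmt = $pdo->prepare(
            'SELECT sensor_id, log_time, value
               FROM sensor_log
              WHERE system_id = ?
                AND log_time BETWEEN ? AND ?
              ORDER BY log_time'
        );
        $stmt->execute([$systemId, $fromEpoch, $toEpoch]);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);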

    A day with average activity is approximately 600 rows; sometimes people view a whole week (~4,200 rows). We may build longer-range reports in the future. The selected date range is always continuous at the moment; however, I can't rule out that in the future we might want to do more complex things, like retrieving data from weekends only.

    Admittedly, these queries will always be quick on an indexed field. 🙂

    I suppose that this is more of an issue for long-term administration than performance: a table for every system is not ideal. From some back-of-the-envelope maths, we could reduce the logging system to one table covering the last 24/48 hours (replacing the existing 3,000 tables) to maintain compatibility with the log-receiving software, plus a further archive table of approximately 3,500,000 rows, indexed by system and date.
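
    Sketching that out with placeholder names (run once as a migration), the consolidated layout might look something like this:

        <?php
        // Rough sketch of the consolidated layout: a small "recent" table that the
        // log-receiving software keeps writing to, plus an archive table indexed by
        // system and the custom-epoch time column. All names are placeholders.
        $pdo->exec(
            'CREATE TABLE sensor_log_recent (     -- last 24/48 hours only
                 system_id INT          NOT NULL,
                 sensor_id INT          NOT NULL,
                 log_time  INT          NOT NULL, -- custom epoch
                 value     VARCHAR(255) NOT NULL
             )'
        );

        $pdo->exec(
            'CREATE TABLE sensor_log_archive (    -- approx 3,500,000 rows, query target
                 system_id INT          NOT NULL,
                 sensor_id INT          NOT NULL,
                 log_time  INT          NOT NULL,
                 value     VARCHAR(255) NOT NULL
             )'
        );

        $pdo->exec(
            'CREATE CLUSTERED INDEX ix_archive_system_time
                 ON sensor_log_archive (system_id, log_time)'
        );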

    It may be useful to know that we retrieve the information using PHP to generate a "Gantt chart" of sorts, showing time against each sensor's activity. Currently each chart is generated from static HTML divs, but in the future we plan to render it in an "infinitely scrollable" JavaScript timeline widget, where we would pre-fetch data just off to the left and right of the visible chart and feed it to the widget as JSON on the fly.
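
    The pre-fetch side would be something small like the endpoint sketched below: the page asks for a window just beyond the visible range and gets the rows back as JSON (again, table/column names follow the placeholder schema above, not our real one):

        <?php
        // Sketch of a pre-fetch endpoint for the scrollable timeline widget.
        $pdo = new PDO('sqlsrv:Server=localhost;Database=logging', 'user', 'pass');

        header('Content-Type: application/json');

        $systemId = (int) $_GET['system'];
        $from     = (int) $_GET['from'];   // custom epoch, left edge of pre-fetch window
        $to       = (int) $_GET['to'];     // custom epoch, right edge of pre-fetch window

        $stmt = $pdo->prepare(
            'SELECT sensor_id, log_time, value
               FROM sensor_log_archive
              WHERE system_id = ? AND log_time BETWEEN ? AND ?
              ORDER BY log_time'
        );
        $stmt->execute([$systemId, $from, $to]);

        echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));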

    Another thought: would it be faster to retrieve one pre-built record for the next/previous day and unpack it in PHP, rather than looping through a record set of X rows to build the data to pass to the chart?
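
    Something like the sketch below, where a hypothetical daily summary table stores the pre-built chart payload as a JSON string, so fetching a day is a single row instead of ~600. This table and its columns don't exist in our current schema; it's purely illustrative.

        <?php
        // Hypothetical alternative: one pre-built JSON payload per system per day.
        // sensor_log_daily, day_start and payload are made-up names for illustration;
        // $pdo and $systemId are reused from the earlier sketches.
        $dayStartEpoch = 1000000;   // custom epoch value for midnight of the requested day

        $stmt = $pdo->prepare(
            'SELECT payload
               FROM sensor_log_daily
              WHERE system_id = ? AND day_start = ?'
        );
        $stmt->execute([$systemId, $dayStartEpoch]);
        $chartData = json_decode($stmt->fetchColumn(), true);

    The obvious trade-off is that we'd have to rebuild that payload whenever a day's data changes, so it may not beat simply looping over an indexed row set.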

    Again, thanks for the constructive feedback.