As I am currently concentrating on noise, I managed to bring the measurement interval down to two seconds. For a few hours the data was even uploaded; in the uploaded data there was a gap of around 8-10 seconds every minute.
Without uploading, the Arduino IDE/terminal view kept running inside a room for more than one day.
To get good sound measurements on streets etc., and also to improve quality in general, we should measure every 1-2 seconds.
But it does not make sense to upload all of this data. For now it seems best to measure every 2 seconds, aggregate the data (avg or max values for all sensors) and upload just one value per sensor every 1-3 minutes. We could change the data model and transfer a min, a max and an average record. Two additional fields would make sense: the aggregation type (min, max, avg) and the number of raw measurements behind the value. Since we then know exactly how these values were built, this is most transparent and avoids misunderstandings when comparing values.
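As a rough illustration of what such an extended record could look like (a sketch only; the field names and types are my assumptions, not an agreed data model):

```cpp
// Rough sketch of an extended record; names and types are assumptions only.
enum AggregationType { AGG_MIN, AGG_MAX, AGG_AVG };

struct AggregatedRecord {
  uint8_t sensorId;         // which sensor the value belongs to
  float value;              // the aggregated value
  AggregationType aggType;  // how the value was built (min, max or avg)
  uint16_t rawCount;        // number of raw measurements aggregated
};
```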
A simpler short-term option: we keep the data model, build the aggregates on the Arduino and upload that data. For each sensor we should decide what is best (min, max or avg).
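A minimal sketch of how this could look in the Arduino loop, assuming hypothetical readSensor()/uploadValue() stubs (the real driver and upload calls would replace them):

```cpp
// Sample every 2 s, upload one aggregated value per sensor every 2 min.
const uint8_t SENSOR_COUNT = 4;
const unsigned long SAMPLE_MS = 2000UL;   // measure every 2 seconds
const unsigned long UPLOAD_MS = 120000UL; // upload every 2 minutes

float sumVal[SENSOR_COUNT], minVal[SENSOR_COUNT], maxVal[SENSOR_COUNT];
unsigned int sampleCount[SENSOR_COUNT];
unsigned long lastSample = 0, lastUpload = 0;

float readSensor(uint8_t id) {            // stub: replace with real driver
  return analogRead(id) * (5.0 / 1023.0);
}

void uploadValue(uint8_t id, float v) {   // stub: replace with real upload
  Serial.print(id); Serial.print(": "); Serial.println(v);
}

void resetAggregates() {
  for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
    sumVal[i] = 0; sampleCount[i] = 0;
    minVal[i] = 3.4e38; maxVal[i] = -3.4e38;
  }
}

void setup() {
  Serial.begin(9600);
  resetAggregates();
}

void loop() {
  unsigned long now = millis();
  if (now - lastSample >= SAMPLE_MS) {    // take a raw measurement
    lastSample = now;
    for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
      float v = readSensor(i);
      sumVal[i] += v;
      sampleCount[i]++;
      if (v < minVal[i]) minVal[i] = v;
      if (v > maxVal[i]) maxVal[i] = v;
    }
  }
  if (now - lastUpload >= UPLOAD_MS) {    // flush one aggregate per sensor
    lastUpload = now;
    for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
      if (sampleCount[i] > 0)
        uploadValue(i, sumVal[i] / sampleCount[i]); // avg; could be min/max per sensor
    }
    resetAggregates();
  }
}
```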
Raw data would still be possible (with a "raw" setting, each measured value is uploaded).
There is the comment from Sander: "Currently the NO and CO2 sensors require some time (50 seconds was mentioned) before a new measurement can be done." Is the Arduino stuck for 50 seconds when reading these values?
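If the driver call really does block, one way around it would be a non-blocking pattern driven by millis(): trigger a measurement, keep looping, and only fetch the result once the ~50 seconds have passed. The functions startGasMeasurement()/gasResultReady()/readGasResult() below are placeholders, not the real driver API:

```cpp
// Non-blocking pattern for a slow gas sensor (placeholder driver calls).
const unsigned long GAS_WAIT_MS = 50000UL; // ~50 s until a result is valid
unsigned long gasStartedAt = 0;
bool gasPending = false;

void startGasMeasurement() {}             // stub for the real driver call
bool gasResultReady() { return true; }    // stub for the real driver call
float readGasResult() { return 0.0; }     // stub for the real driver call

void pollGasSensor() {
  unsigned long now = millis();
  if (!gasPending) {
    startGasMeasurement();                // kick off a new reading
    gasStartedAt = now;
    gasPending = true;
  } else if (now - gasStartedAt >= GAS_WAIT_MS && gasResultReady()) {
    float v = readGasResult();            // reading is finished, fetch it
    gasPending = false;
    // feed v into the same min/max/avg buffers as the fast sensors
  }
}
```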
So the questions: would it make sense to aggregate the data? And will we still get good NO and CO2 values if we measure every 2 seconds and build aggregates?
I have a few other noise measuring tools (there is even a relatively accurate iPad app for noise that shows a level graph), so I could do it. But we should have general agreement from the system architects on whether this makes sense. If yes, we need an agreement/direction on how to change the code. I can make some changes, but general rules on calling subroutines, on how to aggregate, and on the data model should be settled first.