
v0.1.0 to v0.2.0 upgrade path #87

Open
David00 opened this issue Feb 5, 2023 · 19 comments

David00 (Owner) commented Feb 5, 2023

This issue is to summarize the upgrade path from v0.1.0 to v0.2.0. There are several changes between the versions that make upgrading a little tricky:

  • Removal of Docker. InfluxDB and Grafana now run as native applications.
  • CT Number Change. The channel numbers in v0.1.0 were from 0 through 5. The channel numbers in v0.2.0 changed to use 1 through 6.
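To make the renumbering concrete, it's a straight +1 shift of every channel label. A toy illustration of my own (not code from the project):

```python
def renumber_channel(old_name: str) -> str:
    """Shift a v0.1.0 channel label (ct0-ct5) to its v0.2.0 name (ct1-ct6)."""
    return f"ct{int(old_name[2:]) + 1}"

print([renumber_channel(f"ct{i}") for i in range(6)])
# ['ct1', 'ct2', 'ct3', 'ct4', 'ct5', 'ct6']
```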

Getting InfluxDB and Grafana installed natively is fairly easy, as is getting the data out of the Docker environment and into the native environment. The migration can take a considerable amount of time (many hours) if you have data spanning a year or more. The whole process is wrapped up in a script I wrote that you can download and run per the instructions in the following post.

Also, the original v0.1.0 installation instructions (before I had released the very first custom OS image) did not create a mapped path for the Grafana container. This just means that all of your Grafana settings/dashboards might be stored only inside the container, and not mapped to a local folder that's accessible outside Grafana... which means if you remove the Grafana container, you'll delete your dashboards. I will include commands to check on this prior to starting the upgrade procedure, and also provide steps to take backups of the Docker versions of Grafana and Influx.

See below to get started.

David00 (Owner) commented Feb 5, 2023

Here is the migration process. This is broken down into two main steps, across two comments (this one, and the next one). This comment/step is for backing up your existing power monitor data. The next comment is for executing the upgrade, installing native InfluxDB, and running the import.

Influx Backup Procedure

On the existing power monitor:

  1. Stop the power monitor service and the Grafana container:

    sudo systemctl stop power-monitor.service
    docker stop grafana
    
  2. Ensure that the InfluxDB Docker container has a mapped folder to the Pi's local storage:

    docker container inspect -f '{{ .Mounts }}' influx
    

    Sample output:

    [{bind  /opt/influxdb /var/lib/influxdb   true rprivate}]
    

    (Where /var/lib/influxdb inside the container is mapped to /opt/influxdb on your Pi. If your output matches the above, you're good to go.)

  3. Start the backup:

    docker exec -it influx influxd backup -portable -database power_monitor /var/lib/influxdb/backups/powermon_migrate
    

    Monitor the output for errors. If the backup does not complete properly, see below. If the backup finishes successfully, move on to step 4.

    3B. Backup Failure

    If you see any mention of "download shard # failed copy backup to file", or if the backup otherwise fails, this is potentially due to a bug in the early 1.8.x versions of Influx. To get around it, you can remove the InfluxDB container and recreate it with version 1.8.10, where the issue is supposedly fixed.

    Ensure that step 2 showed a mapped folder - otherwise, you'll lose the data in your InfluxDB container if you run the next commands!

    WARNING: Only run the following commands if the backup failed AND you have a mapped folder according to step 2.
    If your backup was successful, do not run this, and move on to step 4.
    If your backup failed but you do not have a mapped folder, see this question and answer for how to add a mapped folder to an existing container.

    !!! ONLY RUN THESE IF YOUR BACKUP FAILED AND YOU HAVE A MAPPED FOLDER !!!
    
    docker stop influx
    docker rm influx
    docker run -d --restart always --name influx -p 8086:8086 -p 8088:8088 -v /opt/influxdb:/var/lib/influxdb influxdb:1.8.10
    
    Then, try to rerun step 3.
    
  4. Create a tar archive of the backup files as root:

    sudo su
    tar -C /opt/influxdb/backups -cvzf /home/pi/powermon_migrate.tar.gz powermon_migrate
    exit
    
  5. Optionally, export the tar file from the Pi so that you can keep it safe. If you have another Raspberry Pi on your local network, you can use SCP to get the file onto the other Pi (replacing your IP addresses as necessary).

    sudo scp ~/powermon_migrate.tar.gz pi@192.168.0.30:~/
    

    ... where 192.168.0.30 would be the IP address of your second Raspberry Pi, and the password prompt is your login/ssh password for the other Pi.

    Other options for moving files off your Pi are WinSCP, FileZilla, or good old flash drives. If you have another computer on your local network with a Linux shell (like WSL on Windows), you can use SCP to copy from the Raspberry Pi to your Linux environment by reversing the order of the above command. From your other Linux computer, run the following, changing the IP address to match the one assigned to your existing Pi:

    scp pi@192.168.10.40:~/powermon_migrate.tar.gz ~/

The password prompt (assuming you're not using SSH keys) will be for the login of your Raspberry Pi, which might still be the default value of raspberry.


This concludes the Docker container InfluxDB backup - you should now have a file named powermon_migrate.tar.gz inside of ~/.
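Before you wipe anything on the old system, it's worth sanity-checking the archive. Here's a small sketch of my own (not part of the official procedure) that confirms the tarball opens cleanly and actually contains the powermon_migrate folder:

```python
import tarfile

def verify_backup(archive_path: str, expected_dir: str = "powermon_migrate") -> int:
    """Open the gzipped tar, confirm the backup folder is present,
    and return the number of entries in the archive."""
    with tarfile.open(archive_path, "r:gz") as tar:
        members = tar.getnames()
    if not any(expected_dir in name for name in members):
        raise ValueError(f"{expected_dir} not found in {archive_path}")
    return len(members)

# On the Pi, the path comes from step 4 above:
# print(verify_backup("/home/pi/powermon_migrate.tar.gz"), "entries OK")
```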

David00 (Owner) commented Feb 5, 2023

Here are the steps to upgrade your Pi running v0.1.0 of this software to v0.2.0. Also note that these steps still apply if you're trying to update your original v0.1.0 to v0.3.0.

Note that v0.3.0 includes a new configuration file and calibration mechanism, so the settings in your existing config.py file will not be needed anymore.

Upgrade and Data Restoration Procedure

The commands below should be run one at a time, unless otherwise specified.

  1. Ensure the power monitor is stopped, then update and upgrade your Pi:

     docker stop influx
     docker rm influx       # WARNING: Only run this command if your InfluxDB container has a mapped volume (step 2 from my previous comment above)
     sudo systemctl stop power-monitor.service
     sudo systemctl disable power-monitor.service
     sudo apt update && sudo apt upgrade -y
     sudo reboot

  2. Install InfluxDB:
     wget -q https://repos.influxdata.com/influxdata-archive_compat.key
     cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
     echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
     sudo rm -f /etc/apt/sources.list.d/influxdb.list
     sudo apt update
     sudo apt install influxdb
  3. Grab the backup script, which I just pushed to a new branch named upgrade-helpers:

     cd ~/
     wget https://raw.githubusercontent.com/David00/rpi-power-monitor/upgrade-helpers/rpi_power_monitor/db_migrate.py

It's unlikely you have changed the database name from power_monitor, but if you have, edit the db_migrate.py file and change the LEGACY_DB_NAME field to match your database name. The database name comes from your old config.py file, line 27:

db_settings = {
    'host': 'localhost',
    'port': 8086,
    'username': 'root',
    'password': 'password',
    'database': 'power_monitor'
}

  4. Prepare to run the migration script with root privileges. Here are some important notes about this script:
  • It can take a very long time (potentially hours!!) if you have a lot of data.
  • But, before you leave it overnight, expect it to ask you for input a couple of times within the first 10-20 minutes of usage.
  • During this migration, your old data will be downsampled into 5 minute intervals, and placed into a new retention policy that's kept indefinitely. So, if you have any existing Grafana dashboards that look at really old data, you'll have to adjust them to look at the retention policy named rp_5min instead of the default retention policy autogen. More to come on this when I finish creating the new documentation site.
  • You'll be prompted about the channel number range that your deployment is using. The original v0.1.0 named the channels 0 - 5, and in v0.2.0 I changed that to use 1 - 6. The migration script will ask you what scheme you're using, and if on the old scheme, it will convert your CT0 measurements to CT1, and CT2 to CT3, etc.
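To make the downsampling concrete, here's a toy illustration of my own (not the script's actual code) of what "downsampled into 5 minute intervals" means: every 5-minute bucket of high-resolution points collapses into its mean.

```python
from statistics import mean

def downsample_5min(points):
    """points: list of (unix_ts, value) tuples.
    Returns {bucket_start_ts: mean_of_values_in_bucket}."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % 300, []).append(value)  # 300 s = 5 min
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

print(downsample_5min([(0, 100.0), (150, 200.0), (300, 400.0)]))
# {0: 150.0, 300: 400.0}
```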

Make sure the docker version of InfluxDB has been stopped (if you didn't already stop it from step 1 above). Then, make sure the new native InfluxDB is running with:

sudo systemctl status influxdb

Then, ensure that the Python influxdb client library is installed with:

sudo pip3 install influxdb==5.2.3

  5. Start the migration:

     sudo python3 ~/db_migrate.py

     Keep an eye on it until you see the prompt asking you which CT numbering scheme you are migrating from. After that, you should be free to let it do its thing.


There are still a few things to do for the upgrade, like:

  • Update Grafana's container image (or migrate Grafana to a native install like we did for Influx)
  • Remove your old data from the database that was down-sampled
  • Update the power monitor software and configure it.

I'd like to make sure this process works well with a few people before transcribing what I've written here to the new docs that I'm working on.

I will reserve the next comment space to address the remaining items.

David00 (Owner) commented Feb 6, 2023

Reserved to continue from above

5ft24dave commented Feb 6, 2023

db_migrate is failing. Did the backup as above, everything going smooth, extracts then bombs out.
I know the db name is correct as it's what's in the config.py file:

    Backup found in: /home/pi/powermon_migrate.tar.gz

    Extracting backup to /opt/influxdb/power-monitor-influx-import. This can take up to 20 minutes (or more). Please wait. You will not see any output while it is extracting.
    Backup extracted in 35.9306263923645 seconds.
    Traceback (most recent call last):
      File "/home/pi/db_migrate.py", line 573, in <module>
        start()
      File "/home/pi/db_migrate.py", line 504, in start
        validate_rps()
      File "/home/pi/db_migrate.py", line 520, in validate_rps
        existing_rps = client.get_list_retention_policies()
      File "/usr/local/lib/python3.9/dist-packages/influxdb/client.py", line 794, in get_list_retention_policies
        rsp = self.query(
      File "/usr/local/lib/python3.9/dist-packages/influxdb/client.py", line 458, in query
        results = [
      File "/usr/local/lib/python3.9/dist-packages/influxdb/client.py", line 459, in <listcomp>
        ResultSet(result, raise_errors=raise_errors)
      File "/usr/local/lib/python3.9/dist-packages/influxdb/resultset.py", line 25, in __init__
        raise InfluxDBClientError(self.error)
    influxdb.exceptions.InfluxDBClientError: database not found: power_monitor

David00 (Owner) commented Feb 6, 2023

Thanks for trying - I just updated db_migrate.py to fix this. You can try again from step 3:

  3. Grab the backup script, which I just pushed to a new branch named upgrade-helpers.

.. and then rerun it with sudo python3 ~/db_migrate.py.

5ft24dave commented Feb 7, 2023 via email

David00 (Owner) commented Feb 7, 2023

Excellent. So now that you have the right data, you'll want to upgrade the power monitor software itself, and re-enable the service file. I haven't yet decided on the best way to do it, but one way would be to do the following, for the v0.2.0 (November 2022) release:

cd ~/
wget https://github.com/David00/rpi-power-monitor/archive/refs/tags/v0.2.0.tar.gz
tar -xzf v0.2.0.tar.gz
cd rpi-power-monitor-0.2.0/
pip install .

# Or, if pip install . doesn't work, try:
pip3 install .

Next, you'll need to manually update the new config.py file in ~/rpi-power-monitor-0.2.0/rpi_power_monitor/config.py.

The important thing to note is that the CT channel labels changed between versions, so as you're updating the settings, everything from ct0 in the old file should be applied to ct1 in the new file. Specifically, just these four variables:

  • ct_phase_correction (name changed in the new version to) CT_PHASE_CORRECTION
  • accuracy_calibration (name changed in the new version to) ACCURACY_CALIBRATION
  • GRID_VOLTAGE
  • AC_TRANSFORMER_OUTPUT_VOLTAGE

Finally, you'll want to update the service file to point to the new version of the code.

sudo pico /etc/systemd/system/power-monitor.service

If you followed the steps verbatim above, then you can replace the ExecStart line with the following:

ExecStart=/usr/bin/python3 /home/pi/rpi-power-monitor-0.2.0/rpi_power_monitor/power_monitor.py

Then, run the following to apply the changes, start the service, and check its status:

sudo systemctl daemon-reload
sudo systemctl start power-monitor.service
sudo systemctl status power-monitor.service

That's it! (Well, except for Grafana still running in a Docker container, but that's not too big of a deal.) Once you get your dashboards manually backed up, it's easy to remove.

Updating Grafana

If you have customized dashboards, you'll need to back them up manually, or else they will be removed. The default dashboards that go with this project are available on the Dashboard Wiki page.

To backup your dashboards, go into each dashboard, click the settings wheel at the top, and then find the JSON model button on the left. Copy the entire text contents of the JSON model out and save it to a text file for future import.

  1. Once you've stashed your dashboard JSON models in a safe place, or chosen to lose them, stop the Grafana container, and then remove it:

     docker stop grafana
     docker rm grafana

  2. Native Grafana Installation:
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install -y grafana

sudo systemctl daemon-reload
sudo systemctl enable grafana-server
sudo systemctl start grafana-server

Docker Removal

  1. Clean up the now-unused Docker containers, volumes, and networks. Caution: This will remove data!!!

     docker system prune -a --volumes

  2. Then, remove Docker with one of the following:
sudo apt purge docker

# If this fails, you can just disable Docker with:
sudo systemctl disable docker

Old Data Removal

The final thing to do, which is completely optional, is to remove your old high resolution power monitor data. As long as the migration script went well, all of your old high-res data has been downsampled and stored into new measurements, meaning your old data is safe to remove from Influx. You should retain the original backup archive just in case, though!

To remove your old data, we'll go into the Influx CLI and execute a few drop statements. The following will remove all of the high resolution data that is older than 30 days from the time you execute the command:

influx -precision rfc3339 -database power_monitor
DELETE from "raw_cts" WHERE time <= now() - 30d
DELETE from "home_load" WHERE time <= now() - 30d
DELETE from "net" WHERE time <= now() - 30d
DELETE from "solar" WHERE time <= now() - 30d
DELETE from "voltages" WHERE time <= now() - 30d
exit
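If you'd rather generate those statements than type them, here's a small helper of my own (not from the project) that builds the same DELETE statements; adjust the measurement list or the day count to taste before pasting the output into the influx shell:

```python
# Measurement names taken from the drop statements above.
MEASUREMENTS = ["raw_cts", "home_load", "net", "solar", "voltages"]

def delete_statements(days: int = 30):
    """Build one InfluxQL DELETE per measurement, removing points
    older than `days` days from now."""
    return [f'DELETE FROM "{m}" WHERE time <= now() - {days}d' for m in MEASUREMENTS]

for stmt in delete_statements():
    print(stmt)
```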

This concludes the upgrade process to get your environment fairly close to the v0.2.0 image. However, I am right on the brink of releasing v0.3.0, which contains a lot of improvements over v0.2.0, so stay tuned for that one (a beta is already out, but there are still a few issues I need to fix).

The upgrade path from v0.2.0 to v0.3.0 will be much easier.

5ft24dave commented
I can confirm that v0.2.0 works without issues under the 6.1.11 kernel on the Pi.

bobstanl commented
Dashboard Changes?

Hi David
Have had problems bringing up the USB-SSD but it is now working and I can SSH
Decided to keep the uSD that contains the old system as it was and start fresh on the SSD.
The new image is based on "Raspberry-Pi-OS-Lite_rpi_power_monitor-0.2.0+release"
Aside: I was able to advance configure using the Imager "gear" on your image to change names, passwords, wifi, etc. Can forward the way they did it if you are interested.
The db_migrate.py seemed to run OK last night on my old data backup from the uSD card.
Both influx and grafana service are reported running.
Power-monitor is not running yet as RPi in not back in outside box yet.
Grafana is working at correct IP and acted new, requiring new password.

Using the dashboard that existed, I wanted to check the old data in grafana but all I get on any time period is No Data!

The influxdb "save and test" shows "datasource is working. 19 measurements found".

Do the dashboards need to be changed?

I did not see that in the above.

David00 (Owner) commented Feb 16, 2023

Aside: I was able to advance configure using the Imager "gear" on your image to change names, passwords, wifi, etc. Can forward the way they did it if you are interested.

Excellent. I'm not sure that I can implement the credential and Wi-Fi customizations into the image build process, but sure, I'd be curious to see!

The db_migrate.py script moves all the high resolution data (which the dashboard looks at by default) into the downsampled measurements that the dashboard does not yet look at. The script does not carry over any of the high resolution data, which is why the dashboards appear to be empty.

All of your old data should be in the database, but you'll have to add a new panel to the dashboard that looks at the downsampled measurements. In the "show and tell" discussion for dashboards, I've shared a panel that shows the last 60 days worth of production and consumption data. You should be able to deploy the panel I've shared in this comment, but you don't need to do any of the SELECT commands that are shown. You should just be able to set your timezone variable as described and then import the panel JSON.

Once you start the power monitor service, you should start to see data populate again in the default dashboard.

Edit: To answer your question specifically, the dashboards do not need to be changed, but with the new continuous queries and retention policies, there is room for them to be improved to look at older data. Hopefully my explanation above about how the db_migrate script handled the high resolution data explained why your dashboard is empty.

bobstanl commented
I still don't understand.

My ideal is that I can query any of the old data (that hasn't been deleted by a uSD failure) for any desired day or number of hours.

A reason for displaying the data in a similar way is for debugging. Is this the same power and currents that I got before? If not, is it software? A hardware fault? Did I have the same current spike when the spa pump turned on last year or is today's turn-on spike higher? Do I have a new pump problem?

I remember my neighbor asking a question about solar output, so I could show him my solar for a day in June and compare it to a day in the winter.

I personally prefer to use bigger SSD's and keep all historical data rather than throw it away. This is why I am so annoyed by failing uSD cards.

You said: "The script does not carry over any of the high resolution data, which is why the dashboards appear to be empty."

I have been meaning to ask what is meant by "high resolution data". Will I still see the same current excursions that I could see before?

Can a dashboard be created to see the old data as before?

David00 (Owner) commented Feb 16, 2023

My ideal is that I can query any of the old data (that hasn't been deleted by a uSD failue) for any desired day or number of hours.

The data has been downsampled by the db_migrate.py script into 5 minute intervals. (Your backup will still contain all your original data though, untouched). By "high resolution" I am referring to the data that goes into the database once every second or two. The new docs cover this whole subject a little bit (but after a quick review, there is some more info I need to add to a few sections on this page):

https://david00.github.io/rpi-power-monitor/docs/v0.3.0-beta/database-info.html#overview

Will I still see the same current excursions that I could see before?

Yes, when you start the power monitor again, the dashboard that you're used to should continue to function just like before, but the one thing that changed in v0.3.0 is that the power monitor software now configures InfluxDB to automatically downsample the high resolution data into the 5 minute averages. The downsampling happens in the background and is basically a completely different copy of the high resolution data.

For example... with updates coming in every second, there would be about 300 data points every 5 minutes (per channel!) in the "high res" data. InfluxDB will now automatically average all 300 of those points together and store the resulting average in a new retention policy and measurement. The averaged data will be stored forever, so you can always query it in the future. The "high res" data will be deleted after 30 days, which means the home power dashboard (as currently configured) can only go back 30 days.

The retention policies (RPs) are kind of like different folders that hold data inside individual measurements. The "high res" data goes into the default RP named autogen. So, the provided dashboard queries against the autogen RP.

If you look at one of the continuous query (CQ) creations, you can get an idea of what it's doing:

CREATE CONTINUOUS QUERY cq_home_power_5m ON power_monitor BEGIN SELECT mean(power) AS power, mean(current) AS current INTO power_monitor.rp_5min.home_load_5m FROM power_monitor.autogen.home_load GROUP BY time(5m) END
  • SELECT mean(power) AS power, mean(current) AS current ... FROM power_monitor.autogen.home_load
  • INTO power_monitor.rp_5min.home_load_5m
  • GROUP BY time(5m)

I've highlighted the key points above. So, for each GROUP of 5m, it's taking the mean power and mean current, FROM the database named power_monitor, RP named autogen, and measurement named home_load, and storing the result INTO the database named power_monitor, inside the RP named rp_5min, into the measurement named home_load_5m.
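One detail worth illustrating: GROUP BY time(5m) aligns points to wall-clock 5-minute boundaries, not to arbitrary windows. A quick sketch of mine (not InfluxDB code) showing which window a given timestamp lands in:

```python
from datetime import datetime, timedelta, timezone

def window_5m(ts: datetime):
    """Return the (start, end) of the 5-minute wall-clock window containing ts,
    mirroring how GROUP BY time(5m) buckets points."""
    start = ts.replace(minute=ts.minute - ts.minute % 5, second=0, microsecond=0)
    return start, start + timedelta(minutes=5)

t = datetime(2023, 2, 16, 7, 53, 41, tzinfo=timezone.utc)
start, end = window_5m(t)
print(start.isoformat(), "->", end.isoformat())
# 2023-02-16T07:50:00+00:00 -> 2023-02-16T07:55:00+00:00
```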

bobstanl commented
My problems are more existential. We cannot assume there is any valid data in the database. There is a lot of disk space being taken by something, but is it valid data?

Tried loading in the Overview-Panel.JSON as you suggested. Seems to load but the dashboard shows no panel. When I look at the json model version of the imported dashboard, it has only 48 lines, where the original file has 198 lines. No obvious error messages.

Remember this database came from the faulty uSD card that went through a series of operations. On the new v0.1 uSD, the system ran and I could, by randomly picking dates, see that some days had data and some did not. You said there was no easy way to look for gaps: "Missing is tough - without writing a custom python script to compare the data, I don't think there's an easy way."

Problem now is my original technique of changing dates on the "Home Power Dashboard" no longer works because of the v0.2 changes.

I know Jan 23 only has a few days of data, and I think a bit of Dec 22 is gone. Jan 22 is gone, but I think other dates might have data. It was unusual that the "db_migrate.py" marched through EVERY day, since late Nov 21, with no errors. Did it convert anything on those days that the old "Home Power Dashboard" said No Data?

Is there any way to find out if there is valid data on some date, say in July 22?

bobstanl commented Feb 16, 2023

Potential excess logging in db_migrate.py?

Sorry to be such a helpless pain in influxdb, all the small tricks I know so far have not shown me any results in query.

Looking at the big picture, how much data is on this new SSD drive with "Raspberry-Pi-OS-Lite_rpi_power_monitor-0.2.0+release" and my 2.1G powermon_migrate.tar.gz?

The whole du seems excessive at 13G, so I spent some time examining the drive. Here are the results:

List of sizes of top directories of root
Notes
/home has ~2G, this is my PM data backup
/opt has ~2G, presumably has this data imported
What is on /var???
root@powermon:/# du -h --max-depth=1
2.0G ./opt
56K ./tmp
15M ./root
1.2M ./run
0 ./sys
2.0G ./home
4.1M ./etc
16K ./lost+found
0 ./proc
50M ./boot
0 ./dev
4.0K ./mnt
4.0K ./media
4.0K ./srv
7.3G ./var
1.7G ./usr
13G .

Note /var has very large log
root@powermon:/var# du -h --max-depth=1
133M ./lib
4.0K ./opt
36K ./tmp
6.9G ./log
24K ./backups
16K ./spool
4.0K ./mail
275M ./cache
4.0K ./local
12K ./www
7.3G .

Note usr has large lib, maybe OK?
root@powermon:/usr# du -h --max-depth=1
735M ./lib
4.0K ./games
4.0K ./src
115M ./sbin
244M ./bin
103M ./local
29M ./include
1.6M ./libexec
490M ./share
1.7G .

/var/log is huge!
root@powermon:/var/log# du -h --max-depth=3
8.0K ./nginx
3.2G ./journal/5afa955048e549beaed1b5c169d97e04
3.2G ./journal
4.0K ./private
4.0K ./influxdb
4.0K ./runit/ssh
8.0K ./runit
124K ./grafana
156K ./apt
6.9G .

daemon.log is excessive 1.8G, does this fold into system logs, maybe several times?
root@powermon:/var/log# ls -alh
total 3.8G
drwxr-xr-x 9 root root 4.0K Sep 21 17:23 .
drwxr-xr-x 12 root root 4.0K Nov 11 23:50 ..
-rw-r--r-- 1 root root 721 Feb 14 17:44 alternatives.log
drwxr-xr-x 2 root root 4.0K Feb 14 18:07 apt
-rw-r----- 1 root adm 32K Feb 15 23:17 auth.log
-rw-r--r-- 1 root root 0 Sep 21 17:23 bootstrap.log
-rw-rw---- 1 root utmp 0 Sep 21 17:23 btmp
-rw-r----- 1 root adm 1.9G Feb 15 23:35 daemon.log <-----???
-rw-r----- 1 root adm 7.0K Feb 15 01:12 debug
-rw-r--r-- 1 root root 83K Feb 14 18:08 dpkg.log
-rw-r--r-- 1 root root 24K Nov 11 23:53 faillog
drwxr-xr-x 2 grafana grafana 4.0K Feb 15 00:04 grafana
drwxr-xr-x 2 influxdb influxdb 4.0K Oct 11 2021 influxdb
drwxr-sr-x+ 3 root systemd-journal 4.0K Sep 21 17:23 journal
-rw-r----- 1 root adm 228K Feb 15 01:12 kern.log
-rw-rw-r-- 1 root utmp 286K Feb 15 16:03 lastlog
-rw-r----- 1 root adm 225K Feb 15 01:12 messages
drwxr-xr-x 2 root adm 4.0K Feb 15 00:00 nginx
drwx------ 2 root root 4.0K Sep 21 17:23 private
drwxr-xr-x 3 root root 4.0K Sep 21 17:06 runit
-rw-r----- 1 root adm 1.9G Feb 15 23:35 syslog <----???
-rw-r----- 1 root adm 2.7K Feb 15 01:12 user.log
-rw-rw-r-- 1 root utmp 16K Feb 15 16:03 wtmp

Here are a few lines from the end of the daemon.log file. Maybe help with debugging too much logging in db_migrate.py?

Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.305666Z lvl=info msg="Executing query" log_id=0g15OJ20000 service=query query="SELECT mean(power) AS power, mean(current) AS current INTO power_monitor.rp_5min.net_5m FROM power_monitor.autogen.net WHERE time >= '2023-02-16T07:50:00Z' AND time < '2023-02-16T07:55:00Z' GROUP BY time(5m)"
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.306292Z lvl=info msg="Finished continuous query" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKG000 op_name=continuous_querier_execute name=cq_net_power_5m db_instance=power_monitor written=0 start=2023-02-16T07:50:00.000000Z end=2023-02-16T07:55:00.000000Z duration=0ms
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.306362Z lvl=info msg="Continuous query execution (end)" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKG000 op_name=continuous_querier_execute op_event=end op_elapsed=0.904ms
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.306618Z lvl=info msg="Continuous query execution (start)" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKW000 op_name=continuous_querier_execute op_event=start
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.306642Z lvl=info msg="Executing continuous query" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKW000 op_name=continuous_querier_execute name=cq_solar_5m db_instance=power_monitor start=2023-02-16T07:50:00.000000Z end=2023-02-16T07:55:00.000000Z
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.306833Z lvl=info msg="Executing query" log_id=0g15OJ20000 service=query query="SELECT mean(real_power) AS power, mean(current) AS current INTO power_monitor.rp_5min.solar_5m FROM power_monitor.autogen.solar WHERE time >= '2023-02-16T07:50:00Z' AND time < '2023-02-16T07:55:00Z' GROUP BY time(5m)"
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.307316Z lvl=info msg="Finished continuous query" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKW000 op_name=continuous_querier_execute name=cq_solar_5m db_instance=power_monitor written=0 start=2023-02-16T07:50:00.000000Z end=2023-02-16T07:55:00.000000Z duration=0ms
Feb 15 23:55:00 powermon influxd-systemd-start.sh[680]: ts=2023-02-16T07:55:00.307352Z lvl=info msg="Continuous query execution (end)" log_id=0g15OJ20000 service=continuous_querier trace_id=0g2JMLKW000 op_name=continuous_querier_execute op_event=end op_elapsed=0.740ms
(END)

David00 (Owner) commented Feb 16, 2023

Tried loading in the Overview-Panel.JSON as you suggested. Seems to load but the dashboard shows no panel. When I look at the json model version of the imported dashboard, it has only 48 lines, where the original file has 198 lines. No obvious error messages.

That sounds right - the Overview-Panel.json is just a single panel that goes into a dashboard.

Problem now is my original technique of changing dates on the "Home Power Dashboard" no longer works because of the v0.2 changes.

I was concerned about this change impacting that functionality, but I think it can be reasonably solved with an update to the Grafana dashboard. You can still technically pull data by changing the date in Grafana, but only for the previous 30 days. If you look at a date beyond that, the existing dashboard won't show any data, because the data gets downsampled and stored elsewhere permanently, and the original data gets deleted as it "ages out".

If you duplicate the panels and edit the queries in each one, and change the query to use the new retention policy that holds the downsampled data, your technique of changing the dates in Grafana will work on the edited panels. The only difference is the data it's pulling is "lower resolution", and by that I simply mean each data point represents a 5 minute average, and not a 1 or 2 second average.

Here's how you can edit a panel to tell it to query from the down sampled data:

[screenshot: editing the panel's query so its FROM clause reads from the rp_5min retention policy]

By making that change, you can then query as far back as your data goes AND also query over long intervals, like several years at a time, if you wanted to. It is not possible to do such a long-term query on the high-resolution data, so there's actually a ton of functionality unlocked by using the down-sampled data set. For example, you can compare monthly power output year over year, or even track the annual costs of your EV charger, etc.

You can even override the time frame for a single panel so that the date range selector doesn't interfere with what it shows:

[screenshot: overriding the time range for a single panel]


It was unusual that the "db_migrate.py" marched through EVERY day, since late Nov 21, with no errors. Did it convert anything on those days that the old "Home Power Dashboard" said No Data?

If there any way to find out if there is valid data in some date, say in July 22?

The db_migrate script simply stepped through every day, issued a query for each day, and inserted the result into the database. It did not make any checks to see if the data was legitimate or even existed.

You can query manually in InfluxDB to see if there is data there. Just remember that if you're looking at the new version of the database, any data beyond 30 days from now is going to be stored in the rp_5min retention policy.

I've put a table together here that shows the retention policies and measurement names:

https://david00.github.io/rpi-power-monitor/docs/v0.3.0-beta/database-info.html#structure

Here's a quick example from the influx shell:

select mean(power) from "rp_5min"."home_load_5m" where time <= '2022-07-07 00:00:00' and time >= '2022-07-06 00:00:00'

Excellent diagnosis of the large log files! It does look like the InfluxDB logging can be turned down significantly. We certainly don't need or want detailed results of all the InfluxDB actions, especially on a Pi. I'll look into that separately.

David00 (Owner) commented Feb 16, 2023

I made the following changes to InfluxDB's config file at /etc/influxdb/influxdb.conf:

[logging]

  level = "warn"

[http]
  log-enabled = false

This seems to have reduced the verbosity of the logging going to /var/log/daemon.log.

I will include this as the standard config when I build the v0.3.0 OS image.

bobstanl commented Feb 16, 2023 via email

ivynetca commented
I'm confused about the clean up of old high res data. Your directions above include:
DELETE from "rp_raw" WHERE time <= now() - 30d.
There are no rp_raw measurements in my database, but cts_raw (with 3 field keys) is still there. Is this just a typo?
I haven't started the new power_monitor up yet. Which measurements will be generated - rp_raw?
Thanks - Jeff

David00 (Owner) commented Nov 22, 2023

I'm confused about the clean up of old high res data. Your directions above include: DELETE from "rp_raw" WHERE time <= now() - 30d. There are no rp_raw measurements in my database, but cts_raw (with 3 field keys) is still there. Is this just a typo? I haven't started the new power_monitor up yet. Which measurements will be generated - rp_raw? Thanks - Jeff

Thanks for pointing this out! Yes, I do believe this is a typo, and it should be DELETE from "raw_cts" WHERE time <= now() - 30d;. I will update the instructions above.

The measurements (and retention policies) that will be generated in v0.3.0 are documented here, but feel free to follow up if you have further questions :)

https://david00.github.io/rpi-power-monitor/docs/v0.3.0/database-info.html#structure
