
Cannot fix invalid energy dashboard data #17010

Closed
3 tasks done
swa72 opened this issue Jun 23, 2023 · 31 comments

Comments

@swa72 commented Jun 23, 2023

Checklist

  • I have updated to the latest available Home Assistant version.
  • I have cleared the cache of my browser.
  • I have tried a different browser to see if it is related to my browser.

Describe the issue you are experiencing

I have a sensor for my gas usage (ESPHome) which had been working nicely with the Energy dashboard until about a week ago. On that day the Energy dashboard started showing a huge negative value (I may have updated HA that day; now running 2023.6.2).

image

This should be quick to fix, by going to Developer | Statistics and removing any bad reading.

However, there is no such reading in that timeframe ;-/

image

I've searched that evening in 5-minute steps to no avail. Do I have to go into HA's internals to fix this?

Here is my ESP config, just in case ...

# gas meter
  - platform: pulse_counter
    pin: 21
    name: "Gasverbrauch"
    update_interval: 60s
    filters:
      - lambda: |-
          static float total_value = 0.0;
          total_value += x * 0.01;
          return total_value;
    unit_of_measurement: "m³"
    accuracy_decimals: 2
    icon: 'mdi:fire'

Describe the behavior you expected

I can correct all values through Developer | Statistics for that particular entity.

Steps to reproduce the issue

...

What version of Home Assistant Core has the issue?

Home Assistant 2023.6.2

What was the last working version of Home Assistant Core?

No response

In which browser are you experiencing the issue with?

No response

Which operating system are you using to run this browser?

Win 10

State of relevant entities

No response

Problem-relevant frontend configuration

No response

Javascript errors shown in your browser console/inspector

No response

Additional information

No response

@swa72 (Author) commented Jun 23, 2023

Issue home-assistant/core#94896 appears to be similar.

@karwosts (Contributor)

Does the sensor have state_class: total ?

I'm not sure how to fix it now, but I don't think it would have happened if it was total_increasing, which is probably more correct.

@swa72 (Author) commented Jun 23, 2023 via email

@karwosts (Contributor)

Ah ok, nevermind then. I was hoping that was the issue, but it must be something else.

@karwosts (Contributor)

Unfortunately what you have is not a "bad sensor reading" that could be fixed in the statistics UI, but the result of an HA bug wherein the cumulative sum for a statistics sensor was accidentally reset when it should not have been.

I believe the underlying bug has been fixed, but you're left with a corrupted database.

It is possible to fix this, but you will have to do it in raw SQL, which is possible on HAOS if you install the SQLite add-on.

If you want to take that approach, let me know once you have the add-on installed (or if you already know how to access the database another way) and I can walk you through the steps.

@karwosts (Contributor) commented Jun 30, 2023

I'm going to leave some notes here on how I fixed this, assuming a sensor with state_class: total_increasing that is not supposed to decrease in value.

  1. Install the SQLite add-on and go to its UI page. In my case it was http://homeassistant.local:8123/hassio/ingress/a0d7b954_sqlite-web
  2. Select the statistics_meta table, and Query.
  3. Run this command, specifying the name of the sensor you want to fix:

SELECT * FROM "statistics_meta"
WHERE statistic_id IS "sensor.YOUR_SENSOR_NAME_HERE"

  4. You will get a result with one record. Note the value in the id column.
     image
  5. Select the statistics table, and Query.
  6. Run this command, with <id> being the value from step 4:

SELECT * FROM "statistics"
WHERE "metadata_id" IS "<id>"
ORDER BY start_ts ASC

e.g.

SELECT * FROM "statistics"
WHERE "metadata_id" IS "279"
ORDER BY start_ts ASC

  7. You will get a result with hundreds or thousands of records, depending on the age of your sensor. The sum column should typically be continually increasing as you scroll down through the rows. You are looking for a row where the sum column drops suddenly; it should correspond to the time in your graph where the big negative drop occurs.
     Note here the row where the sum drops from 725 to 1.87777. The value in the row with the drop is not important, just note that it has significantly reset or decreased; it could be 0 or a small number.
     image

     From here you want to note two things: the id field of the first row containing the drop, and the value of the sum from before it decreased.

  8. Now that you have this information, you can run the following query. Be careful with this, as you can blow up your data if you don't do it right.

UPDATE "statistics"
SET sum = sum + <sum from step 7>
WHERE "metadata_id" IS "<id from step 4>"
AND "id" >= <id from step 7>

e.g.:

UPDATE "statistics"
SET sum = sum + 725.243
WHERE "metadata_id" IS "279"
AND "id" >= 1382082

After you run this you should get a terse result like "N rows updated".

At this point you can refresh your web browser, and the negative spike should be gone.
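The manual procedure above can also be scripted. Below is a minimal sketch using Python's sqlite3, demonstrated on a tiny in-memory mock of the two tables (the real recorder schema has many more columns; the sensor name, ids, and sums here are illustrative). Pointing the connection at a backup copy of your real database should behave the same way, but do back up first:

```python
import sqlite3

# Demo on an in-memory mock; point `con` at a *backup copy* of your real
# home-assistant_v2.db to use it for real.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE statistics_meta (id INTEGER PRIMARY KEY, statistic_id TEXT)")
cur.execute("CREATE TABLE statistics (id INTEGER PRIMARY KEY, metadata_id INTEGER, start_ts REAL, sum REAL)")
cur.execute("INSERT INTO statistics_meta VALUES (279, 'sensor.gasverbrauch')")
# Sums climb to 725.243, then are wrongly reset (the corruption described above).
cur.executemany(
    "INSERT INTO statistics VALUES (?, ?, ?, ?)",
    [(1, 279, 100.0, 720.0), (2, 279, 200.0, 725.243), (3, 279, 300.0, 1.877), (4, 279, 400.0, 2.5)],
)

# Steps 3-4: find the metadata id of the sensor.
meta_id = cur.execute(
    "SELECT id FROM statistics_meta WHERE statistic_id = ?", ("sensor.gasverbrauch",)
).fetchone()[0]

# Steps 6-7: walk the rows in time order; note the id of the first row where
# the sum drops, and the sum just before the drop.
prev_sum, drop_id, offset = None, None, None
for row_id, row_sum in cur.execute(
    "SELECT id, sum FROM statistics WHERE metadata_id = ? ORDER BY start_ts", (meta_id,)
).fetchall():
    if prev_sum is not None and row_sum < prev_sum:
        drop_id, offset = row_id, prev_sum
        break
    prev_sum = row_sum

# Step 8: shift every row from the drop onward back up by the lost sum.
cur.execute(
    "UPDATE statistics SET sum = sum + ? WHERE metadata_id = ? AND id >= ?",
    (offset, meta_id, drop_id),
)
con.commit()
fixed = [s for (s,) in cur.execute("SELECT sum FROM statistics ORDER BY start_ts")]
print(fixed)  # monotonically increasing again
```

The parameterized queries mirror steps 3, 6 and 8 one-to-one, so you can compare each against the SQL above before trusting the script with real data.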

@rrozema commented Jul 1, 2023

To find the row that has the drop (step 7 above), it is better to use a query like the one below, which finds the row(s) where the value of sum is smaller than on the row before it:

select t.*, t.sum - t.sum_previous as difference
from (select s.*,
             lag(s.sum) over (partition by s.metadata_id order by s.start_ts) as sum_previous
      from statistics s) t
where t.metadata_id = <id> and t.sum < t.sum_previous
order by t.start_ts desc;

@swa72 (Author) commented Jul 1, 2023

Ok, many thanks to @karwosts and @rrozema, I fixed it, although my setup was a bit different.

I use mariadb and phpMyAdmin and can confirm that the above steps work here as well.

My steps (see #17010 (comment)) were ...

SELECT * FROM `statistics_meta` WHERE `statistic_id`="sensor.gasverbrauch"
Note the form of the quotes. The result in my case was 117.

For steps 6 and 7:
Use https://www.epochconverter.com/ to get the correct epoch time for the > comparison (this narrows the table down to the faulty rows):
SELECT * FROM `statistics` WHERE `metadata_id`="117" AND `start_ts` > 1686700800
The 1686700800 is the day/time where the trouble began.
Or use @rrozema's approach to find the id field of the first row containing the drop.

The last step in my case was

UPDATE `statistics`
SET sum = sum + 5816.840000004125
WHERE `metadata_id`="117" 
AND 
`id`>=2992418
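The epoch cutoff for the query above can also be computed locally instead of via epochconverter.com, since start_ts holds Unix epoch seconds in UTC. A quick Python cross-check of the 1686700800 value used here:

```python
from datetime import datetime, timezone

# start_ts stores Unix epoch seconds in UTC; 1686700800 from the query above
# corresponds to 2023-06-14 00:00 UTC.
cutoff = int(datetime(2023, 6, 14, 0, 0, tzinfo=timezone.utc).timestamp())
print(cutoff)  # 1686700800
```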

@swa72 (Author) commented Jul 1, 2023

I believe the underlying bug has been fixed, but you're left with a corrupted database.

Do you happen to have a link to the bug? As soon as the sensor starts sending data again, I again receive a huge negative reading and can start all over again ;-)

@karwosts (Contributor) commented Jul 1, 2023

Actually, now I'm not so sure it has been fixed. I was thinking of an issue specifically related to Riemann sums being reset when they shouldn't be (I don't have an issue link handy), but now I'm wondering if this could happen to any sensor. I'm not sure yet what's triggering it.

@rrozema commented Jul 1, 2023

The following query can give you some more insight into what exactly the contents of the statistics and statistics_short_term tables are. It lists all data points recorded on a particular day for one particular statistic, adding to each row the previous row's state and sum, plus the difference between the row and the previous one.

select *,
       strftime('%Y-%m-%d %H:%M:%f', start_ts, 'unixepoch') as start_dt,
       lag(state, 1) over (partition by metadata_id order by start_ts) as state_previous,
       state - lag(state, 1) over (partition by metadata_id order by start_ts) as state_difference,
       lag(sum, 1) over (partition by metadata_id order by start_ts) as sum_previous,
       sum - lag(sum, 1) over (partition by metadata_id order by start_ts) as sum_difference
from main.statistics
where metadata_id = 118
  and start_ts >= (julianday('2023-07-01T00:00:00.000') - 2440587.5)*86400.0
  and start_ts < (julianday('2023-07-02T00:00:00.000') - 2440587.5)*86400.0
order by start_ts;
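The (julianday(...) - 2440587.5)*86400.0 expression in this query converts an ISO date string to Unix epoch seconds, 2440587.5 being the Julian Day number of 1970-01-01T00:00 UTC. A small Python sketch of the same conversion, useful for sanity-checking the boundaries you plug in:

```python
from datetime import datetime, timezone

# Equivalent of SQLite's (julianday('2023-07-01T00:00:00.000') - 2440587.5)*86400.0:
# 2440587.5 is the Julian Day number of the Unix epoch.
def to_epoch(iso: str) -> float:
    return datetime.fromisoformat(iso).replace(tzinfo=timezone.utc).timestamp()

print(to_epoch("2023-07-01T00:00:00"))  # 1688169600.0
```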

I have seen examples where a device was temporarily unavailable. Depending on the timing, I think this can sometimes make the recorder behave strangely: it seems to sometimes reset the running total to 0, and then when the device comes back available and again reports the previous value, the recorder can record a very large difference. I think a scenario like this can result in the recorder registering a difference in a single step that is as large as the difference between the reset value (0 most of the time) and the current value of the continuously increasing meter. I have reported an issue like this for the HACS sankey-card (a very nice, but so far unofficial energy card); in that situation I could explain exactly what happened. I have however not yet found a clear example where I could catch the official energy cards doing something similar. But I too see some unexplainable spikes, plus negative growth values sometimes (the negative growth seems like rounding errors to me, as they are usually very small values).
A variation on the same query, this time reading statistics_short_term and looking up the metadata id by statistic_id, listing only the data points where the sum changed:

select strftime('%Y-%m-%d %H:%M:%f', start_ts, 'unixepoch') as start_dt, *
from (select *,
             lag(state, 1) over (partition by metadata_id order by start_ts) as state_previous,
             state - lag(state, 1) over (partition by metadata_id order by start_ts) as state_difference,
             lag(sum, 1) over (partition by metadata_id order by start_ts) as sum_previous,
             sum - lag(sum, 1) over (partition by metadata_id order by start_ts) as sum_difference
      from main.statistics_short_term
      where metadata_id = (select id from statistics_meta where statistic_id = 'sensor.trap_1_wall_switch_electric_consumption_kwh_2')
        and start_ts >= (julianday('2023-07-01T00:00:00.000') - 2440587.5)*86400.0
        and start_ts < (julianday('2023-07-02T00:00:00.000') - 2440587.5)*86400.0
     ) t
where [sum] <> [sum_previous]
order by start_ts;

So, by no means do I claim to explain the behavior you observed, but I thought I'd give you the queries so you can have a better look at the underlying data that the recorder produces and the energy cards read.

In fact, that last query just gave me a rather strange result:

start_dt|id|created|created_ts|metadata_id|start|start_ts|mean|min|max|last_reset|last_reset_ts|state|sum|state_previous|state_difference|sum_previous|sum_difference
2023-07-01 12:10:00.000|18580804||1688213715.69067|188||1688213400.0||||||8.58|6.76|3.44|5.14|1.62|5.14
2023-07-01 14:10:00.000|18606320||1688220913.7511|188||1688220600.0||||||3.44|10.2|8.58|-5.14|6.76|3.44

The device apparently raised the state at 12:10 UTC from 3.44 to 8.58, then at 14:10 UTC the state changed back to the original value, from 8.58 to 3.44. The recorder however responds strangely to this change: at 12:10 UTC the sum is raised from 1.62 to 6.76, then at 14:10 UTC the sum changes from 6.76 to 10.2. This device is an FGS223 by Fibargroup (firmware 3.2), not a cheap Neo Coolcam device. It is not supposed to report such state changes, but the fact that the recorder responds by adding 5.14 + 3.44 to the running total (sum) does not seem correct to me either. At least it results in an unrealistically high usage value of 8.58 kWh in 2 hours for 3 LED lights:
image
This change in the state seems to have triggered the recorder to add the high values:
image
image

I think I've seen 3 different incorrect behaviors so far:
1 - a high value reported in the energy card(s) that can not be seen (and thus not fixed) in the statistics developer tool,
2 - a high value reported in the energy card(s) that can be seen and fixed in the statistics developer tool, and
3 - negative usage values that may or may not be seen in the statistics tool.

I have a feeling that 1 and 2 are actually the same issue, and that it depends on timing whether or not the change falls into a period that can be edited in the statistics tool. And 3 feels like a rounding error to me: when looking at the values using sqlite3, they are often very small negative numbers close to rounding points. Also, the statistics tool has a rounding issue of its own: it will not write a change to the database if the change is less than the 2 decimals the tool allows you to enter, i.e. it won't let you correct an error that is > -0.005 and < 0.005, because it decides whether or not to write the changes back to the database based on whether the (rounded) presentation value changed, not on the actual value read from the database.
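The rounding limitation described in that last paragraph can be sketched in a few lines. This is a hypothetical mirror of the described compare-on-rounded-value behavior, not the tool's actual code:

```python
# The tool reportedly decides whether to save by comparing 2-decimal display
# values, so a correction smaller than 0.005 looks like "no change" and is
# silently dropped. Illustrative values only.
def would_save(old: float, new: float) -> bool:
    # hypothetical mirror of the dialog's check: compare rounded presentation values
    return round(old, 2) != round(new, 2)

print(would_save(1.2341, 1.2300))  # False: a 0.0041 error cannot be fixed
print(would_save(1.2341, 1.22))    # True: a larger change does get written
```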

@swa72 (Author) commented Jul 8, 2023

Ok, having run the SQL queries once did not do the job :-/. If the sensor reports data again after a while, the energy dashboard again shows a huge negative value. Given that I've upgraded to 2023.7 in the meantime, I doubt that any bug has been fixed.

I cannot run the query in #17010 (comment) as it reports errors. "main.statistics" does not exist; I fixed that by just using "statistics". The next error was "Function homeassistant.strftime does not exist", and here ends my SQL knowledge ;-)

Do I have to fix anything in statistics_short_term?

@rrozema commented Jul 8, 2023

@swa72: I would not recommend making too many changes in the statistics table based on the above info. The information from me (and probably that from karwosts as well) is derived mostly from looking at the contents of the database tables, not from intimate knowledge of the workings of the recorder component; i.e. there may be other functionalities involved that were not known to me (or karwosts) at the time of writing. At the very least, do make proper backups of your tables if you make any changes!

Myself, I use the queries to find where exactly the 'bad data' is, then always first try to use the statistics developer tool. Only if there is really no other option do I make updates to the tables using update statements, and even then only if the errors in my graphs are too bad to ignore... One trick I found to work in some situations where the tool doesn't let me edit the value is to adjust the statistic twice: first adjust it to some fictive value, for example 123, then re-edit it to the value I actually want it to have, for example 0. This sometimes fixes the rounding errors the tool seems to suffer from.

B.t.w., do you happen to have a database other than SQLite configured for your Home Assistant instance? That is the only thing I can think of that explains why you had to remove the "main." from the query and do not have the strftime() function.

From my observations, the statistics_short_term table seems to be used only to create the graphs in the 'history' views. This data gets purged after x days (as configured in your settings), so any oddities in there will be gone in x days anyway. The energy cards, however, seem to read from the statistics table only, so I think you would not need to update statistics_short_term to fix any issues in the energy cards.

@swa72 (Author) commented Jul 9, 2023

@rrozema Thanks for the info, and you were right in assuming I use another database; I switched to MariaDB some time ago.
In general I do not mind rounding errors and the like, but if you have a reading of -6.000kW on one day, the scale of the graph gets messed up (for the month and the year). That makes the entire thing useless ;-/

@roumano commented Jul 21, 2023

Hi,
I got the same kind of issue on my energy dashboard, due to nearly 2 days of an unavailable sensor. When the sensor came back, I got a huge positive spike on 3 sensors.

The "new" sensor value is exactly the same as before the unavailability (I can see it in an MQTT browser), so I don't get any row where the value of sum is smaller than on the row before it.

  • I can't use Developer | Statistics to remove any bad reading, as the bad values are not in there.

  • I tried the trick of first adjusting the statistic to some fictive value like 123 and then re-editing it to the value I actually want, like 0, but it's not working.

  • I've updated the SQLite database to remove the spike, and that works,

    • but a new spike (with the same value) appears at the date of the modification.

    • The first spike:
      7_july

    • After removing the spike of 6 July, a new spike appeared when the modification was made:
      21_july

  • Any idea why removing the previous spike creates a new spike with the same value as the old one?

PS: I used this SQLite command to modify it (my sensor value hasn't changed since May, as it's only used in the winter):

UPDATE `statistics`
SET sum = 244.383
WHERE `metadata_id`="565"
AND `sum`=488.764

On another sensor, I used this kind of SQLite command:

UPDATE `statistics`
SET sum = sum-10.317
WHERE `metadata_id`="567"
AND `id`>=2176544

@rrozema commented Jul 25, 2023

  • any idea why removing the previous spike creates a new spike with the same value as the old one?

The graph shows the difference between the [sum] values of one entry and its 'previous' entry. When you delete one entry, the next available entry before it becomes the new 'previous' entry. Because this new 'previous' entry has (almost) the same data as the one you just deleted, you get exactly the same spike, only over a somewhat longer time frame.
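This effect can be demonstrated with made-up cumulative sums: the dashboard bars are consecutive differences, so deleting the row that holds the jump only widens the spike instead of removing it:

```python
# Made-up cumulative sums; one corrupt jump from 102 to 600.
def bars(sums):
    # each energy-dashboard bar is the difference between consecutive sums
    return [b - a for a, b in zip(sums, sums[1:])]

sums = [100, 102, 600, 602, 604]
print(bars(sums))   # [2, 498, 2, 2] -> a spike of 498

del sums[2]         # delete the row holding 600
print(bars(sums))   # [2, 500, 2]   -> the spike survives, just over a wider span
```

This is why the fixes above add an offset to every row from the drop onward rather than deleting rows.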

@lolmachine4 commented Aug 6, 2023

I got the exact same problem. One of my plugs reported a negative value after being unplugged for several weeks and then plugged back in, which now completely breaks my energy usage, as the device in question "generated" energy when it came back. I've used the SQL magic in this thread to find the value of the energy cost sensor and set it to the last value that makes sense, but every time the hour rolls over and the dashboard gets refreshed, I get another 0 entry (the device is online but powering nothing), which, again, breaks the whole dashboard. How can this be fixed?

@karwosts (Contributor) commented Aug 6, 2023

@lolmachine4 - Can you expand your comment with some thorough datapoints? I'd be curious exactly what data is in your SQL table each hour, and exactly what you mean by a "0 entry".

@lolmachine4

@karwosts I will certainly try! Please bear in mind: I'm an absolute novice when it comes to SQL and the like. All I can do is watch and observe.

So, this is what the dashboard looks like right now.
grafik
As you can see, the device has used 0.08 kWh today (OK), but it still says this would have generated ~8€ in electricity costs, which is obviously bogus.
Following the trail of the cost sensor (not the device id itself; why is explained further down) for this device (ID=43) gets me this:
SELECT * FROM `statistics` WHERE `metadata_id`="43" AND `start_ts` > 1688212951
grafik
The value 7.6XX is, at least by my assumption, the actual, correct, real value the device should have stored in the DB before it was unplugged.
When a new hour starts, it looks like this. Ignore the "-0.2X" value; I had the device plugged in and in use this morning. Usually, if the device is online but not in use, this value would be "0.0".
grafik
At this point I had deleted the old device via the statistics menu; when open, HA noticed I had deleted and re-added the device and assigned it a new ID (the device_id, NOT the cost_id) of 35, which does not exist anymore. But the actual cost sensor stayed the same.
Anyhow. If I execute the following query, the graph is right again, but as soon as the next hour comes, it goes wrong again because it records another "0.0". Query:
UPDATE `statistics` SET sum = 7.659081833333422 WHERE `metadata_id`="43" AND `id` >= 245561
This results in:
grafik
and in a proper, correct dashboard:
grafik
I would expect HA to simply re-write the same entry (7.6X) again if the device is online but nothing is plugged into it, but from my basic understanding, this is not happening. It has, even before deleting the device and re-adding it, always started from "0.0".

I hope this helps!

@karwosts (Contributor) commented Aug 6, 2023

For now I suggest we stop trying to modify the SQL tables directly; I think the same thing can be achieved using the "fix a statistic" dialog in developer tools.

Let's let everything rest for a few hours, then take a picture of your dashboard energy graph that shows the erroneous datapoint. We can then see if there's an update we can make in the statistics devtools that will remove it.

@lolmachine4 commented Aug 6, 2023
I've tried that before messing around with the database directly, but I couldn't find any entry related to this.
grafik
As you can see, there are zero entries for this cost sensor, nor for the energy sensor, so I have no idea where to look.
grafik
This view has me confused anyway, as I know I used this plug on 11.07., and there are in fact entries there.
grafik
So, how can I find out where this spike is coming from?
I've since tried a restore from the backup menu to last Friday, when this issue did not exist yet, but the entry STILL reappeared. I'm completely puzzled now. I figured I'd take a loss of 2 days' worth of data over endless debugging while not really knowing what I'm doing, but here we are..
grafik
At least I got the original device ID (35) back, so if someone were to figure out what to edit here, I'm back to where I started.
So, I checked the DB again with the following command:
SELECT * FROM `statistics` WHERE `metadata_id`="35" AND `start_ts` > 1685651837
The only strange thing is the sudden drop here:
grafik
This continues to be 0.0 in the "state" column, even though this column is never 0.0 in any of the 1000 entries displayed.
And this is how it looks now.
The last proper entry is from 15.07 at 10:00, which could very likely be the time I unplugged the device. The next, wrong entry is from now.
grafik
grafik

@swa72 (Author) commented Aug 7, 2023

Having run the DB fix a couple of times, I can sadly confirm that the error re-appears:

image

It is a gas sensor that doesn't do much these summer days. But once some gas gets consumed, HA records it incorrectly (6.3499), resulting in a huge negative spike from the previous sum.

I should add that the entire thing worked flawlessly for more than a year, including reboots, updates and such.

Checking Developer | Statistics (e.g. removing the 0.03 reading) doesn't help either.

I would like to try the following:

  • turn off the gas sensor device
  • fix all sum entries (zero them first, then run something like sum = previous sum + state)
  • reboot HA (flushing any caches)
  • reboot gas sensor device

and see if that works.

Sadly, my SQL knowledge is not sufficient to create that query.
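The rebuild described in the list above can be sketched in plain Python with illustrative values (easier to reason about than SQL before committing to an UPDATE). The reset handling here treats a drop in state as a meter reset, which is what state_class: total_increasing is documented to do; the state values are hypothetical:

```python
# Hypothetical hourly state readings of a total_increasing meter; the 0.0 is
# a reset like the one described above.
def rebuild_sums(states):
    total, prev, sums = 0.0, None, []
    for s in states:
        if prev is not None and s >= prev:
            total += s - prev      # normal consumption since the previous row
        # a drop in state is treated as a meter reset: nothing is added
        prev = s
        sums.append(total)
    return sums

print(rebuild_sums([10.0, 10.5, 11.0, 0.0, 0.3]))  # [0.0, 0.5, 1.0, 1.0, 1.3]
```

Translating this loop into per-row UPDATE statements (or a window-function query) would be the SQL equivalent of "sum = previous sum + state delta", but back up the table before attempting it.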

@lolmachine4

So, everything has settled as mentioned, but the faulty datapoint does not appear in the statistics tab, no matter what.
According to HA, the faulty datapoint was recorded on the 6th of August. That's what the energy dashboard says.
grafik
So, we're now going to have a look at the kWh graph for that sensor, which clearly shows there have been 0 datapoints recorded, which is probably the reason I can't fix anything in the statistics tab in the first place.
grafik
As seen:
grafik
grafik
grafik
This continues until about 11 pm, where a flat line indicating zero is shown.

@karwosts (Contributor) commented Aug 7, 2023

@lolmachine4 - so if you open the "fix statistic" dialog at ~21:40 for August 6, it shows a 0 there?

And if you look at the energy dashboard, in the 21-22 hour for August 6, does it show a bar of +30 kWh?

If that's the case, you can try "fixing" the 0 by changing it to -30, and I think that may remove the erroneous bar.

Note that we're actually straying pretty far from a frontend issue here, since all of this is actually wrong in the backend database. I'll try to finish getting a bandaid on some of these issues, but this probably should be closed and moved to a core issue.

@lolmachine4

@karwosts
Here's a screenshot of the energy dashboard bar as requested:
grafik
And there's indeed nothing to "fix" in the statistics tab:
grafik

@karwosts (Contributor) commented Aug 7, 2023

Try changing the datapoint at 22:00 from 0 to 30.05. My guess is that will make the bad point go away.

@lolmachine4 commented Aug 7, 2023

I was just typing this as you replied. This is the jankiest stuff I've ever done, but for now, it works. I've updated both the energy entry and the cost entry; updating the energy entry alone did nothing:
grafik
grafik
grafik
grafik
If this stays when the next hour rolls around, I'm happy - quick and dirty, even if it wasn't quick. Thanks a bunch!

@karwosts (Contributor)

For now I'm going to close this, as all the issues reported here are actually core issues with statistics, which is not really something that can be fixed in the frontend.

If you see future cases of long term statistics values getting reset unintentionally, I would open a core issue for long term statistics.

@karwosts closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 13, 2023
@swa72 (Author) commented Aug 14, 2023

I noticed by accident that my huge negative spike of -5800 m³ suddenly showed up in Developer | Statistics. I zeroed that value and, fingers crossed, everything looks good. Odd.

@taitadam-eng

(quoting karwosts' step-by-step fix instructions from Jun 30 above, in full)

This really helped me, thank you.

@roumano commented Sep 12, 2023

(quoting karwosts' step-by-step fix instructions and taitadam-eng's "This really helped me" reply above)

Yes, it's working for a negative spike, but not in the case of the bug (see my comment in #17010 (comment)).

6 participants