Iterative autotuning of basals and ratios #261

Closed
scottleibrand opened this Issue Nov 26, 2016 · 41 comments

Contributor

scottleibrand commented Nov 26, 2016

One possibly easier way to implement #99 in oref0 would be to take an incremental approach to iteratively adjusting basal schedules and ratios.

To keep track of required adjustments to the pump's programmed basal schedule, we could create a long-lived autotune.json file that contains an entry for each hour of the day for which we've identified an adjustment. For ISF and CSF, we could similarly track any required adjustments in that same file.

For time periods without COB, and where basal insulin activity dominates, we could look at net deviations for each hour, and calculate how much less or more basal insulin would have been required to eliminate the deviations. We could then add an adjustment factor/multiplier to autotune.json to adjust basal by a fraction of that, perhaps 10%. That could be split across a few hours' basals: perhaps 5% of the required adjustment would go in the 2h-prior slot, and 2.5% each could go in the 1h-prior and 3h-prior ones. The other 90% of the adjustment would not be made at all unless subsequent days' outcomes justified further adjustments, so as to dampen oscillations and let the system react gradually to observed insulin needs.

After mealtimes, we could observe whether AMA's COB estimate drops to zero before or after post-prandial deviations drop to zero. If the COB estimate hits zero first, that indicates that the (perceived carb) sensitivity factor is too low. As with basals, we could adjust the CSF ratio (calculated from the pump's ISF and IC ratio) multiplier in autotune.json by a fraction (perhaps 10%) of the adjustment that would've been required to get COB to decay to zero at the time we saw deviations drop to ~0. For meals where deviations drop to zero while COB is still positive, we'd want to subtract out any net positive deviations after the initial negative deviation, so we can account for any carbs whose absorption was delayed by post-meal activity etc.

For any post-correction periods and post-meal periods with significant insulin activity after COB and deviations have dropped to zero, we can calculate deviations for the period where bolus and high-temp insulin dominate basal insulin, and calculate what adjustments to ISF would have been necessary to bring deviations for that period to zero. As with basals and CSF, we can gradually adjust the ISF multiplier by a fraction (perhaps 10%) of the observed deviations.
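The damped-adjustment idea above can be sketched in a few lines of JavaScript. This is a hypothetical helper (not oref0 code) showing just the basal case: apply only 10% of the change implied by one hour's deviations, split 5% / 2.5% / 2.5% across the prior hours.

```javascript
// Hypothetical sketch of the damped basal adjustment described above.
// basals: array of 24 hourly rates (U/hr)
// hour: the hour whose deviations were analyzed
// fullAdjustment: the U/hr change that would have zeroed those deviations
function dampedBasalAdjust(basals, hour, fullAdjustment) {
  var fractions = { 1: 0.025, 2: 0.05, 3: 0.025 }; // hours prior -> fraction applied
  var adjusted = basals.slice(); // don't mutate the input schedule
  for (var prior in fractions) {
    var idx = (hour - prior + 24) % 24;
    adjusted[idx] += fullAdjustment * fractions[prior];
  }
  return adjusted;
}
```

For example, if the 08:00 deviations imply +1 U/hr, the 06:00 rate rises by 0.05 U/hr and the 05:00 and 07:00 rates by 0.025 U/hr each; the remaining 90% only gets applied if later days' data keeps pointing the same direction.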

Contributor

scottleibrand commented Dec 4, 2016

The existing lib/determine-basal/cob-autosens.js doesn't have all the info it needs to determine whether a meal is over yet: it just calculates carbsAbsorbed, and then lib/meal/total.js uses that to actually calculate COB.

It would probably make sense to create a new high-level autotune-prep.js, which would:

  • Iterate over all carb treatments in a manner similar to what lib/meal/total.js does, except instead of identifying carbsAbsorbed, identify the time at which COB hits zero.
  • Store the glucose data from the carb treatment time until COB and deviations hit zero in an autotune/glucose.json file under a top-level csf stanza.
  • Go through the remaining time periods and divide them into periods where scheduled basal insulin activity dominates. This would be determined by calculating the BG impact of scheduled basal insulin (for example 1U/hr * 48 mg/dL/U ISF = 48 mg/dL/hr = 4 mg/dL/5m), and comparing that to BGI from bolus and net basal insulin activity.
  • Store the glucose data from the scheduled-basal-activity-dominated time periods into the autotune/glucose.json file under a top-level basal stanza.
  • Store the glucose data from the post-correction periods into the autotune/glucose.json file under a top-level isf stanza.

Then, having created the glucose data file with a top-level stanza for each category of interest, we could have an autotune.js (or separate autotune/csf.js, autotune/basal.js, and autotune/isf.js) that would parse their respective data sets and use deviations from the gap-free data intervals to perform the actual CSF, basal, and ISF adjustment calculations.

With this method, we could run the autotune process once an hour, parse only the previous hour's data, and update autotune.json accordingly.
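For the basal-dominance check in the steps above, the per-5-minute BG impact of scheduled basal can be computed directly. A sketch with assumed names (not the actual lib code):

```javascript
// BG impact of the scheduled basal rate, in mg/dL per 5 minutes.
// rate is in U/hr, sens (ISF) in mg/dL per U; there are 12 five-minute
// intervals per hour, so 1 U/hr * 48 mg/dL/U = 48 mg/dL/hr = 4 mg/dL/5m.
function basalBGI(rate, sens) {
  return rate * sens / 12;
}
```

A 5-minute datum would then land in the basal stanza when the BGI from boluses and net temp basals is small compared to this value.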

Contributor

scottleibrand commented Dec 20, 2016

https://github.com/openaps/oref0/blob/autotune/lib/autotune-prep/total.js now has all of the preparation steps working, and is spitting out json with top-level csf_glucose_data, isf_glucose_data, and basal_glucose_data arrays.

I decided to allocate data to ISF tuning if the BGI is more than about 1/4 of the "basal BGI" (for now), and use the rest for tuning basals. Also, when avgDelta is positive, it doesn't make sense to use that for calculating ISF, so that data goes toward basals as well.
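A minimal sketch of that allocation rule (hypothetical function names; the real logic lives in lib/autotune-prep/total.js and also handles the CSF case):

```javascript
// Decide whether a non-meal 5-minute datum should tune ISF or basals.
// bgi: BG impact of bolus + net temp basal insulin (mg/dL/5m, negative when
// insulin activity is pushing BG down); basalBGI: impact of scheduled basal;
// avgDelta: observed BG rate of change (mg/dL/5m).
function allocateDatum(bgi, basalBGI, avgDelta) {
  // rising BG can't be explained by ISF (insulin pushes BG down), so it
  // counts toward basals even when insulin activity dominates
  if (avgDelta > 0) { return "basal"; }
  // ISF only when insulin activity is meaningfully larger than basal's impact
  if (bgi < 0 && -bgi > basalBGI / 4) { return "isf"; }
  return "basal";
}
```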

Contributor

scottleibrand commented Dec 20, 2016

I also added in the avgDelta and deviation values to the json dump, which I think should mean that all of the recursive stuff can be contained to autotune-prep, and we can do the rest of the tuning as a single pass.

Contributor

scottleibrand commented Dec 24, 2016

oref0-autotune-prep.js and oref0-autotune.js are now working well, and spitting out reasonable-looking results (although I still have some questions as to whether the CSF estimate is accurate). The main problem at this point is that oref0-autotune-prep.js is highly recursive and takes an hour or two to run on a full day's worth of data (at 100% of one CPU). Probably need to figure out how to make the COB calculation more efficient before we can really test out the algorithm and make sure it converges on reasonable values for ISF and CSF. I think the way to do that would be to find and move the relevant carb absorption code out of lib/determine-basal/cob-autosens.js and into oref0-autotune-prep.js, so it can be run to calculate just the absorption since the last BG data point and update COB accordingly, rather than recalculating carb absorption from scratch on each one.
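The proposed incremental approach might look something like this. This is a sketch under assumed names and a simplified absorption model, not the existing cob-autosens.js code:

```javascript
// Advance COB by one 5-minute step instead of recomputing from scratch.
// cob: current carbs on board (g); deviation: observed BG deviation over the
// last 5 minutes (mg/dL); csf: carb sensitivity factor (mg/dL per g);
// min5mCarbImpact: minimum assumed carb impact (mg/dL/5m), e.g. 3.
function updateCOB(cob, deviation, csf, min5mCarbImpact) {
  if (cob <= 0) { return 0; }
  var impact = Math.max(deviation, min5mCarbImpact); // assume at least minimal absorption
  var absorbed = impact / csf;                       // grams absorbed this interval
  return Math.max(cob - absorbed, 0);
}
```

Running this once per new BG data point keeps each iteration O(1), versus recomputing absorption across the whole meal history at every point.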

Contributor

scottleibrand commented Dec 24, 2016

Oh, and they also work great on data downloaded from the NS treatments.json and entries.json API endpoints. The only thing you need from openaps is an initial profile.json. One additional enhancement might be to allow for providing a NS url and have the script download the needed treatments and entries data directly from NS...

Contributor

scottleibrand commented Dec 25, 2016

Re-did the CSF algorithm to be much simpler and 50x more efficient. :-) I'm now parsing a day's worth of data in 2-3 minutes, so I can get started on testing whether it converges on a reasonable value.

Contributor

scottleibrand commented Dec 26, 2016

After some bug fixes, I ran the script overnight starting with some too-aggressive and too-sensitive settings, and confirmed that the algorithm eventually converged (after a handful of runs on the same 3 weeks' worth of data) on two nearly identical and reasonable sets of estimates for ISF, CSF, IC ratio, and basals.

Here's how I ran the test:

Collect data:

curl "$NIGHTSCOUT_HOST/api/v1/treatments.json?find\[created_at\]\[\$gte\]=`date -d 2016-12-01 -Iminutes`" > ns-treatments.json

for i in `seq 2 25`; do
  j=$((i+1))
  curl "$NIGHTSCOUT_HOST/api/v1/entries/sgv.json?find\[date\]\[\$gte\]=`(date -d 2016-12-$i +%s | tr -d '\n'; echo 000)`&find\[date\]\[\$lte\]=`(date -d 2016-12-$j +%s | tr -d '\n'; echo 000)`&count=1000" > ~/ns-entries.2016-12-$i.json
done

Set up testprofile.json. My too-sensitive one looked like:

{
  "max_iob": 4,
  "type": "current",
  "max_daily_safety_multiplier": 4,
  "current_basal_safety_multiplier": 4,
  "autosens_max": 1.2,
  "autosens_min": 0.7,
  "autosens_adjust_targets": true,
  "override_high_target_with_low": false,
  "bolussnooze_dia_divisor": 2,
  "min_5m_carbimpact": 3,
  "carbratio_adjustmentratio": 1,
  "dia": 3,
  "model": {},
  "current_basal": 1,
  "basalprofile": [
    {
      "i": 0,
      "start": "00:00:00",
      "rate": 0.1,
      "minutes": 0
    }
  ],
  "max_daily_basal": 0.1,
  "max_basal": 4,
  "min_bg": 100,
  "max_bg": 100,
  "sens": 100,
  "isfProfile": {
    "units": "mg/dL",
    "sensitivities": [
      {
        "i": 0,
        "start": "00:00:00",
        "sensitivity": 100,
        "offset": 0,
        "x": 0,
        "endOffset": 1440
      }
    ],
    "first": 1
  },
  "carb_ratio": 1000
}

Run test (overnight):

rm profile.[1-9].json
cp testprofile.json profile.json
for run in `seq 1 9`; do
  cp profile.json profile.$run.json
  for i in `seq 2 24`; do
    ~/src/oref0/bin/oref0-autotune-prep.js ns-treatments.json profile.json ns-entries.2016-12-$i.json > autotune.2016-12-$i.json
    ~/src/oref0/bin/oref0-autotune.js autotune.2016-12-$i.json profile.json > newprofile.2016-12-$i.json
    cp newprofile.2016-12-$i.json profile.json
  done
done

Display results (in another tab while test is running):

while (true); do
  (
    for type in csf carb_ratio isfProfile.sensitivities[0].sensitivity; do
      ( echo $type | awk -F \. '{print $1}'
        for i in `seq 1 9`; do cat profile.$i.json | json $type; done
        cat profile.json | json $type
      ) | while read line; do echo -n "$line "; done
      echo
    done
    for j in `seq 0 23`; do
      ( echo $j
        for i in `seq 1 9`; do cat profile.$i.json | json basalprofile[$j].rate; done
        cat profile.json | json basalprofile[$j].rate
      ) | while read line; do echo -n "$line "; done
      echo
    done
  ) 2>/dev/null | column -t
  date
done
Contributor

danamlewis commented Dec 28, 2016

Some more plain language to go with the code as it is today (I also made some changes today to rename and comment, which is in line with the below):

There are two key pieces: autotune-prep and autotune.js

1. Autotune-prep:

  • Autotune-prep takes three things initially: glucose data; treatments data; and starting profile (originally from pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then it categorizes each glucose value as attributable to CSF, ISF, or basals
  • To determine if a "datum" is attributable to CSF, it calculates COB until carbs are observed as absorbed and carb absorption has stopped (COB=0). If COB is 0 but all deviations since hitting COB=0 are positive, those deviations are attributed to CSF. Once deviations are negative after COB=0, subsequent data is attributed as ISF or basals.
  • To determine if it is attributable to ISF, autotune-prep looks at whether BGI is negative and its magnitude is greater than one quarter of basal BGI.
  • Otherwise (BGI is positive, meaning net insulin activity is negative, or BGI is smaller than 1/4 of basal BGI), the data is attributed to basals.
  • Exception: if something would be attributed to ISF but average delta is positive, it can't be ISF, because ISF reflects insulin activity pushing BG down; if BG is rising for no reason, we attribute that to basals being off. (A future TODO area of improvement is to detect non-entered carbs rather than solely attributing this to basals.)
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.

2. Autotune.js:

  • Autotune.js reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour-long increments. It calculates the total deviations for each hour and what change in basal would be required to bring those deviations to 0. It then applies 20% of that needed change to the three hours prior (because of insulin impact time). If increasing basal, it increases evenly across all three hours. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile (median) deviation for the entire day and determines how much ISF would need to change to bring that deviation to 0. It applies 20% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered. (TODO: simplify by calculating total carbs entered for day instead of allocating to individual meals)
  • Autotune.js applies a 20% limit on all 3 variables if you provide the existing pump profile, to prevent autotune from getting more than 20% off in either direction.
  • (FUTURE TODO: Instead of 20% hardcoded safety cap, use autosens min and max ratios.)
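The 20% damping and the 20% cap against the pump profile could be sketched as follows (hypothetical helper; autotune.js implements the real version):

```javascript
// Move a tuned value 20% of the way toward its fitted target, then clamp the
// result to within 20% of the pump's programmed value.
function limitTuned(pumpValue, currentValue, fittedValue) {
  var adjusted = currentValue + 0.2 * (fittedValue - currentValue);
  var lo = pumpValue * 0.8;
  var hi = pumpValue * 1.2;
  return Math.min(Math.max(adjusted, lo), hi);
}
```

Repeated runs therefore converge geometrically toward the fitted value until they hit the safety cap.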
Contributor

danamlewis commented Dec 28, 2016

A few food for thought items for later:

  • Running this as a one-off may achieve #99 for basal tuning.

  • Looking at the reports from iterative autotuning, you can start to see where COB=0 yet the effect is still being attributed to CSF, which means mealtime insulin was mismatched or there were undercounted carbs. It may be possible to generate a report pointing to where meal carbs may have been miscounted (perhaps with a margin of error to exclude slower absorption, etc.).

This may be the first tool to peel back the onion and start reliably understanding post-meal BGs to the granular level of basal vs csf vs uncounted/miscounted meal carbs.

Collaborator

sulkaharo commented Dec 28, 2016

Things that come to mind:

  • Autosens currently doesn't take profile changes into account, so the autosens calculation fails to produce correct results for 24 hours after a profile change, as it assumes the current profile was in effect before it was applied (at worst, the resulting autosens adjustment can be catastrophically wrong). If we want to guarantee correct results, this system should probably be able to pull historic profiles from the pump and use the correct profile data for any given moment being analyzed. Nightscout already supports profile lookup with multiple profiles, so porting that code over would probably not be very difficult, if we can implement a system to provide the data.

  • Autosens also doesn't have a mechanism to detect a bad cannula site. Given that the APS automatically high-temps when a site has gone bad, it's harder to notice the issue before it has potentially caused quite a few hours of data that should be excluded from the calculation. One way to analyze for this could be to compare ISF analysis results right before and after a site change event, and discard data from before the site change if the deviation between results exceeds a threshold. If we don't add something like this, how much will it affect the calculation?

  • How sensitive is the algorithm to missing carb data and/or carb events being misplaced in time? AFAIK the usage patterns for how people mark carbs vary hugely (some use the Wizard, some not, some mark carbs at the time of pre bolus, some when starting to eat, some after the carb amount has been confirmed post meal etc) so the carb information is probably inherently unreliable. If the system assumes a certain usage pattern, that should be documented or alternatively the system should support varying usages.

Contributor

PieterGit commented Dec 28, 2016

Looks promising...

I tried to do a run myself, but got errors:

Error: Cannot find module 'oref0/lib/autotune'
    at Function.Module._resolveFilename (module.js:325:15)
    at Function.Module._load (module.js:276:25)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/home/foobar/src/oref0/bin/oref0-autotune.js:23:16)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
module.js:327
    throw err;
    ^

Error: Cannot find module 'oref0/lib/autotune-prep'
    at Function.Module._resolveFilename (module.js:325:15)
    at Function.Module._load (module.js:276:25)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/home/foobar/src/oref0/bin/oref0-autotune-prep.js:23:16)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)

Is that caused by a missing file, or should I install the autotune differently?

Why do you calculate it on an hourly basis, and not (as the pump and Nightscout support) on a half-hour basis?

TC2013 commented Dec 28, 2016

Awesome idea and thanks for this!

  1. As it works now, it looks like the stored (basal, COB, and ISF) profile is an anchor and calculations will always stay fairly close to the stored profile. I think it would be great if there were an option for autotune to update the pump profile with the newly calculated rates. That way there is a "memory" of the adjustments being made, and hopefully that translates into a more finely tuned profile over time.

  2. I also wanted to echo the earlier support for 30 minute windows (instead of 1 hour).

I'm not sure if either of my suggestions would make a difference in practice, but I at least wanted to pass along my thoughts and support.


Contributor

scottleibrand commented Dec 28, 2016

@PieterGit you probably just need to npm run global-install there.

I did hourly basals initially because that is convenient to parse and calculate. Anything more frequent introduces more noise, as you're using less and less data to tune each (half) hour. And with DIA of at least 3h anyway, scheduling basals for less than an hour is unlikely to make much difference...

PieterGit added a commit to PieterGit/oref0 that referenced this issue Dec 28, 2016


Contributor

PieterGit commented Dec 28, 2016

Thanks. npm run global-install works. I updated Readme.md on my #282 pull request to improve docs. First run crashed:

Error: carb_ratio 2.685 out of bounds
/home/foobar/src/oref0/lib/autotune/index.js:66
        for (var i=0; i < basal_glucose.length; ++i) {
                                      ^
TypeError: Cannot read property 'length' of undefined
    at tuneAllTheThings (/home/pieterb/src/oref0/lib/autotune/index.js:66:40)

Investigated what happened.
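The TypeError above comes from iterating over a basal_glucose array that was never populated. A minimal illustrative guard (a hypothetical function, not the actual oref0 fix) would fall back to an empty array before looping:

```javascript
// Hypothetical guard (illustrative only): if the prep step produced no
// basal-categorized glucose data, use an empty array instead of crashing
// on `.length` of undefined.
function collectBasalDeviations(basalGlucoseData) {
    var basal_glucose = basalGlucoseData || [];
    var deviations = [];
    for (var i = 0; i < basal_glucose.length; ++i) {
        deviations.push(basal_glucose[i].deviation);
    }
    return deviations;
}
```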


Contributor

danamlewis commented Dec 28, 2016

@PieterGit that looks like it's pulling in a bad autotune file, then running into a safety cap. Are you running with the pump profile being pulled in now? That's preventing a lot of the runaway numbers @jyaw was seeing before we ran it with a pump profile.


Contributor

danamlewis commented Dec 28, 2016

Pump profile + 20% limits are currently doing well on both my data (regular, plus a set each of super sensitive and super resistant parameters) and @jyaw's data and preventing any crazy extremes, converging to reasonable values.
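The capping being described amounts to clamping each tuned value to within a configured ratio of the pump-programmed value. A rough sketch (hypothetical function name; the 0.8/1.2 ratios correspond to the 20% limits, which the commit history below later expresses as autosens_min/autosens_max):

```javascript
// Hypothetical sketch of the safety cap: clamp an autotuned basal rate to
// within autosensMin..autosensMax (e.g. 0.8..1.2, i.e. +/-20%) of the
// pump-programmed rate.
function capTunedRate(tunedRate, pumpRate, autosensMin, autosensMax) {
    var maxRate = pumpRate * autosensMax;
    var minRate = pumpRate * autosensMin;
    return Math.min(maxRate, Math.max(minRate, tunedRate));
}
```

For example, capTunedRate(1.5, 1.0, 0.8, 1.2) returns 1.2, so a runaway tuned value can never drift more than 20% from the human-set baseline.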


Contributor

danamlewis commented Dec 28, 2016

@TC2013 we want some safety caps, which is why it currently stores a different profile rather than editing the pump profile - then a 20% cap will prevent it from constantly iterating up up and away and becoming very different from what the human originally set for manual mode. Open to brainstorming whether there are other ways to achieve this - but generally thinking if the values and basals start to be more than 20% off for looping purposes, the human needs to decide whether to adjust the baseline pump-stored values, in order for it to tune further beyond that.


Contributor

danamlewis commented Dec 28, 2016

For OS X, the date command should be in the format date -d 2016-12-$i +%s. (If you skip this fix, the curl will give you usage errors.)


Contributor

jyaw commented Dec 29, 2016

Hey, did the input/output change for bin/oref0-autotune.js? There are now autotune/glucose.json and autotune/autotune.json... aren't those pretty much the same? Would I pass the previous run's autotune/glucose.json as autotune/autotune.json? With only 2 args it's complaining about my carb_ratio being out of bounds.


Contributor

jyaw commented Dec 29, 2016

disregard my previous comment.... not thinking straight. Good now :)

scottleibrand added a commit that referenced this issue Dec 29, 2016

Dexusb cgm loop (#282)
* Merge remote-tracking branch 'refs/remotes/origin/oref0-setup' into openaps/oref0-setup

merge

* make oref0-find-ti work on edison explorer board out of the box, required for ww pump

rename files, add to package.json

* fix json tabs and comma's

* use oref0-find-ti (because it also works on explorer board)

* fix dexusb and oref0-find-ti issue

* remove stdin by default

* create .profile  and dexusb fixes

* fix oref0 install docs

as suggested by @scottleibrand on issue
#261 (comment)

* apply changes because of scott's review

fix things a scott suggested on 28-12-2016

* sort files under bin in package.json. increment version number and use semver to indicate dev version

* run subg-ww-radio-parameters script on mmtune

* append radio_locale to pump.ini for ww pumps

temporary workaround for
oskarpearson/mmeowlink#55

* fix add radio_locale if it's not there workaround

oops, grep -q returns 0 if found, we want to add radio_locale if it's
not there so && instead of ||

Contributor

jyaw commented Dec 31, 2016

@scottleibrand @danamlewis I used the script below to test what looks like an issue... My resulting profile.json does not have correct ".basalprofile.start" entries. They appear to be inherited from the initial pump profile, with repetition until the "start" value changes in the initial pump profile (e.g. repeated 00:00:00 until 3am, when my old profile changed). My repo appears to be even with the upstream autotune branch, so I don't think that's it. Can you confirm this on your end?

    rm profile.[1-2].json
    cp profile.pump.json profile.json
    for run in `seq 1 2`; do
      cp profile.json profile.$run.json
      for i in `seq 9 10`; do
        ~/src/oref0/bin/oref0-autotune-prep.js ns-treatments.json profile.json ns-entries.2016-12-$i.json > autotune.2016-12-$i.json
        ~/src/oref0/bin/oref0-autotune.js autotune.2016-12-$i.json profile.json profile.pump.json > newprofile.2016-12-$i.json
        cp newprofile.2016-12-$i.json profile.json
      done
    done

Also, I have updates to the shell script ready for logging stdout and reporting original vs. autotune in a table. But I'm thinking I want to work through the above issue before I do a PR for my updates.


Contributor

scottleibrand commented Dec 31, 2016

Yeah, for now those start entries are simply copied from the original profile. They don't do anything AFAICT, so I haven't bothered to change them. I think @sulkaharo had an easy fix for that, but I don't know where that ended up...


Collaborator

sulkaharo commented Jan 1, 2017

diff --git a/lib/autotune/index.js b/lib/autotune/index.js
index 949d68b..48963f2 100644
--- a/lib/autotune/index.js
+++ b/lib/autotune/index.js
@@ -43,6 +43,8 @@ function tuneAllTheThings (inputs) {
         }
         hourlybasalprofile[i].i=i;
         hourlybasalprofile[i].minutes=i*60;
+        var zeroPadHour = ("000"+i).slice(-2);
+        hourlybasalprofile[i].start=zeroPadHour + ":00:00";
         hourlybasalprofile[i].rate=Math.round(hourlybasalprofile[i].rate*1000)/1000
         // pump basal profile
         if (pumpbasalprofile && pumpbasalprofile[0]) {

Note this assumes the even-hour implementation. Our profile currently uses half-hour steps, for example a basal rate change at 5:30 AM to combat morning resistance, where 5:00 seemed too early due to often being low at that time, and 6:00 seemed too late for the basal to kick in before breakfast.
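For reference, the zero-padding idiom in the diff prepends zeros and keeps the last two characters:

```javascript
// The ("000"+i).slice(-2) idiom from the diff above: left-pad an hour
// index to two digits by prepending zeros and keeping the last two chars.
function zeroPadHour(i) {
    return ("000" + i).slice(-2);
}
```

So zeroPadHour(7) + ":00:00" yields "07:00:00", and hour 13 yields "13:00:00" unchanged.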


Collaborator

sulkaharo commented Jan 1, 2017

Related - #301 pulls all of the needed profile data into one profile object. We should do a refactor where all scripts pull data using this file, always going through a method call that returns the needed data rather than accessing the profile data directly. This would allow us to implement support for loading multiple profile objects with validity periods, which is a step toward allowing systems like autotune to use the profile data that was actually in use at the time of the analysis. (And changing the file format if needed - right now all the code in the repo is tightly coupled to data storage file formats.)
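The validity-period idea could be sketched as a lookup that returns whichever profile was active at a given time (hypothetical shape, not an existing oref0 API: each profile carries a validFrom timestamp and the list is sorted ascending):

```javascript
// Hypothetical accessor sketch: return the profile active at `time`,
// assuming profiles are sorted ascending by their validFrom timestamp.
// Returns null if `time` predates all profiles.
function profileAtTime(profiles, time) {
    var active = null;
    for (var i = 0; i < profiles.length; i++) {
        if (profiles[i].validFrom <= time) {
            active = profiles[i];
        }
    }
    return active;
}
```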


Contributor

PieterGit commented Jan 2, 2017

I added #310, which can create a Microsoft Excel file with the 'expanded profile' (ISF and basal profile for each half hour of the day), for each profile json file in the autotune directory.

Please test and leave feedback here, in a new issue, or in the i-to-b channel.


Contributor

scottleibrand commented Jan 3, 2017

I think we're getting close. Just opened #313 for review.


Collaborator

sulkaharo commented Jan 3, 2017

So hey, how does the tuning take profile changes during the period being analyzed into account? From what I can see nothing was done regarding this, which AFAIK can lead to the analysis producing a profile that's on the other end of the safety margin from what it should be. If this goes into the production release without even a simple solution in place, the documentation of the feature should at least say the results are unpredictable if the analysis is run within 24 hours of a pump profile change, leading to potential under- or overdosing of insulin as allowed by the adjustment margin (the same issue as with the current autosens, which is also a big problem).


Contributor

jyaw commented Jan 3, 2017

I agree with @sulkaharo's concern. It seems that we now have the ability to push/pull/sync profile info from NS, though? Perhaps we need to pull all profiles that cover the particular time window from NS? In a case where there's an overlap, we could just do a piecewise autotune for that day and note this to the user somehow? Or not provide autotune results during periods with older (not the current) profiles? There could be a need for a different approach in cases with manual vs. iterative tuning as well...


Contributor

scottleibrand commented Jan 3, 2017

I believe that concern is addressed by the fact that, with autotune enabled, changes to the pump only get reflected in pumpprofile.json, which is only used to set the 20% caps on autotune. The actual IOB calculations etc. will continue to use the autotuned profile.json, which removes the sudden shifts you would see with just autosens. We definitely should add some documentation around this, though, so people know that changes they make to the pump's basal profile won't take effect for looping right away like they did without autotune, won't have any effect at all until midnight, and even after that will only influence basals and ratios that are more than 20% off the new pump settings.


Collaborator

sulkaharo commented Jan 3, 2017

+1 for docs :) I'm definitely not sure what all the intended consequences of this system are.


Contributor

danamlewis commented Jan 3, 2017

This issue is starting to get busy, and has nuggets of what needs to go into the docs to start. Unless someone beats me to it, I'll PR to the OpenAPS docs and we can discuss additions/edits on that PR (will come back and link it here). Although, since this could be used by non-OpenAPS users, we'll need to be clear on how this is documented as a one-off run by anyone with the requisite data, vs. the documentation for how it incorporates into OpenAPS looping if/when enabled.


Contributor

danamlewis commented Jan 6, 2017

(Starting stubbing out WIP docs - please direct PRs there for things that need to be added, further documented, etc. to this page: http://openaps.readthedocs.io/en/latest/docs/walkthrough/phase-4/autotune.html Thanks! )

scottleibrand added a commit that referenced this issue Jan 10, 2017

Iterative autotuning of basals and ratios (#313)
Implements new Autotune feature (#261)

Commit details:

* oref0-autotune-prep.js

* use oref0/lib/autotune-prep

* don't print autosens debug stuff when running in meal mode

* divide basal_glucose_data from isf_glucose_data at basalBgi > -3 * bgi; comments and TODOs

* bucketize data, calculate deltas and deviations, and use those to better allocate data to csf, isf, or basals

* prep for an optimized append mode to an existing autotune/glucose.json

* initial framework for oref0-autotune.js

* adjust basals for basal deviations

* add bgi to output json

* try including rising BGs in ISF calculations

* initial basic ISF autotuning

* use medians, not averages, for ISF calcs

* add mealCarbs and mealAbsorption start/end

* first pass at CSF estimation

* when avgDelta with large negative BGI, don't use that for ISF or basal tuning

* convert sgv records into glucose if needed

* add support for nightscout treatments.json format

* only consider BGs for 6h before a meal to speed up processing

* properly map sgv to glucose

* add support for carbs from NS

* remove unnecessary clock and basalprofile arguments

* update basalprofile

* profile needs isfProfile not isf_profile

* use min_5m_carbimpact in calculating total deviations too

* way more efficient and simpler iterative algorithm for calculating COB

* add mealCarbs to glucose_data

* make sure new CSF isn't NaN

* disable min deviation for CSF calculation

* smooth out basal adjustments by incrementing evenly and reducing proportionally over 3h

* smooth out basal adjustments by using average of current and last 3 hours as iob_inputs.profile.current_basal

* make sure increases and decreases of basal are both doing the same 20%

* minPossibleDeviation, and actually basal_glucose_data.push when avgDelta > 0

* include Math.max(0,bgi) in minPossibleDeviation

* null treatment check

* add pumpprofile as optional argument

* TODO: use pumpprofile to implement 20% limit on tuning

* use pumpprofile to implement 20% limit on tuning

* use pumpprofile to implement 20% limit on ISF and CSF tuning

* only set pumpCSF if setting pumpISF

* logging

* null check

* undefined check

* Commenting to describe index.js

* Deleting unnecessary variable that's not used

* More commenting

* Last bit of commenting for now

* Rename function from diaCarbs to categorizeBGdatums

* Rename total.js to categorize.js

* Update reference to now categorize.js

* Reference categorize instead of total.js

* Rename total.js to categorize.js

* Rename categorize.js to tune.js

* Fixing function naming from diacarbs to tuneAllTheThings

* Delete tune.js

* Simplifying min and maxrate

* Tweaking troubleshooting language

* Adding to-do about dinner carbs not absorbed at midnight

* Defining fullNewCSF

* Function tuneAllTheThings instead of Generate

* Update index.js

* Make pump profile required for autotune (#298)

* Make pump profile required for autotune

* Added oref0-autotune-test.sh script to test autotune. Allows the user to specify date range and number of runs as well as openaps directory and user's Nightscout URL. Note that the pump profile is pulled from the following location: <loop dir>/settings/profile.json. Also note that --end-date and --runs are not required parameters, but the script will default to the day before today as the end date and 5 runs, so you may or may not want to use those. Example Usage: ./oref0-autotune-test.sh --dir=openaps --ns-host=<NS URL> --start-date=2016-12-9 --end-date=2016-12-10 --runs=2

* Added oref0-autotune-test.sh script to test the autotune. Allows the user to specify date range and number of runs as well as openaps directory and user's Nightscout URL. Note that the pump profile is pulled from the following location: <loop dir>/settings/profile.json. Also note that --end-date and --runs are not required parameters, but the script will default to the day before today as the end date and 5 runs, so you may or may not want to use those. Example Usage: ./oref0-autotune-test.sh --dir=openaps --ns-host=<NS URL> --start-date=2016-12-9 --end-date=2016-12-10 --runs=2 (#303)

* Added stdout logging option to oref0-autotune-test.sh. Terminal output is still there as it was before. Logging is off by default, but can be enabled with the --log=true option. Also cleaned up odds and ends in the file :)

* default to 1 run, for yesterday, if not otherwise specified

* If a previous settings/autotune.json exists, use that; otherwise start from settings/profile.json

* write out isf to sens too: used by determine-basal

* make sure suggested.json is printed all on one line

* support optional --autotune autotune.json

* round insulinReq

* add @sulkaharo's method to calculate basal start

* small fix for autotune command line parameters (#308)

correct documentation of parameters and exit if user enters an unknown
command line option

* autotune export to microsoft excel

initial version, requires xlsxwriter

* increment version number for autotune

* export excel improvements for autotune

swap run and date column, do some formatting (font size, etc)

* rename export to excel to .xlsx instead of .xls for consistency

* missed one... changed --xls to --xlsx in the cli example

* swap Date and Run column, add license stuff in script

* Install autotune with oref0-setup (#312)

* Commenting out "type": = "current" (#296)

* restart networking completely instead just cycling wlan0 (#284)

* restart networking completely instead just cycling wlan0

this has proved more stable for me across some wifi networks

* Re-add dhclient release/renew

* Update oref0-online.sh

* Bt device name (#307)

* Update oref0-online.sh

Change the BT devicename from BlueZ 5.37 to hostname of Board

* Update oref0-setup.sh

add hostname as BT device name.

* Exit scripts when variables under or functions fail (#309)

* Exit script when variables unset or functions fail

* first attempt at setting up nightly autotune with oref0-setup and using autotuned profile.json for looping

* increment version number for autotune

* check if settings/autotune.json is valid json

* specify a default for radio_locale

* require openaps 0.1.6 or higher for radio_locale

* radio_locale requires openaps 0.2.0-dev or later

* redirect oref0-ns-autotune stderr to log file

* update script name in usage

* use settings/pumpprofile.json in oref0-ns-autotune

* Updated bin/oref-autotune-test.sh with capability of running a small summary report at the end of the script. The report consists of the tunable parameters of interest, their current pump value and the autotune recommendation. Implemented report separately in oref0-autotune-recommends-report.sh. Example Usage: ~/src/oref0/bin/oref0-autotune-recommends-report.sh <Full path to OpenAPS Directory>.

* Merged changes that were incorporated into the updated oref0-ns-autotune.sh to add terminal logging to autotune.<date/time stamp>.log in the autotune directory as well as a simple table report at the end of this manual autotune to show current pump profile vs autotune recommended profile. Implemented report in oref0-autotune-recommends-report.sh

* rename to oref0-autotune-core.js and oref0-autotune.sh

* Clarify usage

* We're redirecting stderr not stdout

* only cp autotune/profile.json if it's valid

* camelCase autotune and use pumpProfile.autosens_max and min (#319)

* camelCase autotune.js and categorize.js

* use pumpProfile.autosens_max and min instead of 20% hard-coded cap

* revert require('../profile/isf');

* camelCase autotuneM(in|ax)

* profile/isf function is still isfLookup

* change ISFProfile back to isfProfile to match pumpprofile

* change basalProfile back to basalprofile to match pumpprofile.json

* change ISFProfile back to isfProfile to match pumpprofile

* camelCase pumpHistory to match categorize.js

* change basalProfile back to basalprofile to match pumpprofile.json

* camelCase pumpProfile to match autotune/index.js

* camelCase to match autotune/index.js

* autotuneMin/Max and camelCase fixes

* update 20% log statements for basals to reflect autotune min/max

* mmtune a bit more often

* fix start and end date for ns-treatments.json

* leave ISF unchanged if fewer than 5 ISF data points (#322)

* leave ISF unchanged if fewer than 5 ISF data points

* move stuff out of the else block

* output csf as expected by oref0-autotune-recommends-report.sh

* fix ww pump and dexusb with small changes (#323)

* fix blocker bug for ww pumps and for dex usb

* additional upcasing for radio_locale for cli

* Compare lowercase radio_locale to "ww"

* bump version and require oref0@0.3.6 or later

* install jq for autotune

* use pip install rather than cloning (#324)

* pip install git+URL@dev instead of cloning

* sudo pip install

* move openaps dev install out where it belongs

* remove commented code

* bump version and require oref0@0.3.6 or later

* redirect stderr to stdout so we can grep it

* Continue and output a non-autotuned profile if we don't have autotune_data
PieterGit Jan 16, 2017

Contributor

Did my first autotune run on 6 weeks of data. Findings/remarks:

  • autotune takes a long time: it took an UP board (Intel X5-Z8350, 1.92 GHz) almost 2 days for 3 runs
  • I think autotune should get its own settings, i.e. profile.autotune_min and profile.autotune_max, if run daily. Otherwise profiles can change quite fast: a 1.2x adjustment per day could multiply the basal 3.6 times in a week ($1.2^7 \approx 3.6$). Update: I read that the original profile is always used as the basis, so this is untrue. I still believe different features should have different profile settings.
  • maybe also use the amount of positive and negative temp basal insulin, for example as a hard limit: the basal profile increase should not exceed positive temp basal insulin plus negative temp basal insulin. So if there is +4.5 U of positive temp basal on a day and -3.2 U of negative temp basal, the profile should increase by at most +4.5 - 3.2 = +1.3 U on a 24-hour basis. That way, if openaps fails, the amount of basal insulin is not increased by too much.
  • filename handling is currently profile.<run>.<date>.json and newprofile.<run>.<date>.json. Better would be something that sorts easily, like:
profile.YYYYMMDD.autotune-input.json instead of profile.<run>.<date>
profile.YYYYMMDD.autotune-output.json instead of newprofile.<run>.<date>
  • autotune should have a separate CLI option to install the latest autotune-output file, so it can back up/archive the current profile. For example:
profile.YYYYMMDD-HHMM.yymmdd-hhmm.autotune-archive.json (profile used from YYYYMMDD-HHMM to yymmdd-hhmm)
profile.YYYYMMDD-0500-.autotune-current.json
  • Reports in Excel should mimic the format of Carelink reports of pump settings a bit, so the layout is familiar and can be used by health care professionals in future.

  • I'm working on graphs that show the basal profile. I'll try to generate them automatically.

  • found an ISF profile bug

Does anybody object to enabling Excel generation by default? Compared to autotune itself, the Excel generation only takes a few extra CPU cycles.
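The proposed hard limit could be sketched roughly like this (a hypothetical helper, not part of oref0; the function name and the convention that negative temp basal insulin is passed as a negative number are my assumptions):

```javascript
// Sketch of the proposed cap: limit the total daily basal profile
// increase to the net temp basal insulin actually delivered.
// E.g. +4.5 U positive and -3.2 U negative temp basals => at most +1.3 U/day.
function maxDailyBasalIncrease(positiveTempInsulin, negativeTempInsulin) {
    // negativeTempInsulin is expected as a negative number (e.g. -3.2)
    var net = positiveTempInsulin + negativeTempInsulin;
    // never let the cap go below zero (no forced decrease via this rule)
    return Math.max(net, 0);
}

console.log(maxDailyBasalIncrease(4.5, -3.2).toFixed(1)); // prints "1.3"
```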


@PieterGit


PieterGit Jan 17, 2017

Contributor

I also used Excel to calculate a basal profile with profile.autosens_min=1.0 and profile.autosens_max=1.0. That way I can see autotune's suggestions based on the same amount of daily basal insulin. Would setting those in autotune and recalculating give a different result than scaling the autotune results to compensate for the increase in suggested basal insulin?


@scottleibrand


scottleibrand Jan 18, 2017

Contributor

I think for retrospective analysis we probably want to discourage people from running multiple runs on the same input data, and instead have them just use more input data if they want to see if it results in bigger adjustments.

To speed things up, we should have oref0-autotune.sh download the treatments for each day separately, so that oref0-autotune-prep.js doesn't have to scan through multiple weeks of treatments for every 5m data point. When I did this manually in a bash loop, it sped things up considerably.

I'm not sure what you mean by the hard limit on basal insulin increase. Have you seen an example of where such a limit would be useful because it's doing the wrong thing? If not, I think the autosens_max and autosens_min limits should be sufficient.

I don't use the Excel stuff myself yet, and don't really have any strong opinions about file naming, so you can do whatever you want there. :-)

Can you expand on the ISF profile bug? Have you PR'd a fix for that yet?
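The per-day split described above could be sketched as a small Node helper (hypothetical, not the actual oref0-autotune.sh change; the created_at field matches Nightscout treatments, but the output filenames are made up):

```javascript
// Hypothetical sketch: partition a multi-week ns-treatments.json into
// per-day arrays, so each oref0-autotune-prep run only scans one day of
// treatments instead of the whole history for every 5m data point.
function splitTreatmentsByDay(treatments) {
    var byDay = {};
    treatments.forEach(function (t) {
        // Nightscout treatments carry an ISO created_at timestamp
        var day = (t.created_at || "").slice(0, 10); // "YYYY-MM-DD"
        if (!byDay[day]) byDay[day] = [];
        byDay[day].push(t);
    });
    return byDay;
}

// Usage sketch: write one file per day, e.g. ns-treatments.2017-01-16.json
// var fs = require('fs');
// var byDay = splitTreatmentsByDay(JSON.parse(fs.readFileSync('ns-treatments.json')));
// Object.keys(byDay).forEach(function (day) {
//     fs.writeFileSync('ns-treatments.' + day + '.json', JSON.stringify(byDay[day]));
// });
```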


@Cagier


Cagier Jan 25, 2017

I'm really interested in running against retrospective data but have only used Nightscout for uploading CGM readings thus far. However, as we have an Animas Vibe pump, all the historical basal, bolus and carb info is held in Diasend. Is there any relatively easy way of using the exported (into .xls) data from Diasend and plugging into Nightscout / mlab? Or can the system take information from a combination of sources?


@scottleibrand


scottleibrand Jan 26, 2017

Contributor

If you can construct a treatments.json file in the format expected by the autotune code, you can run oref0-autotune-prep against that instead of against the ns-treatments.json downloaded by oref0-autotune.


@Cagier


Cagier Jan 26, 2017

Thanks for the speedy reply, Scott. OK, I reckon I'll set up some Excel formulas/macros to do this. If I come up with something reusable, I'll post it back here in case anyone else is interested. I'll have a look at the documentation and see if I can work out the appropriate format. If there are any existing samples I can refer to, that would be useful, but it would probably help if I read the documentation first and then asked questions afterwards! ;) I'll come back if I'm stuck.
Cheers


@scottleibrand


scottleibrand Jan 26, 2017

Contributor

The expected format is json, so you'll probably end up writing a script to parse the data from .csv or something and turn it into the proper json format. Not sure if Excel can do json natively, but I've never heard of anyone doing it that way. :-)
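Such a conversion script might look like this (hypothetical: the Diasend column names and the exact treatment fields autotune expects are my assumptions — compare against a real ns-treatments.json before relying on it):

```javascript
// Hypothetical sketch: turn one exported Diasend row (already parsed out
// of the CSV) into a Nightscout-style treatment object for autotune.
function rowToTreatment(row) {
    // row: { timestamp: "2017-01-16T12:30:00Z", carbs: "45", insulin: "3.5" }
    return {
        created_at: new Date(row.timestamp).toISOString(),
        eventType: row.carbs ? "Meal Bolus" : "Correction Bolus",
        carbs: row.carbs ? parseFloat(row.carbs) : 0,
        insulin: row.insulin ? parseFloat(row.insulin) : 0
    };
}
```

Mapping rows one at a time like this keeps the CSV-parsing step (whatever library or macro produces the rows) separate from the treatments.json format itself.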


@scottleibrand


scottleibrand Feb 7, 2017

Contributor

Anything else we want to track under this issue? If not, I think it's ready to be closed.


@danielharrelson


danielharrelson Aug 15, 2017

oref0-autotune-core autotune.1.2017-08-01.json profile.json profile.pump.json > newprofile.1.2017-08-01.json
/Users/daniel/Downloads/oref0/lib/autotune/index.js:49
hourlyBasalProfile[i].i=i;
^

TypeError: Cannot set property 'i' of undefined
at tuneAllTheThings (/Users/daniel/Downloads/oref0/lib/autotune/index.js:49:32)
at Object.<anonymous> (/Users/daniel/Downloads/oref0/bin/oref0-autotune-core.js:59:27)
at Module._compile (module.js:573:30)
at Object.Module._extensions..js (module.js:584:10)
at Module.load (module.js:507:32)
at tryModuleLoad (module.js:470:12)
at Function.Module._load (module.js:462:3)
at Function.Module.runMain (module.js:609:10)
at startup (bootstrap_node.js:158:16)
at bootstrap_node.js:578:3
false
Could not run oref0-autotune-core autotune.1.2017-08-01.json profile.json profile.pump.json

I'm receiving this error on both Debian and Mac after it spews through my data.
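One plausible cause of a TypeError like the above is a basal profile with fewer than 24 hourly entries, so that hourlyBasalProfile[i] is undefined for some hours. A defensive sketch (hypothetical, not the actual oref0 fix; the minutes/rate field names follow the usual pump basal schedule shape):

```javascript
// Hypothetical guard: expand a sparse basal schedule into 24 hourly
// entries by carrying the last known rate forward, so indexing by hour
// never yields undefined.
function fillHourlyBasals(basalSchedule) {
    // basalSchedule: [{ minutes: 0, rate: 0.8 }, { minutes: 360, rate: 1.0 }, ...]
    var hourly = [];
    for (var hour = 0; hour < 24; hour++) {
        var rate = basalSchedule[0].rate;
        basalSchedule.forEach(function (entry) {
            // the latest schedule entry at or before this hour wins
            if (entry.minutes <= hour * 60) rate = entry.rate;
        });
        hourly.push({ i: hour, minutes: hour * 60, rate: rate });
    }
    return hourly;
}
```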

