IndexError: list index (nnn) out of range (3155b85a fw) #33

Closed
bdruth opened this issue Oct 17, 2021 · 19 comments

bdruth commented Oct 17, 2021

Just started seeing this this morning ... not sure what's going on yet. It might have started with a fw update, since it began around 3:51am local time.

starlink-grpc-tools     | current counter:       23098
starlink-grpc-tools     | All samples:           900
starlink-grpc-tools     | Valid samples:         900
starlink-grpc-tools     | Traceback (most recent call last):
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 330, in <module>
starlink-grpc-tools     |     main()
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 311, in main
starlink-grpc-tools     |     rc = loop_body(opts, gstate)
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 254, in loop_body
starlink-grpc-tools     |     rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence, add_bulk=cb_add_bulk)
starlink-grpc-tools     |   File "/app/dish_common.py", line 200, in get_data
starlink-grpc-tools     |     rc = get_history_stats(opts, gstate, add_item, add_sequence)
starlink-grpc-tools     |   File "/app/dish_common.py", line 296, in get_history_stats
starlink-grpc-tools     |     groups = starlink_grpc.history_stats(parse_samples,
starlink-grpc-tools     |   File "/app/starlink_grpc.py", line 968, in history_stats
starlink-grpc-tools     |     if not history.scheduled[i]:
starlink-grpc-tools     | IndexError: list index (598) out of range

Using the latest docker image.

Looking back at the data captured before the update, I see it was on a different fw (ee5aa15c). What kind of diagnostics should I provide to help?

sparky8512 (Owner) commented

Ah, I fixed the crash here already (see issue #32), but I forgot the Docker image would need to be updated!

@neurocis, could you please pull my latest changes and update that?

And yes, this was due to that firmware update. Even with the crash fixed, some data is no longer present in the script output due to the grpc service change.
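
For anyone hitting this before the image updates: the crash came from indexing history.scheduled after the firmware stopped populating that array (the if not history.scheduled[i]: line in the traceback above). The fix amounts to guarding that lookup, roughly along these lines (a sketch only; the actual change in starlink_grpc.py may differ):

    # Sketch of the kind of guard that avoids the IndexError when newer firmware
    # no longer fills in the per-sample scheduled array; not necessarily the
    # exact code that landed in starlink_grpc.py.
    def sample_was_scheduled(history, i):
        if i < len(history.scheduled):
            return bool(history.scheduled[i])
        return True  # treat missing data as "scheduled" when the array is absent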

bdruth (Author) commented Oct 17, 2021

Awesome, thx!

StephenShamakian commented

I just noticed my Grafana dashboard stopped populating data a few days ago. @sparky8512 I see you may have already fixed this. But I too use @neurocis's docker container. It looks like that was last updated 5 months ago?

StephenShamakian commented

Also, this deprecation of fields in the Starlink gRPC API worries me, as I believe it will render large parts of my Grafana dashboard (https://github.com/StephenShamakian/Starlink-Grafana-Dashboard), which uses these endpoints, useless now. :(

bdruth (Author) commented Oct 19, 2021

@StephenShamakian - I've switched my docker-compose to build from this repo instead of pulling the Docker Hub image for the moment.

    # image: neurocis/starlink-grpc-tools
    build:
      context: https://github.com/sparky8512/starlink-grpc-tools.git#main

Then just do a docker-compose build followed by docker-compose up again.

StephenShamakian commented

@bdruth Sadly I am not using docker compose. Synology NAS doesn't support it. I'm just using simple docker container images downloaded from the hub in the Synology NAS Web UI. I suppose I could switch to compose via the SSH command line. But thanks for the tip! I'm still hoping that the docker image is updated soon. :)

neurocis (Collaborator) commented Oct 19, 2021

Just checked in, I will update the container build within 24 hrs. Cheers!

EDIT: Bumped it; the hub should rebuild it soon. @sparky8512, I added you to my fork as a collaborator.

neurocis (Collaborator) commented

Hmm, scratch that re the Docker Hub image: it looks like there is no more autobuild at Docker Hub on the free tier, not even grandfathered, so this may take longer than I thought, as I will need to find an alternative to publish to. It may be better if @sparky8512 enabled a GitHub Action (I think that's what it's called) to build on the main repo.

sparky8512 (Owner) commented

I'll have a look at that. I've never messed around with actions before.

sparky8512 (Owner) commented

@neurocis: Before I fall any further down the rabbit hole here, I wanted to confirm what you had in mind wrt the GitHub Actions.

I found the "build a Docker image" action, but that doesn't seem to be useful on its own without also running a publish step, as it would just build an image and then discard it. I suppose that verifies that the image is buildable, which is useful for CI, but I assume that's not your main interest here.

I also found an action config for building and publishing Docker images to Docker Hub and/or the GitHub Packages repository. This got me looking into Docker Hub repository pricing, and I find the description of the free tier to be confusing, at best. At this point, I'm inclined to switch to GitHub Packages for hosting the docker image for this project, as they don't appear to put any limit on public repositories.

Thoughts?

sparky8512 (Owner) commented

I went ahead and set up a manual action to publish to GitHub Packages, for testing if nothing else.

The image can currently be pulled as: ghcr.io/sparky8512/starlink-grpc-tools

neurocis (Collaborator) commented Oct 20, 2021

I went ahead and mucked around with the actions myself (new to me as well) and managed to get a Docker Hub build-push working after many hit-and-miss commits trying things out. Anyhow, my Docker build/repo is now current. You can still use the Docker Hub repo; there's just no more automatic build service on the free tier, only manual (or action-driven) build-push.

StephenShamakian commented Oct 20, 2021

@neurocis & @sparky8512 Thanks guys! The container image is up and running again with the latest code!

So now begins the not-so-fun part of updating my Grafana dashboard to remove all the fields SpaceX removed (oh, the joys of undocumented APIs) :(

So in issue #32 it mentions the following fields were removed: snr, state, scheduled, and obstructed. But for some reason I am still getting a state returned as "Connected"? Is that just a static default value now?

Also, in the Starlink Debug data, under the section called "obstructionStats", there are a few interesting fields like "fractionalObstructed" and "last24hObstructedS". Are those available to this import script? I'm trying to figure out how to rebuild at least some indication of obstructions in my dashboard, even if it won't work like it did before.

Kind of bummed they removed SNR. It was a good metric to see how much rain fade was affecting it... :(

sparky8512 (Owner) commented

> So in issue #32 it mentions the following fields were removed: snr, state, scheduled, and obstructed. But for some reason I am still getting a state returned as "Connected"? Is that just a static default value now?

I re-added state yesterday in change 3dddd95 because I found a substitute source for that information. It's not exactly the same as it was before, as there are a few additional possible state values. See the doc string at the top of starlink_grpc.py for the full list.

> Also, in the Starlink Debug data, under the section called "obstructionStats", there are a few interesting fields like "fractionalObstructed" and "last24hObstructedS". Are those available to this import script? I'm trying to figure out how to rebuild at least some indication of obstructions in my dashboard, even if it won't work like it did before.

Yes, those should be in the status mode group as fraction_obstructed and seconds_obstructed. However, now that I'm looking at it, I see that the last_24h_obstructed_s field of the status response message (from which seconds_obstructed is taken) is marked deprecated, too, and I don't see it populated in the message. I'm not surprised, as this field was always a bit dubious. It's possible I just don't have any obstructions reported recently, but I doubt it.

There is also currently_obstructed in the status mode group, but I'm not sure the field in the grpc message that it uses is being populated, either. It's not marked deprecated, but it used to be just the current value of the corresponding history field, which is no longer present, and I only ever see it reported as its default value (False). I don't have obstructions very often these days, though, so it may just be that I never have them when I pull status. There's actually a different way this could be computed, though, so I will probably cook up a polling test to verify.

Finally, there are obstruction_duration and obstruction_interval in the status mode group, which were added somewhat recently to convey the "prolonged outage" estimates that appear in the mobile app. However, the scripts only output them if they are marked "valid" in the grpc message (otherwise they are emitted as an empty field). I've never seen them marked valid on my dish, so I don't know if I made that logic too strict, or if I just don't have enough obstructions these days for the dish to judge them "prolonged".

As I mentioned in issue #32, the history data does include data in a different form that could be used to re-add some of the ping_drop items, specifically total_obstructed_ping_drop and total_unscheduled_ping_drop, but I will probably only do that if there is sufficient interest.

Kind of a mess, I know, but as you said, the joys of undocumented APIs...

For all items, see the documentation at the top of starlink_grpc.py for the description of what each means (to the best of my knowledge).
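
As a quick way to check which of these fields your dish actually reports, something like the following works (a rough sketch; it assumes the status_data() helper described in that doc string, so adjust to match what's actually there):

    # Rough sketch: dump the status mode group fields discussed above.
    # Assumes starlink_grpc.status_data() returns the status groups as dicts;
    # check the doc string at the top of starlink_grpc.py for the real API.
    import starlink_grpc

    merged = {k: v for group in starlink_grpc.status_data() for k, v in group.items()}
    for name in ("state", "currently_obstructed", "fraction_obstructed",
                 "seconds_obstructed", "obstruction_duration", "obstruction_interval"):
        print(name, "=", merged.get(name))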

StephenShamakian commented

Thanks @sparky8512! I missed that commit update on the state field. Thanks!

Do we know if "fraction_obstructed" is a 0-100 value? Or what its range is? As of right now, for me it's reporting something like "0.00009809...". I'm trying to figure out if this would make a good line graph of obstructions over time. I don't get many obstructions myself anymore either, so it's hard to check my work short of intentionally obstructing it :)

But with the additional info you provided, I should now have a good understanding of what I need to tweak to remove the values that aren't reported any longer.

Thanks again!

StephenShamakian commented

@sparky8512 I just RTFM... haha! I guess I'm still a little confused on what that value range is though...

    fraction_obstructed : The fraction of total area (or possibly fraction
    of time?) that the user terminal has determined to be obstructed between
    it and the satellites with which it communicates.

sparky8512 (Owner) commented

> I guess I'm still a little confused on what that value range is though...

I believe the range is 0.0 to 1.0, so your 0.00009809 would be about 0.01%, which is really good! Mine's usually a bit higher than that at around 0.1% after it learns where the obstructions are.
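
(For a dashboard, that just means multiplying by 100 to display it as a percentage, e.g.:)

    # fraction_obstructed is on a 0.0-1.0 scale, so multiply by 100 for percent
    fraction_obstructed = 0.00009809
    print(f"{fraction_obstructed * 100:.4f}%")  # -> "0.0098%", i.e. roughly 0.01%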

sparky8512 (Owner) commented

Regarding the currently_obstructed item in the status mode group: It does appear to be reported in the grpc service when there are obstructions.

I had expected it to either not work at all or correlate very closely with state being set to "OBSTRUCTED", but the actual behavior is a bit more complex than that. There is heavy overlap, especially for longer runs of outages blamed on obstruction, but I also see shorter runs where only one or the other is set, not both. I'm not really sure which is more indicative of actual obstructions, so I guess it makes sense to keep them both reporting as-is.
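
For reference, the polling test was essentially just a loop comparing the two fields, something along these lines (a sketch, again assuming the status_data() helper; the real test may have looked different):

    # Sketch: poll the status groups once a second and log whenever either
    # currently_obstructed or the "OBSTRUCTED" state indicates an obstruction.
    import time
    import starlink_grpc

    while True:
        merged = {k: v for group in starlink_grpc.status_data() for k, v in group.items()}
        state = merged.get("state")
        obstructed = merged.get("currently_obstructed")
        if obstructed or state == "OBSTRUCTED":
            print(time.strftime("%H:%M:%S"), "state =", state,
                  "currently_obstructed =", obstructed)
        time.sleep(1)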

sparky8512 (Owner) commented

Anyway, now that the docker image that the README file references has been updated, I'm going to close this issue out.

If we want to revise how the docker image gets updated or where it gets hosted in the future, we can discuss that elsewhere.
