
Container keeps restarting (loop) #1

Closed
xenon149 opened this issue May 5, 2019 · 27 comments
Assignees
Labels
bug Something isn't working

Comments

@xenon149

xenon149 commented May 5, 2019

Hey!

I think I have found an issue with this code.
The Docker container keeps restarting after I try spinning it up using the run commands.

/var/log$ sudo docker ps
CONTAINER ID        IMAGE                       COMMAND             CREATED             STATUS                            PORTS               NAMES
b68a22cf7697        mikenye/piaware-to-influx   "/init"             25 minutes ago      Restarting (139) 47 seconds ago                       piaware2influx

Docker version: 18.06.1ce3-0 5:18.09.03-0
Tried recreating container.
No log output.

@mikenye
Owner

mikenye commented May 6, 2019

Hi, thanks for reporting a problem. Can I please get you to send the output of uname -a from your host?

@mikenye
Owner

mikenye commented May 6, 2019

(I just want to know the architecture you're running on)

@mikenye
Owner

mikenye commented May 6, 2019

Also can you please post your docker run command (or equivalent).

@mikenye mikenye added the question Further information is requested label May 6, 2019
@mikenye
Owner

mikenye commented May 8, 2019

Paging @xenon149... :)

@mikenye
Owner

mikenye commented May 16, 2019

Hi, it's been a week since you opened this issue with no response back. I'm going to close this issue as "no response"; however, if you're still having the issue, please let me know and I'd be more than happy to re-open it and help you troubleshoot. Cheers!

@xenon149
Author

xenon149 commented Aug 2, 2019

Hey, apologies for not replying!

Apparently, I forgot to change my email address on my GitHub account... Don't really use this account much.

Here is the system info that you asked for:

uname -a
Linux node-0 4.19.57+ #1244 Thu Jul 4 18:42:50 BST 2019 armv6l GNU/Linux

System is a raspberry pi zero.


CONTAINER ID        IMAGE                       COMMAND             CREATED             STATUS                            PORTS                                                                                           NAMES
06f2b47fe96b        mikenye/piaware             "/init"             43 seconds ago      Up 29 seconds                     0.0.0.0:8080->8080/tcp, 30001-30002/tcp, 30004-30005/tcp, 0.0.0.0:30003->30003/tcp, 30104/tcp   piaware

1677f2fbbb79        mikenye/piaware-to-influx   "/init"             4 hours ago         Restarting (139) 13 seconds ago                                                                                                   piaware2influx

The run command I'm using is as follows:

docker run \
  -d \
  --name=piaware2influx \
  --restart=always \
  -e INFLUXDB_URL="https://influxdb-node-02.xenon149.com:8086" \
  -e DUMP1090_HOST="localhost" \
  mikenye/piaware-to-influx

and to show that the influxdb service is responding on this address/port to requests:

~ # curl https://influxdb-node-02.xenon149.com:8086
404 page not found

Thank you so much for your help!

@mikenye mikenye reopened this Aug 6, 2019
@mikenye
Owner

mikenye commented Aug 6, 2019

Hi @xenon149 no problems at all.

I think the issue might be with the DUMP1090_HOST variable. Setting it to localhost means the container tries to connect to itself (not the actual underlying host), and it fails because dump1090 isn't running in the piaware2influx container. What you should likely do is one of the following:

  • Try setting the DUMP1090_HOST variable to the LAN IP address (not loopback) of your docker host; or

  • "link" the piaware and piaware2influx containers (which has been deprecated by docker but still works, see "https://docs.docker.com/network/links/") so that they can communicate. Then, change the DUMP1090_HOST variable to piaware (the name of the container) so that the piaware2influx container can talk to the piaware container; or

  • Create a docker network for both the "piaware" and "piaware2influx" containers, so that they can communicate. Then, change the DUMP1090_HOST variable to piaware (the name of the container) so that the piaware2influx container can talk to the piaware container. See "https://docs.docker.com/network/network-tutorial-standalone/".
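As a rough sketch of the third option, reusing the run command from earlier in this thread (the network name adsb is arbitrary, and I'm assuming your piaware container is already running):

```shell
# Create a user-defined bridge network; container names on it resolve
# via Docker's embedded DNS.
docker network create adsb

# Attach the existing piaware container to the new network.
docker network connect adsb piaware

# Run piaware2influx on the same network, pointing DUMP1090_HOST at the
# piaware container by name rather than at localhost.
docker run \
  -d \
  --name=piaware2influx \
  --restart=always \
  --network=adsb \
  -e INFLUXDB_URL="https://influxdb-node-02.xenon149.com:8086" \
  -e DUMP1090_HOST="piaware" \
  mikenye/piaware-to-influx
```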

Did you want to give that a try and let me know how you get on?

If you get stuck with any of this, let me know and I'd be happy to help further.

-Mike

@natedzl

natedzl commented Sep 6, 2019

@mikenye I'm experiencing the same "container restarting" loop issue. I'm running my docker instance on a Synology NAS:

~$ uname -a Linux DiskStation 3.10.105 #24922 SMP Wed Jul 3 16:37:24 CST 2019 x86_64 GNU/Linux synology_avoton_1815+

I can confirm network connectivity between the NAS and my RPi3b+ running piaware. The piaware db gets created in Influx but nothing further happens beyond that. See the logs below; it appears the connection is good and data is being received, but processing of that data results in an error.

2019-09-06 11:36:16 | stdout | ValueError: unconverted data remains: 9776
2019-09-06 11:36:16 | stdout | data_string[found.end():])
2019-09-06 11:36:16 | stdout | File "/usr/lib/python3.5/_strptime.py", line 346, in _strptime
2019-09-06 11:36:16 | stdout | tt, fraction = _strptime(data_string, format)
2019-09-06 11:36:16 | stdout | File "/usr/lib/python3.5/_strptime.py", line 510, in _strptime_datetime
2019-09-06 11:36:16 | stdout | msgdt = datetime.datetime.strptime(msgdtstring, '%Y/%m/%dT%H:%M:%S.%f')
2019-09-06 11:36:16 | stdout | File "/piaware2influx.py", line 133, in process_message
2019-09-06 11:36:16 | stdout | self.process_message(new_message)
2019-09-06 11:36:16 | stdout | File "/piaware2influx.py", line 94, in process_buffer
2019-09-06 11:36:16 | stdout | self.process_buffer()
2019-09-06 11:36:16 | stdout | File "/piaware2influx.py", line 65, in add_data_to_buffer
2019-09-06 11:36:16 | stdout | D.add_data_to_buffer(data.decode("utf-8"))
2019-09-06 11:36:16 | stdout | File "/piaware2influx.py", line 359, in <module>
2019-09-06 11:36:16 | stdout | Traceback (most recent call last):
2019-09-06 11:36:15 | stdout | CONNECT: Connected OK, receiving data
2019-09-06 11:36:15 | stdout | Piaware2Influx: CONNECT: Connecting to 10.1.15.200:30003 to receive dump1090 TCP BaseStation output data {0 msgs rx'd, 0 points tx'd}
2019-09-06 11:36:15 | stdout | Namespace(dump1090_port='30003', dump1090_server='10.1.15.200', telegraf_url='http://127.0.0.1:8186/write')
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Started HTTP listener service on :8186
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Agent Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Tags enabled:
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Loaded outputs: influxdb
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Loaded processors:
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Loaded aggregators:
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Loaded inputs: inputs.http_listener
2019-09-06 11:36:14 | stdout | 2019-09-06T11:36:14Z I! Starting Telegraf v1.7.0
2019-09-06 11:36:14 | stdout | DUMP1090_HOST is set to '10.1.15.200'
2019-09-06 11:36:14 | stdout | INFLUXDB_URL is set to 'http://10.1.0.4:8086'

Thoughts?

@mikenye mikenye self-assigned this Sep 10, 2019
@mikenye
Owner

mikenye commented Sep 10, 2019

Hi @natedzl,

Sorry you're having problems. I cannot replicate this problem in my environment so I'm going to need some additional information from you.

To generate some troubleshooting commands for you to run, I first need the command line (or a screenshot of the settings) that you used to create your docker container.

Are you able to provide this information please?

Alternatively if you can get onto a shell within the container (eg: docker exec -it piaware2influx bash) between restarts, could you please run the following commands and send me the output:

curl -v $INFLUXDB_URL/ping - This command is checking to make sure InfluxDB is reachable by the container.

curl -v -m 5 $DUMP1090_HOST:30003 - This command will dump the raw data from dump1090 for 5 seconds.

On my working container they look like this:

root@f7b851ed035a:/# curl -v $INFLUXDB_URL/ping
*   Trying 172.23.0.9...
* TCP_NODELAY set
* Connected to influxdb (172.23.0.9) port 8086 (#0)
> GET /ping HTTP/1.1
> Host: influxdb:8086
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 204 No Content
< Content-Type: application/json
< Request-Id: 4d3f6333-d3a8-11e9-8158-0242ac110003
< X-Influxdb-Build: OSS
< X-Influxdb-Version: 1.7.8
< X-Request-Id: 4d3f6333-d3a8-11e9-8158-0242ac110003
< Date: Tue, 10 Sep 2019 08:52:21 GMT
< 
* Curl_http_done: called premature == 0
* Connection #0 to host influxdb left intact

root@f7b851ed035a:/# curl -v -m 5 $DUMP1090_HOST:30003
* Rebuilt URL to: piaware:30003/
*   Trying 172.29.0.22...
* TCP_NODELAY set
* Connected to piaware (172.29.0.22) port 30003 (#0)
> GET / HTTP/1.1
> Host: piaware:30003
> User-Agent: curl/7.52.1
> Accept: */*
> 
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:02.758,2019/09/10,16:53:02.811,,,120,195,,,-64,,,,,0
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:03.064,2019/09/10,16:53:03.084,,1400,,,-31.94989,115.96234,,,0,,0,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:03.210,2019/09/10,16:53:03.249,,,121,195,,,-64,,,,,0
MSG,7,1,1,7C6DD4,1,2019/09/10,16:53:03.217,2019/09/10,16:53:03.251,,4050,,,,,,,,,,
MSG,7,1,1,7C6DD4,1,2019/09/10,16:53:03.243,2019/09/10,16:53:03.254,,4050,,,,,,,,,,
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:03.298,2019/09/10,16:53:03.304,,1400,,,,,,,,,,
MSG,7,1,1,7C6DD4,1,2019/09/10,16:53:03.301,2019/09/10,16:53:03.304,,4050,,,,,,,,,,
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:03.438,2019/09/10,16:53:03.469,,1400,,,,,,,,,,
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:03.512,2019/09/10,16:53:03.526,,1400,,,-31.95012,115.96229,,,0,,0,0
MSG,6,1,1,7C0DAA,1,2019/09/10,16:53:03.610,2019/09/10,16:53:03.630,CZG     ,,,,,,,3661,0,0,0,
MSG,8,1,1,7C0DAA,1,2019/09/10,16:53:03.672,2019/09/10,16:53:03.689,,,,,,,,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:03.703,2019/09/10,16:53:03.740,,,122,195,,,-128,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:04.129,2019/09/10,16:53:04.177,,,123,196,,,-128,,,,,0
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:04.431,2019/09/10,16:53:04.453,,1400,,,-31.95067,115.96212,,,0,,0,0
MSG,8,1,1,7C6DD4,1,2019/09/10,16:53:04.454,2019/09/10,16:53:04.504,,,,,,,,,,,,0
MSG,8,1,1,7C0DAA,1,2019/09/10,16:53:04.624,2019/09/10,16:53:04.669,,,,,,,,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:04.652,2019/09/10,16:53:04.672,,,123,195,,,-192,,,,,0
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:04.877,2019/09/10,16:53:04.891,,1400,,,-31.95090,115.96207,,,0,,0,0
MSG,8,1,1,7C6B37,1,2019/09/10,16:53:05.040,2019/09/10,16:53:05.052,,,,,,,,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:05.184,2019/09/10,16:53:05.216,,,124,195,,,-256,,,,,0
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:05.239,2019/09/10,16:53:05.271,,1400,,,,,,,,,,
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:05.309,2019/09/10,16:53:05.327,,1400,,,-31.95108,115.96196,,,0,,0,0
MSG,8,1,1,7C6DD4,1,2019/09/10,16:53:05.395,2019/09/10,16:53:05.434,,,,,,,,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:05.632,2019/09/10,16:53:05.654,,,125,195,,,-384,,,,,0
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:05.737,2019/09/10,16:53:05.762,,1400,,,-31.95136,115.96187,,,0,,0,0
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:06.268,2019/09/10,16:53:06.308,,1400,,,,,,,,,,
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:06.569,2019/09/10,16:53:06.584,,1400,,,,,,,,,,
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:06.636,2019/09/10,16:53:06.689,,,125,195,,,-256,,,,,0
MSG,8,1,1,7C0DAA,1,2019/09/10,16:53:06.733,2019/09/10,16:53:06.748,,,,,,,,,,,,0
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:06.778,2019/09/10,16:53:06.801,,1400,,,-31.95197,115.96170,,,0,,0,0
MSG,8,1,1,7C1B33,1,2019/09/10,16:53:07.084,2019/09/10,16:53:07.127,,,,,,,,,,,,0
MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:07.196,2019/09/10,16:53:07.236,,,125,195,,,-256,,,,,0
MSG,7,1,1,7C0DAA,1,2019/09/10,16:53:07.248,2019/09/10,16:53:07.290,,1400,,,,,,,,,,
MSG,3,1,1,7C0DAA,1,2019/09/10,16:53:07.253,2019/09/10,16:53:07.291,,1400,,,-31.95222,115.96163,,,0,,0,0
* Operation timed out after 5000 milliseconds with 3165 bytes received
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (28) Operation timed out after 5000 milliseconds with 3165 bytes received

Thanks.

-Mike

@mikenye
Owner

mikenye commented Sep 10, 2019

One other thing @natedzl, when you need to quote multiple lines, do it by placing three backticks on a blank line, then paste your multi-line output, then place another three backticks on a blank line. Eg:

```
multiline
output
here
```

... will give you:

multiline
output
here

Thanks.

-Mike

@natedzl

natedzl commented Sep 10, 2019

Hi Mike, I've rebuilt everything on an Ubuntu VM to rule out anything with the Synology setup. The container still actively restarts. There are a lot of storms in my area today so flight traffic is quite low. I've noticed the container will run when there's no data stream coming in from dump1090; however, once the data begins, the container restarts. Not sure if that is helpful, but I believe it supports my earlier theory about the data-processing part.

I'm currently running PiAware v3.6.2.

Docker Command:

sudo docker run \
  -d \
  --name=piaware2influx \
  --restart=always \
  -e INFLUXDB_URL="http://10.1.0.4:8086" \
  -e DUMP1090_HOST="10.1.15.200" \
  mikenye/piaware-to-influx

InfluxDB Ping:

root@5ac95ceb50b8:/# curl -v $INFLUXDB_URL/ping
*   Trying 10.1.0.4...
* TCP_NODELAY set
* Connected to 10.1.0.4 (10.1.0.4) port 8086 (#0)
> GET /ping HTTP/1.1
> Host: 10.1.0.4:8086
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 204 No Content
< Content-Type: application/json
< Request-Id: fefcbfb6-d3c7-11e9-9553-0242ac110002
< X-Influxdb-Build: OSS
< X-Influxdb-Version: 1.7.7
< X-Request-Id: fefcbfb6-d3c7-11e9-9553-0242ac110002
< Date: Tue, 10 Sep 2019 12:39:14 GMT
<
* Curl_http_done: called premature == 0
* Connection #0 to host 10.1.0.4 left intact

Dump1090 Messages:

root@5ac95ceb50b8:/# curl -v -m 5 $DUMP1090_HOST:30003
* Rebuilt URL to: 10.1.15.200:30003/
*   Trying 10.1.15.200...
* TCP_NODELAY set
* Connected to 10.1.15.200 (10.1.15.200) port 30003 (#0)
> GET / HTTP/1.1
> Host: 10.1.15.200:30003
> User-Agent: curl/7.52.1
> Accept: */*
>
MSG,8,1,1,A42E13,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.859,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.911,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.914,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.964,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.966,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:22.969,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:23.018,,,,,,,,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:22.1568119042,2019/09/10,07:37:23.022,,,,,,,,,,,,0
MSG,4,1,1,A42E13,1,2019/09/10,07:37:23.1568119043,2019/09/10,07:37:23.023,,,352,295,,,1088,,,,,0
MSG,8,1,1,A0B8A9,1,2019/09/10,07:37:23.1568119043,2019/09/10,07:37:23.024,,,,,,,,,,,,0

Thanks,
Nate

@mikenye mikenye added bug Something isn't working and removed question Further information is requested labels Sep 11, 2019
@mikenye
Owner

mikenye commented Sep 11, 2019

Hi Nate,

Thanks for getting back to me with that information.

I agree there is definitely a bug and I'm actively working on fixing it.

Thanks for your patience.

Kind regards,
Mike Nye

@mikenye
Owner

mikenye commented Sep 11, 2019

Hi Nate,

The issue appears to be with a difference in how your dump1090 is formatting the data.

Here's the output from my dump1090:

MSG,4,1,1,7C0DAA,1,2019/09/10,16:53:03.210,2019/09/10,16:53:03.249,,,121,195,,,-64,,,,,0
                                       ^^^

And here's the output from your dump1090:

MSG,4,1,1,A42E13,1,2019/09/10,07:37:23.1568119043,2019/09/10,07:37:23.023,,,352,295,,,1088,,,,,0
                                       ^^^^^^^^^^

Note the timestamps I've highlighted; your dump1090 emits significantly higher time resolution than mine: 10 fractional digits as opposed to 3. I have no idea why this is the case. I'm running piaware on an x86 platform, so it might be due to architecture differences if you're running on an RPi (armv7l) platform.

Regardless, I'll implement a fix for this.

Thanks for your help on this.

-Mike

mikenye added a commit that referenced this issue Sep 11, 2019
On some platforms, dump1090 is sending a timestamp with more resolution than python can handle. In issue #1, the timestamps have the format: `07:37:23.1568119043`. Python datetime's `%f` directive supports from one to six digits, and was causing a `ValueError` for the extra digits. Adding a restriction `[0:15]` for the time string simply drops the additional digits from the end of the string, preventing an error for these situations.
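In sketch form, the guard that commit describes (the function name and the date/time split are illustrative, reconstructed from the traceback and the commit message; not the exact source):

```python
from datetime import datetime

def parse_basestation_datetime(date_str: str, time_str: str) -> datetime:
    # Truncate the time string to 15 characters ("HH:MM:SS.ffffff") so
    # the %f directive never sees more than six fractional-second digits.
    msgdtstring = date_str + "T" + time_str[0:15]
    return datetime.strptime(msgdtstring, "%Y/%m/%dT%H:%M:%S.%f")

# The over-precise timestamp from this issue now parses without a ValueError:
dt = parse_basestation_datetime("2019/09/10", "07:37:23.1568119043")
```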
@mikenye
Owner

mikenye commented Sep 11, 2019

Hi Nate,

I've implemented a fix and tested in my environment. As I couldn't replicate your problem, could I trouble you to test in your environment?

I've pushed a new image: mikenye/piaware-to-influx:gitcommit_9825ca7 that I'd like you to test.

Provided the container restart issue is fixed, if you could post the first 30 lines or so of the container logs (docker logs piaware2influx | head -30) that would be great. The container will need to be running for several minutes (depending on air traffic) to log any output.

If your feedback is positive, I'll merge these changes into the master branch, do a cross-platform build and push a new "latest" tagged container.

Thanks!

-Mike

@natedzl

natedzl commented Sep 11, 2019

Thanks for your quick responses Mike,

Container is still restarting. Here's the logs, let me know if I can get you any further info.

INFLUXDB_URL is set to 'http://10.1.0.4:8086'
DUMP1090_HOST is set to '10.1.15.200'
2019-09-11T09:49:23Z I! Starting Telegraf 1.11.5
2019-09-11T09:49:23Z I! Loaded inputs: http_listener
2019-09-11T09:49:23Z I! Loaded aggregators:
2019-09-11T09:49:23Z I! Loaded processors:
2019-09-11T09:49:23Z I! Loaded outputs: influxdb
2019-09-11T09:49:23Z I! Tags enabled:
2019-09-11T09:49:23Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s
2019-09-11T09:49:23Z I! Started HTTP listener service on :8186
Namespace(dump1090_port='30003', dump1090_server='10.1.15.200', telegraf_url='http://127.0.0.1:8186/write')
Piaware2Influx: CONNECT: Connecting to 10.1.15.200:30003 to receive dump1090 TCP BaseStation output data {0 msgs rx'd, 0 points tx'd}
CONNECT: Connected OK, receiving data
Piaware2Influx: AIRCRAFT [A05DD8]: Now receiving from this vessel {8 msgs rx'd, 0 points tx'd}
Piaware2Influx: CLEANUP [A05DD8]: Expiring inactive vessel {8 msgs rx'd, 0 points tx'd}
Traceback (most recent call last):
  File "/piaware2influx.py", line 365, in <module>
    D.add_data_to_buffer(data.decode("utf-8"))
  File "/piaware2influx.py", line 65, in add_data_to_buffer
    self.process_buffer()
  File "/piaware2influx.py", line 94, in process_buffer
    self.process_message(new_message)
  File "/piaware2influx.py", line 320, in process_message
    self.clean_database()
  File "/piaware2influx.py", line 101, in clean_database
    for hexident in self.database:
RuntimeError: dictionary changed size during iteration

Thanks,
Nate

@mikenye
Owner

mikenye commented Sep 11, 2019

Thanks for the output.

The good news is that the issue with the timestamp is solved.
The bad news is that you've found another bug!

I'll work on fixing this. Thanks for your patience!

mikenye added a commit that referenced this issue Sep 11, 2019
Implement a set to hold hexidents to clean, then perform the 'del' operation outside of the "for hexident in self.database:" loop.
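In sketch form (the structure here is assumed from the traceback above, not taken from the actual source):

```python
def clean_database(database: dict, is_expired) -> None:
    # Deleting entries while iterating the dict raises
    # "RuntimeError: dictionary changed size during iteration", so
    # collect the expired hexidents into a set first, then delete
    # outside the loop.
    hexidents_to_clean = {h for h in database if is_expired(database[h])}
    for hexident in hexidents_to_clean:
        del database[hexident]
```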
@mikenye
Owner

mikenye commented Sep 11, 2019

Hi Nate,

Could you please retry your testing with image mikenye/piaware-to-influx:gitcommit_6b8eb43?

Could you please post the first 30 lines or so of the container logs after it has been running for several minutes and has captured some data?

Thanks.

-Mike

@natedzl

natedzl commented Sep 11, 2019

Mike - appreciate the fast turn-around! The container loop issue appears to be solved! However, I'm not getting any data flowing into Influx. I stopped the piaware2influx container, deleted the DB from Influx and restarted the container. The database did get re-created as expected, but I'm not seeing any data, nor am I seeing anything in the logs that points to any write failures, etc.

Thoughts?

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-piaware2influx: executing...
WARNING: TZ environment variable not set
[cont-init.d] 01-piaware2influx: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
2019-09-11 16:20:57.300574500  [piaware2influx] Namespace(dump1090_port='30003', dump1090_server='10.1.15.200', telegraf_url='http://127.0.0.1:8186/write')
2019-09-11 16:20:57.300659500  [piaware2influx] CONNECT: Connecting to 10.1.15.200:30003 to receive dump1090 TCP BaseStation output data {0 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300662500  [piaware2influx] CONNECT: Connected OK, receiving data
2019-09-11 16:20:57.300664500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {0 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300665500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {0 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300667500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {1 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300668500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {1 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300669500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {2 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300671500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {2 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300672500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {3 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300684500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {3 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300685500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {4 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300687500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {4 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300688500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {5 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300690500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {5 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300691500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {6 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300696500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {6 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300697500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {7 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300699500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {7 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300700500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {8 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300702500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {8 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300703500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {9 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300704500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {9 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300705500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {10 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300706500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {10 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300708500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {11 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300709500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {11 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300710500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {12 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300712500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {12 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300719500  [piaware2influx] AIRCRAFT [A452F2]: Now receiving from this vessel {13 msgs rx'd, 0 points tx'd}
2019-09-11 16:20:57.300721500  [piaware2influx] CLEANUP [A452F2]: Expiring inactive vessel {13 msgs rx'd, 0 points tx'd}

Thanks,
Nate

@mikenye
Owner

mikenye commented Sep 11, 2019

Hi Nate,

No problems!

It's very odd that you're continually getting the "CLEANUP" messages - these should only be happening very rarely. This is likely the reason why you're not getting any data.

I'm going to put together a release with debug logging, stay tuned for this.

Thanks again for your patience.

-Mike

@mikenye
Owner

mikenye commented Sep 12, 2019

Hi Nate,

I'm wondering if you could check the date and time on your piaware/dump1090 host, and also the host running piaware2influx.

I have a hunch that what might be happening is that the clock on the piaware/dump1090 host may be wrong, which is triggering the piaware2influx script to prematurely run its clean up procedures when it receives a message with a timestamp that is older than 15 minutes.

I should have a debug container ready for you in a few hours when I get home from my day job, but if you could ensure the clocks on your hosts are synchronized and correct it would be greatly appreciated.
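To illustrate the hunch with a simplified (hypothetical) check - the 15-minute window is the one mentioned above, and the function is not the actual script code:

```python
from datetime import datetime, timedelta

def is_inactive(last_seen: datetime, now: datetime,
                max_age: timedelta = timedelta(minutes=15)) -> bool:
    # If "now" on the piaware2influx host is ahead of the dump1090
    # host's clock by more than max_age, every message looks stale and
    # is expired immediately - matching the constant CLEANUP lines.
    return (now - last_seen) > max_age

msg_time = datetime(2019, 9, 11, 16, 20, 57)
skewed_now = msg_time + timedelta(minutes=20)  # a 20-minute clock skew
```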

Thanks.

-Mike

@mikenye mikenye added the question Further information is requested label Sep 12, 2019
@mikenye
Owner

mikenye commented Sep 12, 2019

Hi Nate,

I've pushed a debug image that will generate a tonne of logs that I can go through.

Please run mikenye/piaware-to-influx:development-amd64, let it run for at least 10 minutes and make sure it's logging lots of output.

You can then stop the container and send me through the logs.

Thanks.

-Mike

@natedzl

natedzl commented Sep 12, 2019

Hi Mike,

Looking at the date command on both hosts, they are current. However, I opened a shell within the container and noticed the time was off.

In the Piaware2Influx container:
[screenshot]

I noticed the TZ parameter in this build so I applied that and voilà! Time is now current.
[screenshot]

Just waiting on some air traffic nearby and I will provide an update.

Thanks,
Nate

@natedzl

natedzl commented Sep 12, 2019

Mike, I've got data getting into influx!

@mikenye mikenye removed bug Something isn't working question Further information is requested labels Sep 12, 2019
@mikenye
Owner

mikenye commented Sep 12, 2019

This is awesome news! OK. I'll roll the changes made in this image into the master branch, and update the readme to recommend setting the TZ parameter.
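For example, the run command with TZ set would look something like this (the timezone here is just an example - set it to your own, and keep your own InfluxDB/dump1090 addresses):

```shell
docker run \
  -d \
  --name=piaware2influx \
  --restart=always \
  -e TZ="America/Chicago" \
  -e INFLUXDB_URL="http://10.1.0.4:8086" \
  -e DUMP1090_HOST="10.1.15.200" \
  mikenye/piaware-to-influx
```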

Thanks so much for helping me troubleshoot this issue and making the container better for everyone!

-Mike

mikenye added a commit that referenced this issue Sep 12, 2019
Implement s6 overlay.
Logging improvements.
@mikenye
Owner

mikenye commented Sep 12, 2019

OK, the new amd64 image is up: mikenye/piaware-to-influx:2019-09-12-amd64. I'm working through the build process on the other platforms before releasing the new latest. Feel free to check it out and let me know how you fare.

Thanks.

-Mike

@natedzl

natedzl commented Sep 12, 2019

Lookin' good!

It now runs on my Synology, no crash/restart loops and I'm getting data in Influx. All is well!

[screenshot]

[screenshot]

@mikenye mikenye mentioned this issue Sep 12, 2019
mikenye added a commit that referenced this issue Sep 12, 2019
@mikenye
Owner

mikenye commented Sep 12, 2019

Hi Nate,

That is fantastic and I'm relieved to see it working for you. :-)

I've merged the changes with the master branch, built the new image on all platforms and pushed a new mikenye/piaware-to-influx:latest image.

Please use that, and let me know if you have any issues in the future.

Thanks again for your help troubleshooting these issues. The image was overdue for some love and attention, so it's been great getting it up to scratch.

I'll now close this issue out!

Kind regards,
Mike

@mikenye mikenye closed this as completed Sep 12, 2019
@mikenye mikenye added the bug Something isn't working label Sep 12, 2019