
PTP synchronization takes 30-60 seconds #78

Closed
Pisikoll opened this issue Jul 4, 2019 · 25 comments
Labels
sensor-support Potential for ouster support timesync Ptp, ntp, gps, etc.

Comments

@Pisikoll

Pisikoll commented Jul 4, 2019

Hello

I am trying to sync my lidar with an NVIDIA Jetson Xavier using PTP. Everything works, but it takes 30-60 seconds to sync. As soon as the API call (api/v1/system/time/ptp) shows PTP info, the delay drops to 0.2 seconds (measured with the rostopic delay command).

I followed the lidar documentation on how to set up a PTP server. ptp4l is running, the lidar timestamp mode is set to TIME_FROM_PTP_1588, and Wireshark shows that PTPv2 broadcasts are working.

I would like to get correct timestamps as soon as the lidar starts sending data. Is there any way to remove or minimize the delay?

@waychin-weiqin

How do you check whether PTP is working? I used the ptp4l command and rosbag record from a terminal, but I don't see any information that shows whether the lidar is synced.
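One way to check, sketched from the sensor's HTTP API mentioned elsewhere in this thread (/api/v1/system/time/ptp). The field names follow the linuxptp/IEEE 1588 data sets shown later in this issue; the 100 µs threshold is an arbitrary illustration, not an Ouster-documented limit:

```python
# Sketch: judge PTP sync from the sensor's /api/v1/system/time/ptp response.
# In practice ptp_status would come from e.g.
#   json.load(urllib.request.urlopen("http://<sensor>/api/v1/system/time/ptp"))

def ptp_synced(ptp_status: dict, max_offset_ns: int = 100_000) -> bool:
    """The sensor is tracking a master when its PTP port is in SLAVE state
    and offset_from_master (nanoseconds) is small."""
    port_state = ptp_status.get("port_data_set", {}).get("port_state")
    offset = ptp_status.get("current_data_set", {}).get("offset_from_master", 10**12)
    return port_state == "SLAVE" and abs(offset) <= max_offset_ns

sample = {"port_data_set": {"port_state": "SLAVE"},
          "current_data_set": {"offset_from_master": 500}}
print(ptp_synced(sample))  # True: SLAVE state with a 500 ns offset
```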

@dsb6063

dsb6063 commented Jul 30, 2019 via email

@dsb6063

dsb6063 commented Jul 30, 2019 via email

@l0g1x

l0g1x commented Aug 7, 2019

I have a similar situation. I'm having trouble synchronizing my PTP hardware clock with the system clock. I'm able to set the OS1 to use PTP_1588 timestamps and see the hardware clock time on the os1_cloud_node/imu rostopic. After configuring only ptp4l, the OS1 output topic still shows a small constant delay of -0.353 seconds.

 $ journalctl -f -u ptp4l
-- Logs begin at Wed 2019-08-07 13:01:53 CDT. --
Aug 07 13:23:05 copilot-002 systemd[1]: Started Precision Time Protocol (PTP) service.
Aug 07 13:23:05 copilot-002 ptp4l[28030]: [1273.393] selected /dev/ptp0 as PTP clock
Aug 07 13:23:05 copilot-002 ptp4l[28030]: [1273.394] port 1: INITIALIZING to LISTENING on INITIALIZE
Aug 07 13:23:05 copilot-002 ptp4l[28030]: [1273.394] port 0: INITIALIZING to LISTENING on INITIALIZE
Aug 07 13:23:11 copilot-002 ptp4l[28030]: [1280.154] port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
Aug 07 13:23:11 copilot-002 ptp4l[28030]: [1280.154] selected best master clock 78d004.fffe.260218
Aug 07 13:23:11 copilot-002 ptp4l[28030]: [1280.154] assuming the grand master role
^C

 $ rostopic delay /os1_cloud_node/imu
subscribed to [/os1_cloud_node/imu]
average delay: -0.353
	min: -0.353s max: -0.351s std dev: 0.00024s window: 96
average delay: -0.353
	min: -0.353s max: -0.351s std dev: 0.00030s window: 189
average delay: -0.353
	min: -0.353s max: -0.351s std dev: 0.00030s window: 288
average delay: -0.353
	min: -0.353s max: -0.351s std dev: 0.00030s window: 382

Then I tried using phc2sys according to the software user manual, which is supposed to sync the hardware clock with the system clock. But after creating /etc/systemd/system/phc2sys.service.d/override.conf, journalctl starts to show a growing negative delay, working its way toward -36 seconds. I'm aware the 36 seconds is the difference between atomic time (TAI) and UTC, so I tried passing the time offset parameter to phc2sys with -O 36, but it did not help.
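For reference, the drop-in described above looks like the sketch below, matching the ExecStart visible in the phc2sys status output further down this comment:

```ini
# /etc/systemd/system/phc2sys.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/phc2sys -w -s CLOCK_REALTIME -c enp0s31f6 -O 36
```

(Later comments in this thread suggest passing -O 0 instead, since the sensor appears to ignore the TAI-UTC offset.)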

The PTP appendix from the software user guide is what I followed; I also separately went through the Red Hat chapter (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/ch-configuring_ptp_using_ptp4l#s2-Advantages_of_PTP) about setting up PTP, and have tried the suggestions in various GitHub issues:
#1
#48
ethz-asl#1
#78

Output from ethtool:

 $ ethtool -T enp0s31f6
Time stamping parameters for enp0s31f6:
Capabilities:
    hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
    software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
    hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
    software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
    software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
    hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
    off                   (HWTSTAMP_TX_OFF)
    on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
    none                  (HWTSTAMP_FILTER_NONE)
    all                   (HWTSTAMP_FILTER_ALL)
    ptpv1-l4-sync         (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
    ptpv1-l4-delay-req    (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
    ptpv2-l4-sync         (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
    ptpv2-l4-delay-req    (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
    ptpv2-l2-sync         (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
    ptpv2-l2-delay-req    (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
    ptpv2-event           (HWTSTAMP_FILTER_PTP_V2_EVENT)
    ptpv2-sync            (HWTSTAMP_FILTER_PTP_V2_SYNC)
    ptpv2-delay-req       (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)

Output from ptp4l:

 $ sudo systemctl status ptp4l
● ptp4l.service - Precision Time Protocol (PTP) service
   Loaded: loaded (/lib/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/ptp4l.service.d
           └─override.conf
   Active: active (running) since Wed 2019-08-07 13:36:39 CDT; 19min ago
 Main PID: 14638 (ptp4l)
   CGroup: /system.slice/ptp4l.service
           └─14638 /usr/sbin/ptp4l -f /etc/linuxptp/ptp4l.conf

Aug 07 13:36:39 copilot-002 systemd[1]: Started Precision Time Protocol (PTP) service.
Aug 07 13:36:39 copilot-002 ptp4l[14638]: [2087.392] selected /dev/ptp0 as PTP clock
Aug 07 13:36:39 copilot-002 ptp4l[14638]: [2087.393] port 1: INITIALIZING to LISTENING on INITIALIZE
Aug 07 13:36:39 copilot-002 ptp4l[14638]: [2087.393] port 0: INITIALIZING to LISTENING on INITIALIZE
Aug 07 13:36:45 copilot-002 ptp4l[14638]: [2093.981] port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
Aug 07 13:36:45 copilot-002 ptp4l[14638]: [2093.981] selected best master clock 78d004.fffe.260218
Aug 07 13:36:45 copilot-002 ptp4l[14638]: [2093.981] assuming the grand master role

Output from phc2sys:

 $ sudo systemctl status phc2sys
● phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
   Loaded: loaded (/lib/systemd/system/phc2sys.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/phc2sys.service.d
           └─override.conf
   Active: active (running) since Wed 2019-08-07 13:48:53 CDT; 4min 47s ago
 Main PID: 25163 (phc2sys)
   CGroup: /system.slice/phc2sys.service
           └─25163 /usr/sbin/phc2sys -w -s CLOCK_REALTIME -c enp0s31f6 -O 36

Aug 07 13:53:31 copilot-002 phc2sys[25163]: [3099.782] sys offset        14 s2 freq  +15216 delay   2375
Aug 07 13:53:32 copilot-002 phc2sys[25163]: [3100.782] sys offset      -147 s2 freq  +15059 delay   2625
Aug 07 13:53:33 copilot-002 phc2sys[25163]: [3101.782] sys offset        72 s2 freq  +15234 delay   4375
Aug 07 13:53:34 copilot-002 phc2sys[25163]: [3102.782] sys offset        43 s2 freq  +15226 delay   3000
Aug 07 13:53:35 copilot-002 phc2sys[25163]: [3103.782] sys offset      -240 s2 freq  +14956 delay   4875
Aug 07 13:53:36 copilot-002 phc2sys[25163]: [3104.782] sys offset       158 s2 freq  +15282 delay   4125
Aug 07 13:53:37 copilot-002 phc2sys[25163]: [3105.782] sys offset       246 s2 freq  +15418 delay   2499
Aug 07 13:53:38 copilot-002 phc2sys[25163]: [3106.783] sys offset       -98 s2 freq  +15148 delay   4625
Aug 07 13:53:39 copilot-002 phc2sys[25163]: [3107.783] sys offset      -163 s2 freq  +15053 delay   4000
Aug 07 13:53:40 copilot-002 phc2sys[25163]: [3108.783] sys offset       167 s2 freq  +15334 delay   3625

Output from chrony:

 $ sudo systemctl status chrony
● chrony.service - LSB: Controls chronyd NTP time daemon
   Loaded: loaded (/etc/init.d/chrony; bad; vendor preset: enabled)
   Active: active (running) since Wed 2019-08-07 13:32:48 CDT; 6s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 22808 ExecStop=/etc/init.d/chrony stop (code=exited, status=0/SUCCESS)
  Process: 22817 ExecStart=/etc/init.d/chrony start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/chrony.service
           └─22824 /usr/sbin/chronyd

Aug 07 13:32:46 copilot-002 chronyd[22824]: chronyd version 2.1.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -DEBUG +ASYNCDNS +IPV6 +SECHASH)
Aug 07 13:32:46 copilot-002 chronyd[22824]: Setting filter length for ptp to 2
Aug 07 13:32:46 copilot-002 chronyd[22824]: Frequency 16.086 +/- 0.390 ppm read from /var/lib/chrony/chrony.drift
Aug 07 13:32:48 copilot-002 chronyd[22824]: Source 2001:470:0:2c8::2 online
Aug 07 13:32:48 copilot-002 chronyd[22824]: Source 2001:559:2be:3::1001 online
Aug 07 13:32:48 copilot-002 chronyd[22824]: Source 204.2.134.163 online
Aug 07 13:32:48 copilot-002 chronyd[22824]: Source 2604:880:398:371::1 online
Aug 07 13:32:48 copilot-002 chrony[22817]: chronyd is running and online.
Aug 07 13:32:48 copilot-002 systemd[1]: Started LSB: Controls chronyd NTP time daemon.
Aug 07 13:32:51 copilot-002 chronyd[22824]: Selected source ptp

 $ chronyc tracking
Reference ID    : 112.116.112.0 (ptp)
Stratum         : 1
Ref time (UTC)  : Wed Aug  7 18:33:30 2019
System time     : 0.000000026 seconds slow of NTP time
Last offset     : -0.000000013 seconds
RMS offset      : 0.000000145 seconds
Frequency       : 15.938 ppm fast
Residual freq   : -0.000 ppm
Skew            : 0.020 ppm
Root delay      : 0.000000 seconds
Root dispersion : 0.000004 seconds
Update interval : 2.0 seconds
Leap status     : Normal

Output from /os1_cloud_node/imu (36 seconds ahead of a separate topic, e.g. /imu/data from a separate sensor):

header:
  seq: 1635
  stamp:
    secs: 1565160665      (36 seconds ahead of ROS time for all other nodes using system time)
    nsecs: 469307754
  frame_id: "os1_imu"
orientation:
  x: 0.0
  y: 0.0
  z: 0.0
  w: 0.0
orientation_covariance: [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]
angular_velocity:
  x: -0.00532632218016
  y: -0.00532632218016
  z: -0.0271642431188
angular_velocity_covariance: [0.0006, 0.0, 0.0, 0.0, 0.0006, 0.0, 0.0, 0.0, 0.0006]
linear_acceleration:
  x: -0.586579406738
  y: -0.114921679688
  z: 9.69172832031
linear_acceleration_covariance: [0.01, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.01]

Does anyone know why the delay starts to grow after the use of phc2sys? Thanks in advance.

@l0g1x

l0g1x commented Aug 7, 2019

Additionally, if I revert (remove) all the changes made for phc2sys, phc2shm, and chrony, keep only the additions for ptp4l, and then power cycle the sensor, the time delay starts off at almost 0, grows, and then stabilizes at -0.374 seconds between the OS1 and system time.

 $ rostopic delay /os1_cloud_node/imu
subscribed to [/os1_cloud_node/imu]
average delay: 0.095
	min: 0.088s max: 0.102s std dev: 0.00390s window: 41
average delay: 0.077
	min: 0.051s max: 0.102s std dev: 0.01446s window: 148
average delay: 0.058
	min: 0.015s max: 0.102s std dev: 0.02530s window: 252
average delay: 0.040
	min: -0.023s max: 0.102s std dev: 0.03614s window: 356
average delay: 0.022
	min: -0.060s max: 0.102s std dev: 0.04688s window: 460
average delay: 0.003
	min: -0.097s max: 0.102s std dev: 0.05751s window: 563
average delay: -0.015
	min: -0.134s max: 0.102s std dev: 0.06823s window: 667
average delay: -0.034
	min: -0.171s max: 0.102s std dev: 0.07894s window: 771
average delay: -0.052
	min: -0.208s max: 0.102s std dev: 0.08966s window: 875
average delay: -0.071
	min: -0.245s max: 0.102s std dev: 0.10039s window: 979
average delay: -0.089
	min: -0.281s max: 0.102s std dev: 0.11099s window: 1082
average delay: -0.108
	min: -0.318s max: 0.102s std dev: 0.12168s window: 1186
...
...
average delay: -0.374
	min: -0.374s max: -0.373s std dev: 0.00035s window: 95
average delay: -0.374
	min: -0.374s max: -0.373s std dev: 0.00037s window: 195
average delay: -0.374
	min: -0.374s max: -0.373s std dev: 0.00038s window: 296

@asherikov

@l0g1x, apparently the Ouster PTP client ignores the TAI-UTC offset; you can force it to zero by passing -O 0 to phc2sys.

@Pisikoll, we have the same issue but no solution. I guess the best option is to contact Ouster support directly.

@tuandle

tuandle commented Aug 8, 2019

@l0g1x I have the same problem. Enabling phc2sys introduced the 36-second delay. I tried the -O 0 option but it does not have any effect either. Do you have any other suggestions @asherikov?
Using only ptp4l I have a stable 3 ms delay between the OS1 and system time.
average delay: 0.003 min: 0.001s max: 0.052s std dev: 0.00303s window: 50000

@asherikov

  1. command /usr/sbin/phc2sys -c ${INTERFACE} -s CLOCK_REALTIME -w -O 0
  2. version phc2sys -v 1.8
  3. Convergence takes time, you can compare system and network card clocks with sudo phc_ctl eno1 get && date +%s.%N

@l0g1x

l0g1x commented Aug 8, 2019

@asherikov Interesting. The /api/v1/system/time/ptp query to the device does show 36 seconds for current_utc_offset.

time_properties_data_set |  
-- | --
leap61 | 0
time_traceable | 0
time_source | 160
current_utc_offset_valid | 0
frequency_traceable | 0
ptp_timescale | 1
leap59 | 0
current_utc_offset | 36

Seems like the offset compensation would (if it happens at all) be at the firmware level. Can anyone from Ouster comment on this? Also, I'm on 16.04 with the 4.15.0-55 kernel. I just realized my phc2sys version is 1.6, so I will try updating it and see what happens.

@tuandle I also tried the -O 0 option, which doesn't affect anything. A few questions:

  1. What did your configuration look like for just ptp4l without phc2sys? Was it run from the command line, or did you modify/add the service config file for ptp4l?
  2. Did you notice any difference in offset with or without an internet connection after a power cycle of your PC?
  3. Any idea how to get the time sync to converge faster? I.e. modifying the

For what I am trying to achieve, the timestamps that all components on my system use need to be as close to each other as possible (i.e. for SLAM), and they can differ from the time reported by NTP servers, e.g. when the vehicle is on the road without an internet connection. What I'm additionally uncertain about is whether the configuration of chrony using ptp as its reference source, with phc2shm serving the ptp source to chrony via shared memory, is needed for my application.
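As a sanity check on the numbers above: the 36-second discrepancy is exactly what you'd expect if the sensor stamps packets on the TAI timescale while the host clock is UTC (note: IERS has TAI-UTC at 37 s since 2017, while the sensor here reports current_utc_offset = 36). A minimal sketch, using the IMU header stamp from my earlier comment and a hypothetical host clock reading:

```python
def tai_to_utc(tai_seconds: float, current_utc_offset: int) -> float:
    """PTP timestamps are on the TAI timescale; UTC = TAI - offset."""
    return tai_seconds - current_utc_offset

sensor_stamp_tai = 1565160665.469307754  # os1_cloud_node/imu header stamp above
host_clock_utc = 1565160629.47           # hypothetical system clock at receive time

print(sensor_stamp_tai - host_clock_utc)                  # ~36 s apparent offset
print(tai_to_utc(sensor_stamp_tai, 36) - host_clock_utc)  # ~0 after compensation
```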

@l0g1x

l0g1x commented Aug 8, 2019

1. command `/usr/sbin/phc2sys -c ${INTERFACE} -s CLOCK_REALTIME -w -O 0`

2. version `phc2sys -v 1.8`

3. Convergence takes time, you can compare system and network card clocks with `sudo phc_ctl eno1 get && date +%s.%N`

@asherikov does v1.8 allow for both the -w and -O arguments to be used together? v1.6 doesn't seem to allow it

@asherikov

This is how I use it. AFAIK, newer ptp4l (>=1.9) also allows setting the offset to zero.
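For reference, in newer linuxptp releases the announced UTC offset can also be pinned in the ptp4l config file (option name per the ptp4l man page; a sketch, not verified against the versions in this thread):

```ini
# /etc/linuxptp/ptp4l.conf (excerpt)
[global]
utc_offset    0
```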

@tuandle

tuandle commented Aug 9, 2019

@l0g1x

  1. What did your configuration look like for just ptp4l without phc2sys? Was it run from the command line or did you modify/add the service config file for ptp4l?
    To get the 3 ms delay I don't have phc2sys enabled. The output of ptp4l is:
ptp4l.service - Precision Time Protocol (PTP) service
   Loaded: loaded (/etc/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/ptp4l.service.d
           └─override.conf
   Active: active (running) since to. 2019-08-08 12:53:01 CEST; 20h ago
 Main PID: 12562 (ptp4l)
   CGroup: /system.slice/ptp4l.service
           └─12562 /usr/sbin/ptp4l -f /etc/linuxptp/ptp4l.conf

aug. 09 09:30:18 tuan ptp4l[12562]: [81948.449] selected best master clock 94c691.fffe.140fbf
aug. 09 09:30:18 tuan ptp4l[12562]: [81948.449] assuming the grand master role
aug. 09 09:30:19 tuan ptp4l[12562]: [81949.450] timed out while polling for tx timestamp
aug. 09 09:30:19 tuan ptp4l[12562]: [81949.450] increasing tx_timestamp_timeout may correct this issue, but it is l
aug. 09 09:30:19 tuan ptp4l[12562]: [81949.450] port 1: send sync failed
aug. 09 09:30:19 tuan ptp4l[12562]: [81949.450] port 1: MASTER to FAULTY on FAULT_DETECTED (FT_UNSPECIFIED)
aug. 09 09:30:35 tuan ptp4l[12562]: [81965.451] port 1: FAULTY to LISTENING on FAULT_CLEARED
aug. 09 09:30:42 tuan ptp4l[12562]: [81972.136] port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
aug. 09 09:30:42 tuan ptp4l[12562]: [81972.136] selected best master clock 94c691.fffe.140fbf
aug. 09 09:30:42 tuan ptp4l[12562]: [81972.136] assuming the grand master role

I followed the OS1 software guide to create the override.conf for both ptp4l and phc2sys.

  2. Did you notice any difference in offset with or without an internet connection after a power cycle of your PC?
    Now that you ask: the PC the OS1 connects to does not have an internet connection. When I connect to a network and power cycle the OS1 and the PC, the delay is not 3 ms but -1 ms. I have no idea why that happens. The report from api/v1/system/time/ptp says the offset is 36: "current_utc_offset": 36.

  3. Any idea how to get the time sync to converge faster?
    Honestly, from the beginning I naively hoped that the OS1 hardware timesync capability was reliable enough to use as-is.

@Ashik19

Ashik19 commented Aug 19, 2019

Hi @l0g1x

Can you tell me how to check the offset between the master and slave time? I have set up ptp4l and it's working fine, but I need a way to verify the offset.

Thanks

@daniel-dsouza
Contributor

@Dinatorehanova

@daniel-dsouza ,

Thanks for your kind reply.

When I run the command you mentioned, it gives me the following error:

(screenshot of a "Permission denied" error)

Can you please guide me on this ?

Thanks

@daniel-dsouza
Contributor

@Dinatorehanova "Permission denied" probably means that you need to run that command as root. Try prefacing it with sudo.

@mrgransky

My question may sound really stupid; my apologies in advance!

But to synchronize with an external PTP master, did you use a physical device such as the Trimble Thunderbolt PTP GM100 described in the software user guide?

@daniel-dsouza
Contributor

@mrgransky nope, I am using a computer with an Intel LAN controller.

@daniel-dsouza
Contributor

daniel-dsouza commented Jan 14, 2020 via email

@dmitrig dmitrig added the sensor-support Potential for ouster support label Dec 7, 2020
@dmitrig dmitrig added the timesync Ptp, ntp, gps, etc. label Dec 7, 2020
@afridi26

afridi26 commented Dec 15, 2020

After setting up PTP following the official documentation, I am getting the following output for this command:

sudo pmc 'get PARENT_DATA_SET' 'get CURRENT_DATA_SET' 'get PORT_DATA_SET' 'get TIME_STATUS_NP' -i eth0

(screenshot of the pmc output)

and the output from the HTTP server (http://172.17.3.102/api/v1/time) is as follows:

{
  "sensor": {
    "timestamp": {
      "mode": "TIME_FROM_PTP_1588",
      "time_options": {
        "internal_osc": 30,
        "ptp_1588": 1608034315,
        "sync_pulse_in": 0
      },
      "time": 1608034315.195737
    },
    "multipurpose_io": {
      "sync_pulse_out": {
        "angle_deg": 360,
        "pulse_width_ms": 10,
        "polarity": "ACTIVE_HIGH",
        "frequency_hz": 1
      },
      "mode": "OFF",
      "nmea": {
        "polarity": "ACTIVE_HIGH",
        "baud_rate": "BAUD_9600",
        "ignore_valid_char": 0,
        "diagnostics": {
          "decoding": {
            "last_read_message": "",
            "not_valid_count": 0,
            "utc_decoded_count": 0,
            "date_decoded_count": 0
          },
          "io_checks": {
            "start_char_count": 0,
            "char_count": 0,
            "bit_count_unfiltered": 0,
            "bit_count": 1
          }
        },
        "leap_seconds": 0,
        "locked": 0
      }
    },
    "sync_pulse_in": {
      "polarity": "ACTIVE_HIGH",
      "locked": 0,
      "diagnostics": {
        "count": 0,
        "last_period_nsec": 0,
        "count_unfiltered": 1
      }
    }
  },
  "system": {
    "realtime": 38.366773332,
    "monotonic": 38.366736465,
    "tracking": {
      "rms_offset": 0,
      "leap_status": "not synchronised",
      "update_interval": 0,
      "residual_frequency": 0,
      "frequency": 28.098,
      "root_dispersion": 1,
      "remote_host": "",
      "root_delay": 1,
      "skew": 0,
      "system_time_offset": 0,
      "stratum": 0,
      "reference_id": "00000000",
      "ref_time_utc": 0,
      "last_offset": 0
    }
  },
  "ptp": {
    "time_properties_data_set": {
      "leap59": 0,
      "leap61": 0,
      "ptp_timescale": 1,
      "time_traceable": 0,
      "time_source": 160,
      "current_utc_offset_valid": 0,
      "frequency_traceable": 0,
      "current_utc_offset": 36
    },
    "port_data_set": {
      "port_state": "SLAVE",
      "log_min_pdelay_req_interval": 0,
      "announce_receipt_timeout": 3,
      "log_sync_interval": 0,
      "port_identity": "bc0fa7.fffe.000867-1",
      "log_min_delay_req_interval": 0,
      "version_number": 2,
      "peer_mean_path_delay": 0,
      "delay_mechanism": 1,
      "log_announce_interval": 1
    },
    "time_status_np": {
      "master_offset": -115678291,
      "last_gm_phase_change": "0x0000'0000000000000000.0000",
      "scaled_last_gm_phase_change": 0,
      "ingress_time": 1608034314486962000,
      "gm_identity": "00044b.fffe.e59fc7",
      "gm_present": true,
      "cumulative_scaled_rate_offset": 0,
      "gm_time_base_indicator": 0
    },
    "profile": "default",
    "current_data_set": {
      "offset_from_master": -115678291,
      "steps_removed": 1,
      "mean_path_delay": -4109495
    },
    "parent_data_set": {
      "grandmaster_identity": "00044b.fffe.e59fc7",
      "parent_port_identity": "00044b.fffe.e59fc7-1",
      "observed_parent_clock_phase_change_rate": 2147483647,
      "gm_offset_scaled_log_variance": 65535,
      "gm_clock_accuracy": 254,
      "grandmaster_priority2": 128,
      "gm_clock_class": 128,
      "grandmaster_priority1": 128,
      "parent_stats": 0,
      "observed_parent_offset_scaled_log_variance": 65535
    }
  }
}

I have the same question @daniel-dsouza: how do I verify that the PTP synchronization is working correctly?
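A rough reading of the JSON above (a sketch; the interpretation is mine, not from Ouster docs): converting the nanosecond fields to seconds shows the port is in SLAVE state but far from converged, and the negative mean_path_delay is itself a warning sign, since a real propagation delay cannot be negative:

```python
# Values copied from current_data_set in the API response above.
offset_from_master_ns = -115678291
mean_path_delay_ns = -4109495

offset_s = offset_from_master_ns / 1e9   # about -0.116 s: not locked yet
path_delay_s = mean_path_delay_ns / 1e9  # negative: timestamps on the master
                                         # side are likely wrong or asymmetric
print(f"offset: {offset_s:.6f} s, path delay: {path_delay_s:.6f} s")
```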

@Krishtof-Korda
Collaborator

Krishtof-Korda commented Jan 7, 2021

Hi all,

We have a useful little tool called timestomper.py, which is the best and most accurate way to verify your sensor's time sync with PTP.

The code for timestomper.py is below:

#!/usr/bin/env python3
#
# Probably need to run with PYTHONUNBUFFERED=1 ./timestomper.py
#
import json
import logging
import os
import socket
import socketserver
import struct
import sys
import threading
import time

DEBUG = os.getenv('DEBUG', False)
PORT_LIDAR = 7502
PORT_IMU = 7503

SO_TIMESTAMPNS = 35
SOF_TIMESTAMPING_TX_HARDWARE = 1 
SOF_TIMESTAMPING_RX_HARDWARE = 4

# PTP uses the TAI offset in all well-formed profiles.  time.time()
# returns a UNIX timestamp which has leap seconds, compensate here.
TAI_OFFSET = 37

shutdown_evt = threading.Event()

def get_packet_timestamp(ancdata):
    if len(ancdata) and ancdata[0][1] == SO_TIMESTAMPNS:
        secs, ns = struct.unpack('qq', ancdata[0][2])
        return secs + ns/1E9
    else:
        logging.warning('No socket timestamp')
        return time.time()

#
# This server is single-threaded and low-performance, handling each request
# should be quick.
#
# It's trivial to make this multi-threaded with the Python ThreadingMixIn.
#
# A new object is created for each request, so try to loop over the socket to
# avoid new stuff.
#
# The ThreadingMixIn isn't needed, as there should only be one sender at a
# time, or the output will be nonsensical.

#class LidarHandler(socketserver.ThreadingMixIn, socketserver.BaseRequestHandler):
class LidarHandler(socketserver.BaseRequestHandler):

    def handle(self):

        sock = self.request[1]

        while not shutdown_evt.is_set():
            data, ancdata, flags, address = sock.recvmsg(65535, 1024)

            now = get_packet_timestamp(ancdata)

            t = struct.unpack('q', data[0:8])[0]
            t_float = t/1E9 - TAI_OFFSET

            diff = now - t_float

            dout = { 'lidar': { 'local_rx': now, 'sensor_lidar':  { 'timestamp': t_float, 'diff': diff } } }

            try:
                print(json.dumps(dout))
            except (BrokenPipeError):
                shutdown_evt.set()

            if diff > 0.01:
                logging.warning(f"Big delta, exiting: {dout}")
                shutdown_evt.set()

#class IMUHandler(socketserver.ThreadingMixIn, socketserver.BaseRequestHandler):
class IMUHandler(socketserver.BaseRequestHandler):

    def handle(self):
        sock = self.request[1]
        while not shutdown_evt.is_set():
            data, ancdata, flags, address = sock.recvmsg(65535, 1024)

            now = get_packet_timestamp(ancdata)

            t = struct.unpack('qqq', data[0:24])
            t_float = [i/1E9 - TAI_OFFSET for i in t]

            diff = [now - t for t in t_float]

            dout = {'imu': 
                    { 'local_rx': now, 
                        'sensor_mono':  { 'timestamp': t_float[0], 'diff': diff[0] },
                        'sensor_accel': { 'timestamp': t_float[1], 'diff': diff[1] },
                        'sensor_gyro':  { 'timestamp': t_float[2], 'diff': diff[2] },
                    }
                }

            try:
                print(json.dumps(dout))
            except (BrokenPipeError):
                shutdown_evt.set()

            for d in diff[1:]:
                if d > 0.01:
                    logging.warning(f"Big delta, exiting: {dout}")
                    shutdown_evt.set()

if __name__ == '__main__':
    level = logging.DEBUG if DEBUG else logging.INFO
    logging.basicConfig(stream=sys.stderr, level=level)

    lidar = socketserver.UDPServer(('', PORT_LIDAR), LidarHandler)
    lidar.socket.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPNS, SOF_TIMESTAMPING_TX_HARDWARE)
    lidar.socket.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPNS, SOF_TIMESTAMPING_RX_HARDWARE)

    lidar.thread = threading.Thread(target=lidar.serve_forever)
    # Avoid doing hacking threads, we can clean these up properly.
    #lidar_thread.daemon = True

    imu = socketserver.UDPServer(('', PORT_IMU), IMUHandler)
    imu.socket.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPNS, SOF_TIMESTAMPING_TX_HARDWARE)
    imu.socket.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPNS, SOF_TIMESTAMPING_RX_HARDWARE)
    imu.thread = threading.Thread(target=imu.serve_forever)
    # Avoid doing hacking threads, we can clean these up properly.
    #imu_thread.daemon = True

    servers = [lidar, imu]

    [s.thread.start() for s in servers]

    try:
        shutdown_evt.wait()
    except (KeyboardInterrupt, SystemExit):
        shutdown_evt.set()
    finally:
        print("\nShutting down... ", file=sys.stderr, end='')
        [s.shutdown() for s in servers]

        print("Joining... ", file=sys.stderr, end='')
        [s.thread.join() for s in servers]

        print("Please play again.", file=sys.stderr)

Regarding leap seconds in ptp4l and ROS please reference #144 (comment)

Thanks!

@Krishtof-Korda
Collaborator

Regarding converging PTP faster, check out How to enable PTP Profiles on Ouster Sensor

The "automotive-slave" profile converges much faster since it does away with the BMCA and has an 8x faster PID.

@mfatihkoc

Hi,

I just want to ask: I am trying to have one server and 2 clients communicate over UDP to send a file, and I need them synchronized. I want to synchronize 3 virtual machines (2 clients and 1 server) without a cable connection, but to start I will do wired PTP synchronization. The times on the client PCs must be as close to each other as possible (low latency).
I want to do the synchronization, but I do not understand how to implement ptp4l and phc2sys. I only have 3 PCs and one router, wired together for the initial synchronization. Is it possible to make this system wireless as well? That is my next question.
Since I lack knowledge in this field, do you have any recommendations/suggestions? Have any of you implemented PTP time synchronization?

I look forward to hearing from you.

@mfatihkoc

Hi @l0g1x

Can you tell me how to check the offset between the master and slave time? I have set up ptp4l and it's working fine, but I need a way to verify the offset.

Thanks

Do I need to run ptp4l and phc2sys in different terminals? For example, since ptp4l keeps running and never stops, do I need to open a new terminal window to run phc2sys there (and likewise pmc)?

I wonder how to use ptp4l and phc2sys together. Since hardware timestamping is more accurate, I want to use both. If I understand correctly, hardware timestamping requires phc2sys as well, but how do I run the two together?
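You don't need separate terminals if both daemons run as services; a sketch using systemd drop-ins like the ones earlier in this thread (the interface name eth0 is a placeholder, and the phc2sys source/sink direction shown is for a slave machine syncing its system clock from the NIC's hardware clock; reverse -s/-c on the machine acting as master):

```ini
# /etc/systemd/system/ptp4l.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/ptp4l -f /etc/linuxptp/ptp4l.conf -i eth0

# /etc/systemd/system/phc2sys.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/phc2sys -w -s eth0 -c CLOCK_REALTIME
```

Then sudo systemctl enable --now ptp4l phc2sys keeps both running in the background, and pmc can be run from any terminal while they run.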

@mfatihkoc


I have 3 PCs wire-connected through 1 router for PTP clock synchronization. I do not understand why 2 of them behave as grandmasters (servers) and the other one as a slave. Shouldn't there be 1 grandmaster and 2 clients (ordinary clocks)? Does the router behave as a boundary clock? I want to synchronize the 3 PCs.
