
Verify lengths of waits recorded correctly #5

Closed
philipstarkey opened this issue Feb 7, 2021 · 34 comments
Labels
help wanted Extra attention is needed

Comments

@philipstarkey
Member

The length of waits should be recorded by the PrawnBlaster (to +/- 1 clock cycle accuracy).

This should be checked (while noting that the values returned by getwaits are actually the remainder of the timeout, not the length of the wait).
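
Since getwaits reports the remainder of the timeout rather than the wait itself, converting a reading back to a wait length is a one-line subtraction. A minimal sketch, assuming a 100 MHz clock (10 ns cycles); the function name is illustrative:

```python
def wait_length_ns(timeout_cycles, remainder_cycles, clock_hz=100_000_000):
    """Convert a getwaits remainder into the measured wait length (ns).

    The PrawnBlaster counts the timeout down during a wait, so the value
    reported is what was left of the timeout when the trigger arrived.
    """
    cycle_ns = 1e9 / clock_hz  # 10 ns per cycle at 100 MHz
    return (timeout_cycles - remainder_cycles) * cycle_ns

# e.g. a 1000-cycle timeout reporting 508 cycles remaining
# implies a (1000 - 508) * 10 ns = 4920 ns wait
```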

@philipstarkey philipstarkey added the help wanted Extra attention is needed label Feb 7, 2021
@dihm
Collaborator

dihm commented Mar 3, 2021

I have to wrap up my day now, but I have some very preliminary results on waits with timeouts. I'm using a standard pulseblaster to send a hwstart trigger pulse then a retriggering pulse at a hard-coded time. I manually program the prawnblaster to be waiting at approximately that time. It's janky because I have to manually start the prawnblaster, timetagger, and BLACS, but it appears to be OK for now.

I'm observing something where I can only specify a wait timeout up to ~1ms. Anything longer caps there. Anything shorter times out at the expected time.

I'm also having trouble getting the Prawnblaster to retrigger once it has hit a wait. This may very well be user error given the janky setup.

@dihm
Collaborator

dihm commented Mar 4, 2021

Now that I'm not rushed, I have sorted things out.

I'm not seeing the first issue anymore. Suspect I just had some stale data in the analysis somewhere. I can observe correct timing of timeout retriggering, and the wait measurement reports 0 as expected.

The second issue was definitely user error in programming the pulseblaster triggers. I have it sorted out now. The PB sends an initial trigger (for the hwstart) then a second trigger 15us later (each trigger is 150ns long thanks to the min trigger length of the PB itself). The PrawnBlaster instruction set I'm working with is quite simple.

set 0 100 4
set 1 50 2
set 2 1000 0
set 3 50 3
set 4 0 0

Here are three representative time traces taken with the timetagger. I've also set a second PB channel that pulses with the Prawnblaster triggers into the timetagger for reference (green traces). All times are referenced to the rising edge of the first PB trigger.
SingleWaitTiming_HWStartPB

Here is the data for the prawnblaster edges from this plot. Including the data for the reference PB channel was annoying, but I've checked that the wait retriggering edge is within +/-0.1ns of 15us after the first pulse rising edge.

Three things to note:

  1. It appears the hwstart results in a 9 clock cycle delay from the triggering rising edge. I assume this is due to 3 clock cycles for detecting the pulse and a further 6 to program the first output. There is also about 5ns of timing jitter, no doubt due to the trigger pulse starting during a clock cycle.
  2. The wait retrigger appears to have a 3 cycle delay, as expected (once the 9 cycle delay from the first pulse is removed). Similar 5ns timing jitter is observed.
  3. The reported wait is a rock-solid 508 (giving a 4920 ns delay, 5000 expected). This time tracks with small changes in the PB trigger timing within +/-1 clock cycle. The rest of the difference comes from the hwstart delay causing a slip relative to the PB timing that sets the retrigger pulse.

So I'd say that the timeout waits are working well and the timing seems accurate at first blush. How rigorously do we want to test the wait timing? I could probably dream up a more automated system but it would take some effort and different hardware than what I have right now.

@dihm
Collaborator

dihm commented Mar 4, 2021

I've also just done some testing with indefinite waits. They appear to work with 150ns retrigger pulse length just fine. The only thing is that I am getting a significantly longer retrigger delay (~180ns) than before. It's probably as expected, but I can't figure out the exact numerology.

I'm also finding that getwait 0 is returning a number that is not zero (consistently 999508). getwait 1 does return 0. My slightly modified instruction set is

set 0 100 4
set 1 50 2
set 2 10 0
set 3 10 0
set 4 50 3
set 5 0 0

Edit: fixed sequence instructions.
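
Purely as a numerical observation (the countdown starting value below is an assumption for illustration, not a firmware constant established in this thread): 999508 is what a countdown from 1,000,000 would report after a 492-cycle wait, which is the same wait length implied by the earlier finite-timeout reading of 508.

```python
# Hypothetical: if getwait 0 were reporting the remainder of a
# 1,000,000-cycle countdown instead of 0, the observed value would
# imply the same 492-cycle wait seen in the finite-timeout test.
# ASSUMED_COUNTDOWN_START is an assumption, not a firmware constant.
ASSUMED_COUNTDOWN_START = 1_000_000
observed = 999_508
implied_wait_cycles = ASSUMED_COUNTDOWN_START - observed
print(implied_wait_cycles)  # 492
```

This may be pure coincidence, but it is the kind of off-by-a-constant pattern worth ruling out.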

@philipstarkey
Member Author

That's actually a bit confusing. I would have expected the hwstart (and indefinite wait) to be 2-3 clock cycles shorter than the retrigger (when not an indefinite wait). And the jitter on the retrigger (not indefinite wait) should be +/- 10ns (one assembly instruction)! So I'm surprised you're seeing the wait retrigger be shorter than the hwstart.

I can't reproduce the odd wait length you are seeing with indefinite waits either. When I ran the program you listed (with fixed instruction numbers since you are missing instruction address 2!), I got a wait length of 0 (or 10 if I continually held the retrigger pin high which is as expected). So not really sure what is going on there either!

When you said the retrigger delay was 180ns for an indefinite wait, what was that measured from? Once it's in the second wait (after the first wait timeout has elapsed) it should be as fast as the hwstart (it's using the same bit of assembly code). Come to think of it, though, diagnosing indefinite waits is probably not particularly important, as I've just realised labscript requires a finite wait timeout anyway!

It would be good to get a definitive explanation for the hwstart and retrigger delays though as labscript requires that those delays are the same, which means I need to work out where to pad the assembly code to make them the same length!

Ohhhh....Just realised you are using a pulseblaster to do this, but you said the pulseblaster is running from its crystal (and not externally referenced)? Would that maybe explain why your measured delays don't align with what I expected?

@philipstarkey
Member Author

Actually, looking at the raw data, I think it might be fine? The PrawnBlaster takes 100ns to respond to the first hardware start trigger. The second trigger occurs at 15us and the PrawnBlaster responds ~120ns later. I don't think you need to subtract off the initial hwstart trigger delay here because it was absorbed by the wait (and reported wait length). This puts the retrigger as 2 clock cycles longer than the hwstart. I suspect if you set the PB retrigger time to be 10ns later then we would see the retrigger time be 130ns (3 clock cycles longer - but not sure if the PulseBlaster has the resolution to check this).

I have a suspicion that the wait length is out by 20ns (in that I have an off-by-one error somewhere in my code). It would make more sense for it to be reported as 510 (4900ns wait) so that the initial start trigger + wait length was equal to the expected wait length. I'll see if I can figure out a logical explanation for this based on the assembly code.

@philipstarkey
Member Author

All I've found from looking at the assembly code is the wait should actually be 30ns longer than reported (because that's how long it takes before it's ready to respond to the trigger).

This is opposite to what I was hoping for! I think...let me write out things again because I'm getting confused.

The PrawnBlaster starts at 100ns (PulseBlaster time), and the program runs for 10us, so it enters the wait at 10.1us. There is a 30ns (0.03us) delay before it starts counting the wait. It reported 508 for the wait time, which means 4.92us. We should not count how long it takes the pico to detect the pulse because we only know what the assembly code sees, not what the internal buffer of the pico is doing. So it exits the wait at 15.05us (pulseblaster time) and begins the next pulse 70ns later. Huh...and there are exactly 7 asm instructions between the wait condition being met and the output being set high! So that is all reasonably consistent.

What I guess is not consistent is that we're saying it exits the wait at 15.05us when we expect that to be 15.0us. There is definitely not 50ns of buffering on the pico chip between the signal and the wait ending. There are 2 cycles of buffering in the pico chip. Maybe (up to) an additional 19ns in my assembly code loop if the trigger occurs at just the wrong moment. But let's consider the initial trigger again (the hwstart). This should only take 7 clock cycles (2 pico buffering cycles and 5 assembly instructions). But we see 10. If we were to assume that the PulseBlaster is actually triggering the PrawnBlaster at t=30ns, then that means we would expect to see the second trigger at 15.03us, which is 20ns prior to when the PrawnBlaster detects the wait ending - exactly the expected time the pico buffers the input.

So that would actually appear completely consistent with what I expect to happen, assuming that (a) the PulseBlaster clock is not losing synchronisation with the PrawnBlaster clock over 15us and (b) the first PulseBlaster trigger to the PrawnBlaster is occurring 30ns later than you think.

Not sure if that rambling makes a huge amount of sense, but, based on the analysis of my code, we should see:

  1. The first PrawnBlaster pulse 70ns after trigger,
  2. The measured wait be 30ns longer than reported by the PrawnBlaster
  3. The PrawnBlaster to take 70ns between end of measured wait and start of next pulse (or 90ns after trigger)

And I think we do see this if the PulseBlaster triggers are actually occurring at 0.03us and 15.03us relative to the data you shared.
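
The timeline above can be checked with a few lines of arithmetic. This is a sketch of the same model (all figures come from the reasoning above, which is itself a hypothesis rather than a verified measurement):

```python
# Reconstruct the hypothesised timeline, all times in ns in the
# PulseBlaster frame, 100 MHz PrawnBlaster clock (10 ns cycles).
CYCLE = 10

first_pulse = 100                          # PrawnBlaster output after hwstart trigger
program_length = 10_000                    # instructions before the wait run for 10 us
wait_entry = first_pulse + program_length  # enters the wait at 10.1 us
count_start = wait_entry + 3 * CYCLE       # 30 ns before it starts counting the wait
measured_wait = (1000 - 508) * CYCLE       # reported remainder 508 -> 4920 ns
wait_exit = count_start + measured_wait    # exits the wait at 15.05 us
next_pulse = wait_exit + 7 * CYCLE         # 7 asm instructions -> 70 ns later

print(wait_exit, next_pulse)  # 15050 15120
```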

@dihm
Collaborator

dihm commented Mar 6, 2021

I'll admit I'm going to have to read all the above a few times to try and follow it. Following the timings here is a bit confusing. Just a couple of notes to follow up right now while you have time on the weekend.

  1. Oops on the sequence. I have written code that automatically populates instruction numbers from an array of times and reps so I have to write these comments out by hand.
  2. I am pretty confident in the pulseblaster timing. I set a mirroring pulse channel to the triggering pulses that is simultaneously measured by the timetagger. Those green traces are measured (rather than just calculated like the orange dashed), and all times are referenced to the measured first rising edge of the pulseblaster. For better or worse, it is the absolute clock reference right now (and it is not externally clocked, being a USB enclosure version).
  3. I think sub 10ns jitter makes sense under an implicit assumption I made about how the Prawnblaster detection works, so maybe you can correct my understanding here. I assume that if a pulseblaster trigger begins mid-clock cycle on the prawnblaster, it is unlikely for the prawnblaster to count that clock cycle as part of its trigger detection. Put another way, the 3 cycle detection circuit requires 3 complete (or nearly complete) cycles with trigger high. Since the clocks are uncorrelated, an observed difference in times basically measures the relative clock phase and should be expected.

In any case, I'm thinking I might need to modify the test rig a bit to better automate things and try to remove some of these ambiguities. If you have any particular suggestions, by all means. Assume you have a very well stocked lab with any piece of equipment you could imagine and I can work from there.

@philipstarkey
Member Author

  • I am pretty confident the pulseblaster timing. I set a mirroring pulse channel to the triggering pulses that is simultaneously measured by the timetagger. Those green traces are measured (rather than just calculated like the orange dashed), and all times are referenced to the measured first rising edge of the pulseblaster. For better or worse, it is the absolute clock reference right now (and it is not externally clocked, being a USB enclosure version).

Hmm. I might create a simplified firmware that will allow you to measure how long it takes the Pico to respond to a trigger. Your data seems to point to the trigger detection taking 3 cycles longer than I expect, which is odd, but I guess not out of the question.

  • I think sub 10ns jitter makes sense under an implicit assumption I made about how the Prawnblaster detection works, so maybe you can correct my understanding here. I assume that if a pulseblaster trigger begins mid-clock cycle on the prawnblaster, it is unlikely for the prawnblaster to count that clock cycle in part of its trigger detection. Put another way, the 3 cycle detection circuit requires 3 complete (or nearly complete) cycles with trigger high. Since the clocks are uncorrelated, an observed diff in times basically measures the difference and should be expected.

There are several different components to the trigger detection for waits. The first is inside the RP2040 chip on the pico, which I believe buffers the signal for 2 cycles (to avoid detecting spurious edges). The second is inside my assembly code. The assembly code consists of 2 assembly instructions in a loop. The first jumps out of the loop if the input pin is high. The second jumps back to the first instruction as long as a counter variable is not 0 (and then subsequently decrements the counter regardless of whether it jumps or not...which I find odd but that's what the RP2040 designers decided to do!)

So if the trigger arrives just after a clock tick, then it would take 20ns-29.99ns to get through the RP2040 buffer. Assuming this buffer is synchronised to the PIO core running my code, then there is a potential 0-10ns delay depending on which instruction in my loop is currently being run. So that gives a possible 20-39.99ns delay range from what I can tell. But, if everything is synchronised, I would expect to see the same delay on every shot for the same instruction set. Slightly shifting the timing of the trigger pulse may result in a bigger jump in observed delay if it wraps from 20ns to 39.99ns though. And this only applies to resuming from a finite wait, not an initial trigger (or indefinite wait).
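
The delay range above follows from three components; a quick sketch of the bound, using the figures stated above for the 100 MHz clock:

```python
# Components of the resume-trigger delay at 100 MHz (10 ns cycles):
CYCLE_NS = 10.0
sync_buffer = 2 * CYCLE_NS    # RP2040 input synchroniser (2 cycles)
loop_position = 1 * CYCLE_NS  # worst case: wrong instruction of the 2-instruction loop
phase_slip = 1 * CYCLE_NS     # approached but never reached: edge lands mid-cycle

min_delay = sync_buffer                               # 20 ns
max_delay = sync_buffer + loop_position + phase_slip  # just under 40 ns (exclusive bound)
print(min_delay, max_delay)
```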

None of this helps to explain the data though. It is very odd that the PrawnBlaster does not start until t=100ns. Once the trigger makes it to the PIO core there is only a delay of 5 clock cycles from assembly instructions between trigger and output. I think 2 more can be explained by the RP2040 buffering, but I don't understand the other 3. I think once we solve that then maybe we can make sense of the wait length (maybe...)

philipstarkey added a commit that referenced this issue Mar 21, 2021
Fixes #12

An attempt at reducing the number of assembly instructions per half period loop. This should reduce it from 6 to 5, meaning the minimum pulse length is now 10 clock cycles.

This does mean that the delay between external trigger and resume has increased by 1 clock cycle to 80 ns (+ the apparent 40ns delay before the input reaches the assembly code, see #5)

This is also untested since I don't have the means to determine changes to pulse lengths of the scale of 1 clock cycle!
@dihm
Collaborator

dihm commented Mar 26, 2021

So I've been testing wait retriggering some more with the master branch (at d2adf1e). I'm also testing using all four clocks with the same program, and using the delay generator to send in retriggers after being triggered by the first rising edge of the prawnblaster (basically the same setup as in #9). This time I did some testing on the delay generator first. The way it works, I'm triggering the prawnblaster off the front panel outputs and using a combined output on the back into the time tagger to confirm timing. Turns out, there is ~8.3ns delay going from front to back, so I've subtracted that out from the delay generator reference pulses that I measure.

Anyway, my current results are (for 7,8,9,10 us retrigger times) that the prawnblaster restarts 119.4, 119.9, 121, 101 ns after the pulse. The self-reported waits are 684, 584, 484, 386. I've also checked that clock 3 always retriggers 20ns faster, even if I trigger it first or at the same time as all of the others.

Here is the raw data. Note that channels 6 and 11 are the same signal from the front and rear panel (which can be used to correct the measured retrigger pulses for channel 10).

So now not only am I confused on how long the retrigger delay should be, but it isn't even consistent across clocks. Great. Any specific requests for things to measure?

@philipstarkey
Member Author

Could you repeat again with the latest firmware in master? I've made a few changes (the changes in #12 that I merged in required some tweaking in how I calculate wait lengths).

I think everything is lining up (for the old firmware) except for how long it takes to respond to a trigger. I think you might have a typo in your previous message as it looks like clock 3 restarts 92ns (not 101ns) after the 4th trigger pulse?

Anyway, working from your raw data (and reported wait values), we have the PrawnBlaster entering waits at 4.00us. The first wait is 3.16us long, second wait is 4.16us, third is 5.16us and fourth is 6.14us (for clocks 0-3 respectively).

This gives the prawnblaster detecting the trigger at 7.16us, 8.16us, 9.16us and 10.14us respectively (according to the reported wait length at least)

In all cases the prawnblaster is resuming 60ns after this. Technically this should be 70ns (based on assembly instruction count) but the wait length is only accurate to 2 clock cycles I think. This discrepancy may be "fixed" in the latest firmware because instead of 3 clock cycles before trigger, and 7 clock cycles after trigger, it's now 4 and 8. Which is nicely divisible by 2 and shouldn't get mangled during the divide/multiply by 2 that I have in the code.
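
The 2-cycle granularity can be illustrated with a toy model (this mirrors the divide/multiply-by-2 described above; the function is illustrative, not the actual firmware source):

```python
# The wait loop is two instructions long, so the countdown decrements
# once every two clock cycles; the firmware multiplies back up when
# reporting, giving the remainder a granularity of 2 cycles (20 ns).
def reported_remainder(timeout_cycles, actual_wait_cycles):
    loop_iterations_left = (timeout_cycles - actual_wait_cycles) // 2
    return loop_iterations_left * 2

# two true waits one cycle apart can report the same remainder
print(reported_remainder(1000, 491), reported_remainder(1000, 492))  # 508 508
```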

That means it took the PrawnBlaster 60ns to detect the trigger, except for clock 3 which was 30ns....I can't really explain this!
(I guess technically if we assume it was 70ns between trigger detection and resume as the assembly instructions imply then these numbers would be 50ns and 20ns...).

Do you have the ability to offset all of the triggers by a nanosecond? Could be worth seeing how they all respond to slightly shifting the trigger times relative to the first PrawnBlaster pulse (e.g. all by 1ns, then 2ns, then 3ns, .... then 20ns) as well as testing with the latest firmware (which, unrelated, should now report timed out waits as 4294967295 to distinguish from waits resumed by trigger in the last iteration of the wait loop).
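
Given the new sentinel value, host-side parsing of a wait report might look like this (a sketch; the function name is illustrative):

```python
# The latest firmware reports a timed-out wait as 4294967295
# (the maximum unsigned 32-bit value) instead of 0.
TIMED_OUT = 2**32 - 1  # 4294967295

def parse_wait(remainder, timeout_cycles):
    """Return the wait length in cycles, or None if the wait timed out."""
    if remainder == TIMED_OUT:
        return None
    return timeout_cycles - remainder

print(parse_wait(4294967295, 1000))  # None
print(parse_wait(508, 1000))         # 492
```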

@philipstarkey
Member Author

I've also made #14 which might change things 🤷

@philipstarkey
Member Author

An additional thought to the other 2 messages - can you control how long the trigger is high for? If you could make the trigger pulse only high for 10ns, 20ns, 30ns, etc. this would also probably help us figure out how much buffering the pico is doing against spurious pulses (aka what is the minimum high time for the trigger). Doesn't help determine delays for input (or output...!! What if the output is buffered?) once it has detected the pulse but it's more information if you have that level of control over your trigger pulse length. Don't worry if you don't though.

@dihm
Collaborator

dihm commented Mar 27, 2021

Oh boy! Look at all the new functionality/fixes! I will happily re-run tests with the newest firmware.

The two tests you suggest are definitely good to check. My delay generator can easily turn down the pulse widths to sub-10ns so that will be a simple test. It also technically has 5 ps resolution in delay settings, though there is absolute timing error on the order of 1 ns, so testing small variations in timing shouldn't be too bad either.

I have a paper review deadline Monday, so if that goes well hopefully I can get the new tests done then. I'll be sure to check my math on clock 3 retriggering, though I was pretty confident since I calculate all the delays simultaneously with list comprehensions. It'd be a little odd if only one of them was off.

@philipstarkey
Member Author

  • Oh boy! Look at all the new functionality/fixes! I will happily re-run tests with the newest firmware.

  • The two tests you suggest are definitely good to check. My delay generator can easily turn down the pulse widths to sub-10ns so that will be a simple test. It also technically has 5 ps resolution in delay settings, though there is absolute timing error on the order of 1 ns, so testing small variations in timing shouldn't be too bad either.

  • I have paper review deadline Monday, so if that goes well hopefully I can get the new tests done then. I'll be sure to check my math on clock 3 retriggering, though I was pretty confident since I calculate all the delays simultaneously with list comprehensions. Be a little odd if only one of them was off.

No worries! Whenever you have time :) I'll probably attempt to finish off the labscript device class over the Easter weekend.

I have made some mistakes in my analysis of the raw data...and also not taken into account the delay between front and back (mainly because I wasn't clear on where channels 6/10/11 were physically coming from)

Ignoring the delay between front and back, my analysis should have said the following (numbers corrected in quote below):

That means it took the PrawnBlaster 50ns to detect the trigger, except for clock 3 which was 30ns....I can't really explain this!
(I guess technically if we assume it was 70ns between trigger detection and resume as the assembly instructions imply then these numbers would be 40ns and 20ns...).

However, I'm guessing that then taking into account the 8.3ns delay would then make these 48ns and 28ns? (unless I have the direction wrong...) Still slightly outside the range I expected. At the very least I suspect slightly shifting the trigger pulse time (by a few ns) should bring these numbers into line with each other (even if we can't explain the magnitude).

@dihm
Collaborator

dihm commented Mar 31, 2021

OK, so more testing with the newest firmware.

For reference, doing the exact same sequences and timings as before, I get basically the same, but all retrigger delays are 10ns longer (129.5,129.9,131,111). Reported waits are 684, 584, 484, 386. Data is Mult_clocks_wait_timing_150ns.

Next, I set the width of the retriggering pulses to be 10, 20, 30, 40 ns for clocks 0-3. Clock 0 did not retrigger, but all of the others did. Reported waits are 4294967295, 584, 484, 386. Data is Mult_clocks_wait_timing_10-40ns.

Next is quite interesting. I set all of the triggers to start at 7us+(0:22:2)ns in sets of four (so 7, 7.002, 7.004, 7.006 for clocks 0-3 for the first batch). The reported waits were all 684. The restart times are all within the jitter noise (7229.97, 7230.1, 7229.9, 7229.9, 7230.0, 7230.1, 7230.0, 7229.9, 7229.8, 7249.8, 7249.8, 7229.7) ns. Data is in Mult_clocks_wait_timing_7plusxxns. Note that the reference output for the trigger pulses is actually a logical or of all of the output channels, so when they overlap there is no edge.

Retriggering for clock 3 did not advance to 7249.8 like its brethren until the retrigger time was put to 7us + 25ns. It retriggers at 7209.8 until 7us + 5ns. Going in and setting all the clocks to retrigger at the same times, I find the edges between 7209.8 & 7229.8 and 7229.8 & 7249.8 for each clock to be (time given is first to give the next level, so 6.999us on clock 0 gives 7229.8ns):

  1. 6.999 us: 7.019 us
  2. 6.997 us: 7.017 us
  3. 6.997 us: 7.017 us
  4. 7.005 us: 7.025 us

So it seems the clocks are just a tiny bit off from each other in when things happen, and that seems to be why clock 3 has reported short since I've always chosen a nice round number that straddles this line.
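
Treating clock 0 as the reference, the edge times listed above imply small fixed offsets between the cores:

```python
# Transition-edge times (us) at which each clock steps to the next
# retrigger level, taken from the list above (items 1-4 = clocks 0-3).
edges = {0: 6.999, 1: 6.997, 2: 6.997, 3: 7.005}

# Offset of each clock relative to clock 0, in ns.
offsets_ns = {clk: round((t - edges[0]) * 1000) for clk, t in edges.items()}
print(offsets_ns)  # clocks 1 & 2 lead by 2 ns, clock 3 lags by 6 ns
```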

All data files and figures.

Finally, to aid in communicating, here is a block diagram of the setup.
PB_TimingTestRig
The delay generator lag: the back panel T0 and AB+CD+EF+GH outputs are 8.3ns later than their front panel counterparts.

Are there any other tests you can think of?

@philipstarkey
Member Author

Cool, I'll try and analyse the data you've provided and see if I can come to any further conclusions.

Did you also try with the firmware in #14? I'd give it a 50/50 chance it will synchronise the 4 PIO cores.

@dihm
Collaborator

dihm commented Apr 1, 2021

Did you also try with the firmware in #14? I'd give it a 50/50 chance it will synchronise the 4 PIO cores.

Oops. Knew I forgot something. Using #14 I get the same results for retriggering times as the final described set of data above (i.e. need at least 7.025us for clock 3 to retrigger at 7250ns, etc).

@philipstarkey
Member Author

Interesting! I assume we can rule out cable lengths as the issue - if I remember correctly you tried switching the trigger lines (hopefully on the pico end?) and it didn't change things?

I've also pushed a new version to #14 that uses the other set of 4 PIO cores...figured I'd cover all bases in case it's a defect/fluctuation with the silicon itself!

@philipstarkey
Member Author

Also, in addition to trying the other set of PIO cores, I assume your tests were with external clocking. What if it's internally clocked at 100 MHz? What about 125 MHz? Obviously the wait length will be out due to drifts between the clocks but are the PIO cores better synchronised under those circumstances?

Otherwise I think I'm out of ideas! (Short of trying another board!) We might just have to live with the 8ns offset between cores that can translate to 20ns due to firmware design.

Actually, further question, are the initial pulses offset like this too if the Pico is hardware triggered to start? If no, is the offset between cores proportional to retrigger time? Seems a bit bizarre to have this offset for retriggering but not the initial hardware start.

@dihm
Collaborator

dihm commented Apr 1, 2021

I assume your tests were with external clocking

I am actually internally clocking the prawnblaster (default 100 MHz), though I am clocking the time tagger from the delay generator to hopefully limit errors from that particular relationship. It would be interesting to see how these inter-core delays scale with overall clock frequency though.

As for another board, I am in the process of trying to get a few more for the lab, so that will have to wait for the time being.

I also haven't run the hwstart tests in a while. I'll try that next after I run the updated #14.

@dihm
Collaborator

dihm commented Apr 1, 2021

Checking the transition times for a 125 MHz internal clocking. The retrigger times step from 7192:7208:7224 ns. The corresponding reported times are 508, 506, 504.

  1. 7.001 us: 7.017 us
  2. 6.999 us: 7.015(6) us
  3. 6.999 us: 7.015 us
  4. 7.007 us: 7.023 us

Interesting that there is now a very small difference between clocks 1 & 2 that is intermittent. Namely, I checked the first transition, found a difference in the second, went back to the first, then went back to the second and found them to be the same. Annoying.

Checking the same with 125 MHz internal clocking for the hwstarts (delay generator is manually triggered here, with programmed delays around 1 us). Since the absolute timing start point is determined by my manual start and is not generally phase synchronous with the prawnblaster clock, I am seeing some variability in the trigger delays on the order of 8 ns (or half the clock period). That said, I can see that the trigger times are multiples of the clock period (instead of 2*clock period). I can also see that each clock doesn't trigger at the same time (in multiples of the clock period) when given the same trigger times in general. I observe plenty of jitter in which clock cycle each clock starts just by running many shots with the same trigger times. About the best I can say with this setup is that clock 3 generally needs to be delayed and clocks 0,1,2 need to be advanced to more reliably get each clock to start on the same cycle. To really test this well, I'd probably need to configure the delay generator to trigger off of a clock from the prawnblaster after I have manually sent 'hwstart'. I may be able to do that from the 48 MHz debug clock output?

Also, I'm not sure when this changed, but I think I am getting reliable starts with only 30ns pulses. For a cushion I'm using 40ns. Maybe this actually makes sense given that 4 clock cycles with 125 MHz is 32ns. Probably enough slop to make that work.
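
That back-of-envelope checks out numerically; a sketch (the 4-cycle figure is the assumption stated above, and may well differ between start detection and wait retriggering):

```python
# Minimum trigger length if detection needs ~4 clock cycles
# (an assumption consistent with the observations above).
def min_trigger_ns(clock_hz, cycles_needed=4):
    return cycles_needed * 1e9 / clock_hz

print(min_trigger_ns(125_000_000))  # 32.0 -- consistent with 30-40 ns working
print(min_trigger_ns(100_000_000))  # 40.0 under the same assumption at the default clock
```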

@philipstarkey
Member Author

I've fixed #14 now (my find-replace was case sensitive and didn't detect some constants I needed to update for the DMA transfers). Should now be running on the other 4 cores.

I am basically out of ideas though. Either it's something to do with pushing the limits of your testing equipment/setup/etc., or more likely, the cores are for some reason not in phase. The phase shift at least seems reasonably constant 🤷

From what I can tell the delay between trigger and detection is thus ~50ns (+/- 10ns), and the delay from detecting the trigger and PrawnBlaster output is 80ns (as expected by the number of assembly instructions). Wait lengths are accurate to +/- 10ns as expected.

Happy for you to keep investigating (I'm interested to know if the other set of 4 cores are different and/or if other boards are different) but I'm going to refocus on finishing the labscript suite device classes as I don't think there is much else I can do with the firmware! If we don't have any new information by the time they're done and tested I'll just add a section to the readme about what we've found and encourage people to characterise their own boards if they need better than 20ns accuracy in absolute time. I suspect most ultracold atom experiments probably only need accuracy in the 100-1000ns range and/or don't care as long as it's repeatable (which it seems to be!). Can't expect too much from a $4 board! 😄

@dihm
Collaborator

dihm commented Apr 2, 2021

The fix to #14 works for me now. I get very similar results except that the transition edges on clocks 1 & 2 seem to have moved between 6.996 & 6.995 (7.016 & 7.017), so they shift from shot to shot somewhat randomly. I do agree that up to the 1 ns level, the phase shift between clocks is very, very solid.

@philipstarkey
Member Author

I should really stop saying "I'm out of ideas".

I've now also made #18 (based on #14 - so it's using the other set of cores for now that we haven't tested before...). It should reduce the time between trigger going high and the PrawnBlaster responding. No idea if it will also change the out of phase issue (as we have no idea what component is actually causing the phase delay).

Anyway, could be worth a test of the latest #14 and now #18 too!

Also, #18 includes a fix for something I noticed on my board - It seems the program is ending with an extra high pulse? I think it might have been caused by the reduction from 6 -> 5 clock cycle minimum (#12). I have not put the fix in any other branches yet!

@dihm
Collaborator

dihm commented Apr 2, 2021

I should really stop saying "I'm out of ideas".

I feel this comment in my bones 😄

Can confirm #18 reduces the retrigger time by 20ns (100 MHz default clock), so retrigger times went from 7230 to 7210. Something does seem to have changed as well. The three retrigger times are 7190:7210:7230 ns. The edges to the next step up occur 1 ns earlier than before, but the relative differences are the same:

  1. 6.998 us: 7.018 us
  2. 6.996 us: 7.016 us
  3. 6.996 us: 7.016 us
  4. 7.004 us: 7.024 us

Also, I checked how long the trigger pulses needed to be. If I set the retrigger times to be those of the second column above, they need to be 20ns. If I take off 1ns from each, the trigger pulses only need to be 10ns.

It seems the program is ending with an extra high pulse?

I haven't actually noticed this. The way my test rig works, the time tagger just records pulses until I tell it to stop from Python:

from time import sleep
import serial      # pyserial; ser below is the PrawnBlaster's serial port
import TimeTagger  # Swabian Instruments time tagger API

# tagger, filename, and ser are set up earlier in the script
filewriter = TimeTagger.FileWriter(tagger=tagger, filename=filename,
                                   channels=[1,-1,2,-2,3,-3,4,-4,6,-6,10,-10,11,-11])
sleep(0.2)
ser.write(b'start\r\n')
print(ser.readline())
sleep(1)
filewriter.stop()

So I think I'd see extra pulses if I were getting them. It very well could be much more subtle and wiring dependent though, and whatever change you made doesn't seem to have broken anything I can see.

@dihm
Collaborator

dihm commented Apr 2, 2021

Just to cover all the bases, I also tried testing hwstart timing by connecting the delay generator trigger to GPIO21 (the 48 MHz debug clock) and manually hitting start (effectively gating the trigger line to try and synchronize the two systems reliably from shot to shot). While it helped, it didn't really fix it. I think the issue is that the 48 MHz clock has almost, but not quite, twice the period of the 100 MHz clock, so there are two possible phase offsets that I flip between from shot to shot. That said, if I maintain the same relative timing offsets for the clocks, I can reliably trigger on the same clock cycle for each one.

Also, I discovered accidentally that hwstart works pretty reliably with only a 10 ns pulse (under these conditions), which I found somewhat surprising. Ultimately I went up to 20 ns to avoid issues, but figured that was worth sharing. This is all still with #18.

@philipstarkey
Member Author

Cool, thanks for testing! I think I will disregard #18 as I think it's only safe to turn off if the input signal is synchronous with the state machine. Given it's an external trigger, I don't think this can be assumed and the (apparent) risk is that the state machine will sample a meta-stable input signal which (again apparently) could corrupt the state machine execution 🤷

Even though #14 didn't do much I'll probably keep it as it made the code a bit nicer (and I'll add a serial command for choosing which set of 4 cores to use in case it is board dependent - setpio 0 or setpio 1 - defaults to 0 on powerup)

@philipstarkey
Member Author

I've added some text to the FAQ regarding this issue. Would appreciate your eyes on it to see if I've got everything correct!

I've also made a bunch of changes to address all the other issues that were outstanding. If you wouldn't mind rerunning your standard suite of tests just to make sure I have not broken anything I would appreciate it!

@philipstarkey
Member Author

Oh, I also meant to add that I have made it possible to configure all 4 pseudoclocks to use the same trigger pin (e.g. setinpin 0 0, setinpin 1 0, setinpin 2 0, setinpin 3 0)

Could be worth testing how synchronised the resumes from waits are (both the wait lengths recorded and the output pulse times) for a single trigger feeding all pseudoclocks via the same GPIO pin (possibly also scanning the trigger time as you did before). I think that would more definitively rule out an issue with your testing setup, and point to some internal issue in the RP2040 chip if the 8 ns offset is still there for pseudoclock 3.
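The common-trigger configuration above can be scripted over the PrawnBlaster's USB serial interface. A minimal sketch, which just builds the setinpin command strings quoted above (the serial port name in the commented usage is an assumption for your machine):

```python
def trigger_pin_commands(pin, n_pseudoclocks=4):
    """Build the serial commands that route every pseudoclock's trigger
    to a single GPIO pin, e.g. b'setinpin 0 0' ... b'setinpin 3 0'."""
    return [f"setinpin {i} {pin}\r\n".encode() for i in range(n_pseudoclocks)]

# Sending them with pyserial (port name is a placeholder):
# import serial
# with serial.Serial("COM4", 115200, timeout=1) as ser:
#     for cmd in trigger_pin_commands(0):
#         ser.write(cmd)
#         print(ser.readline())  # print the firmware's reply
```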

@philipstarkey
Member Author

Just flagging this to remind myself that there is also section 2.19.4 of the RP2040 datasheet (https://datasheets.raspberrypi.org/rp2040/rp2040-datasheet.pdf) to consider. Not only can you in theory configure the drive current (could be useful?), but there is also slew rate control and a Schmitt trigger (which might be an additional input buffer?).

Not sure if it's worth playing with these or not...

@dihm
Collaborator

dihm commented Apr 5, 2021

Oh, I also meant to add that I have made it possible to configure all 4 pseudoclocks to use the same trigger pin

So this is proving fascinating. When I set all of the clocks to trigger off GPIO 0, I transition from 7210 to 7230 ns on all of the clocks at 6.999 us. For GPIO 2 it's at 6.997 us, for GPIO 4 it's at 6.997 us, and for GPIO 6 it's at 7.005 us. So I guess this means the phase delays are all in the inputs. I thought I had checked that at some point, but apparently not carefully enough. I can also confirm that if I invert the input pins for the clocks (so clock 0 is triggered by GPIO 6, etc.) I get the phase slip that corresponds to the input pin, not the output. I guess this means we could look at other GPIO choices to balance things, though I'm not sure how worth it that would be. Maybe the first thing to do is wait for me to get more boards to test, just in case it is due to hardware and is actually consistent across boards, in which case it could be fixed by judicious selection of input pins.

Otherwise, things look good. The only difference I can see is that the diff errors in the pulse widths are slightly improved for clock 0 now.
[image: plot of pulse-width diff errors per clock]
Not really sure what is different, but it's all within "spec" so whatever.

Finally, in the readme, the pseudoclock pin table indexes the clocks from 1 instead of 0.

@philipstarkey
Member Author

That's good news! I've updated the readme :)

I'll leave this open while you consider investigating other boards, but I think for our purposes this is solved. The labscript suite only supports one trigger per pseudoclock device, regardless of how many independent pseudoclocks are contained within that device, so we are constrained to using a common input pin anyway!

A functioning labscript device class for the PrawnBlaster is here if you would like to test it!

@dihm
Collaborator

dihm commented Apr 13, 2021

Finally got a shipment of Picos. Testing a second Pico, I see the same transition times to within about 0.5 ns (a few of the clocks now report different retrigger times from shot to shot). So these differences appear to be structural and reasonably consistent from device to device. I guess that is what it is. Given labscript limitations that enforce a single input trigger anyway, it's probably not worth really digging into it, but now we know for posterity.

A functioning labscript device class for the PrawnBlaster is here if you would like to test it!

Yay, more testing!

@philipstarkey
Member Author

Since there didn't seem to be variation between boards, and we think we've got it as good as we can, I'll close this now :)
Thanks for all the testing!
