
improved register layout and tweaked event submission code #636

Closed · jordens opened this issue Dec 1, 2016 · 55 comments

@jordens commented Dec 1, 2016

(from sinara-hw/sinara#47)

  • Potentially, there is also the option of coupling the RTIO core more tightly to the CPU, so that it does not have to program all the RTIO CSRs through external bus transactions for every event.
  • fewer registers (merge address and data, merge address and write_enable)
  • delta-timestamp register
  • CSR/memory-pinning or register-pinning of now
  • maybe event-generating gateware that makes "full" events from some channel-class dependent mmapped space
  • maybe IRQs can be used to handle error conditions and perform submission retries
  • use bus wait states and bus errors to signal RTIO exceptions
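
As a sketch of what the delta-timestamp idea would buy, here is a toy model (illustrative only, not actual ARTIQ gateware; a 64-bit now and a 32-bit bus are assumed): rewriting the full timestamp costs two 32-bit bus writes, while a delta costs one.

```python
# Toy model of a hypothetical delta-timestamp CSR: instead of rewriting
# the full 64-bit `now` (two 32-bit bus writes), the CPU issues a single
# 32-bit write and the gateware adds the delta to the stored timestamp.

MASK64 = (1 << 64) - 1

class TimestampCsr:
    def __init__(self):
        self.now = 0

    def write_full(self, value):
        # current scheme: costs two 32-bit bus writes
        self.now = value & MASK64

    def write_delta(self, delta):
        # proposed scheme: one 32-bit bus write for small increments
        self.now = (self.now + delta) & MASK64
```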

@sbourdeauducq commented Dec 2, 2016

fewer registers (merge address and data)

We can merge address and write enable instead.

@sbourdeauducq commented Dec 2, 2016

maybe IRQs can be used to handle error conditions and perform submission retries

No, because that will lose precise RTIO exceptions or create horrible race conditions. But we can use bus wait states and bus errors.

@jordens commented Dec 6, 2016

We can merge address and write enable instead.

We might be able to do both getting rid of two CSR writes in many cases:

  • merge address and data and let the event replacement happen (at the RTIO FIFO input side) using a channel-dependent bit mask.
  • merge write-enable into the channel number.
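
The first bullet might look like the following sketch (the field layout and the helper name are assumptions for illustration, not the actual implementation): a channel-dependent mask marks which bits of the merged register carry data, with the subchannel address packed in the bits above them.

```python
def merge_address_data(address, data, data_mask):
    # data occupies the bits set in data_mask; the address is packed
    # into the bits above the data field (hypothetical layout).
    data_width = data_mask.bit_length()
    assert data & ~data_mask == 0, "data wider than this channel allows"
    return (address << data_width) | data
```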

But we can use bus wait states and bus errors.

Good!

@sbourdeauducq commented Dec 6, 2016

Doing that to the channel register is difficult because DRTIO uses it first to look up the state of the remote channel and test for underflow, FIFO full, etc.
It is also used to mux status between multiple (D)RTIO cores, so if the gateware redoes the transaction starting with the channel, you would have to block until the full transaction completes to get a valid status. There would still be a speed-up, though, as the gateware can do the transaction in a few cycles.

@sbourdeauducq commented Dec 6, 2016

using a channel-dependent bit mask.

Is the bit-twiddling that will be required to compute the data faster than one register write? Note that we can have a second RTIO output function that writes to address zero by not doing the address CSR write.

@jordens commented Dec 6, 2016

Does it do that channel status lookup early so that it runs in parallel with the CPU doing the address/data writes?
Pretty sure the bit twiddling is faster. It also pays to have fewer arguments passed up and down the API, and it reduces DMA and DRTIO event size, increasing throughput. Having separate RTIO backend functions hurts caching and code size/maintainability.
The other thing we could do is merge that address into the channel number. Then the (compound) channel number would be {u8 core, u16 channel, u8 address}, maybe with the fields permuted.
AFAICT we will also want a "length" field for the data in the RTIO API so that the DRTIO master and the DMA storage engine can truncate. But maybe it can just count the number of data writes.
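
The compound channel number described above could be packed like this (one possible bit order; as noted, the fields could be permuted):

```python
def pack_channel(core, channel, address):
    # {u8 core, u16 channel, u8 address} in one 32-bit word
    assert 0 <= core < 1 << 8
    assert 0 <= channel < 1 << 16
    assert 0 <= address < 1 << 8
    return (core << 24) | (channel << 8) | address

def unpack_channel(word):
    return word >> 24, (word >> 8) & 0xFFFF, word & 0xFF
```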

@sbourdeauducq commented Dec 7, 2016

Merging the address into the channel sounds OK.

@sbourdeauducq commented Dec 7, 2016

The DMA playback engine takes LSB-first data of arbitrary length with byte granularity and zeros the missing MSBs, and DRTIO similarly strips leading zeros from the data.
There is no gateware DMA storage engine and I don't know why you want one.

The new RTIO writes could be done this way, with a total of 4 bus write accesses (instead of the current 6) if the data is small:

  1. channel with integrated address
  2. timestamp (2 bus accesses as it's 64-bit)
  3. data triggering the RTIO write when the LSB CSR is accessed

(plus status readout)
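
The sequence above can be modeled as a toy bus (register names are invented for the sketch); the event is committed to the FIFO by the final data write:

```python
class RtioCsrModel:
    def __init__(self):
        self.regs = {}
        self.fifo = []

    def write(self, reg, value):
        self.regs[reg] = value
        if reg == "data_lsb":  # writing the LSB data CSR triggers the RTIO write
            timestamp = (self.regs["ts_hi"] << 32) | self.regs["ts_lo"]
            self.fifo.append((self.regs["target"], timestamp, value))

def submit(bus, target, timestamp, data):
    bus.write("target", target)                # 1. channel with integrated address
    bus.write("ts_hi", timestamp >> 32)        # 2./3. 64-bit timestamp
    bus.write("ts_lo", timestamp & 0xFFFFFFFF)
    bus.write("data_lsb", data)                # 4. data write commits the event
```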

@jordens commented Dec 7, 2016

The automatic zero-stripping/extending sounds good.
I'd think that there would be a DMA storage engine for either DMA input events or for remote DMA.

@whitequark commented Feb 2, 2017

register-pinning of now

If I recall correctly, I've tried this and it made no practical difference.

@jordens commented Feb 2, 2017

Since event submission (i.e. writing now to the CSR) seems to be at least as frequent as manipulation of now, why don't we do "CSR pinning of now", i.e. pin it to the timestamp CSR of the RTIO interface? This may be problematic for DRTIO (it might be using a separate CSR), but that could probably be changed.

@sbourdeauducq commented Feb 2, 2017

DRTIO is not using a separate CSR for the timestamp. The (D)RTIO cores are demuxed after the CSRs, see cri.py.

@whitequark commented Feb 2, 2017

CSR pinning of now sounds good and easy to implement (just pin the global to the address of the CSR instead of having a global in ksupport).

@sbourdeauducq commented Mar 29, 2018

The other thing we could do is merge that address into the channel number. Then the (compound) channel number would be {u8 core, u16 channel, u8 address}

@jordens Do you confirm that an 8-bit address is still sufficient for SAWG and all foreseen needs?

@sbourdeauducq commented Mar 29, 2018

@whitequark What do you think of now pinning to the CSR with atomicity implemented in gateware by committing the value when the 32 least significant bits are written?
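
The proposed commit semantics could behave like this model (a sketch of the gateware behavior, not actual code): the MSB half is only staged, and both halves become visible together when the 32 least significant bits are written.

```python
class PinnedNow:
    def __init__(self):
        self.now = 0
        self._staged_msb = 0

    def write_msb(self, value):
        self._staged_msb = value & 0xFFFFFFFF  # staged, not yet visible

    def write_lsb(self, value):
        # commit: the full 64-bit value updates atomically
        self.now = (self._staged_msb << 32) | (value & 0xFFFFFFFF)
```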

@jordens commented Mar 29, 2018

I don't see anything that would need more than 8 bits of address space.

@sbourdeauducq commented Mar 29, 2018

use bus wait states and bus errors to signal RTIO exceptions

Are bus errors working correctly with mor1kx and do they result in exceptions that can be traced?

@whitequark commented Mar 29, 2018

@sbourdeauducq commented Mar 29, 2018

OK, that would be acceptable.

@whitequark commented Mar 29, 2018

This needs to be rechecked before committing to this implementation though, it was a while ago when I looked at it.

@whitequark commented Mar 30, 2018

@sbourdeauducq What do you think about mapping the RTIO registers to OR1K SPRs? I've just thought of a good way to expose those to LLVM; we can add a new address space so that accesses to SPRs are just pointer accesses and most optimizations still apply to them.

@sbourdeauducq commented Mar 30, 2018

Why does that help, single-cycle access? What kind of interface would that have? Wouldn't that cause problems with now pinning? Wouldn't that cause timing issues in the FPGA? Also that contributes to breaking compatibility with other CPUs. Is it really worth it?

@whitequark commented Mar 30, 2018

Why does that help, single-cycle access?

Yes.

What kind of interface would that have?

Interface where? To software? More or less the same, just use mfspr/mtspr in Rust and slightly different codegen in ARTIQ Python.

Wouldn't that cause problems with now pinning?

Quite the opposite, it will eliminate some inefficiency from now pinning (if pinned via the linker all accesses to now will still have to go via the GOT offset, which adds a few setup instructions and increases register pressure).

Wouldn't that cause timing issues in the FPGA?

I think you have more insight into this than me at this moment.

Also that contributes to breaking compatibility with other CPUs.

All mature soft CPUs I'm aware of have some sort of coprocessor interfaces.

Is it really worth it?

Well, how much do we want to shave cycles off RTIO submission time?

@sbourdeauducq commented Mar 30, 2018

Interface to gateware.
How many of those registers can we have anyway? The RTIO data register is quite large.

@sbourdeauducq commented Mar 30, 2018

Well, how much do we want to shave cycles off RTIO submission time?

That would shave 4 cycles (32ns) and seems quite complicated to do.

@dhslichter commented Mar 30, 2018

Also that contributes to breaking compatibility with other CPUs. Is it really worth it?

It seems to me that ARTIQ is pretty heavily invested in mor1kx, and that changing to a different CPU (even if there were a better option, which it seems at present there isn't) would involve a great deal of work anyway.

@whitequark commented Mar 30, 2018

Interface to gateware.

   wire [OPTION_OPERAND_WIDTH-1:0] spr_bus_dat_o;
   wire                            spr_bus_stb_o;
   wire                            spr_bus_we_o;
   wire [15:0]                     spr_sr_o;

How many of those registers can we have anyway? The RTIO data register is quite large.

At least 32K.

@dhslichter commented Mar 30, 2018

Well, how much do we want to shave cycles off RTIO submission time?

In my experience this is one of the main pain points for ARTIQ users, and whatever improvements we can squeeze out in a reasonable fashion are really welcome for all users.

@whitequark commented Mar 30, 2018

@dhslichter Something you can do to advance this is submit realistic benchmark code that I can profile. There might be inefficiencies that I don't currently expect in different parts of the stack.

@dhslichter commented Mar 30, 2018

@sbourdeauducq I assume that the proposals for shaving time off RTIO submissions would also shave time off RTIO retrieval (i.e. from reading timestamps from an input FIFO). Is this accurate?

@dhslichter commented Mar 30, 2018

@whitequark ack, I will send along some code for sideband cooling to give a sense of what we're up against, and to give you something to benchmark with.

@sbourdeauducq commented Mar 30, 2018

I assume that the proposals for shaving time off RTIO submissions would also shave time off RTIO retrieval (i.e. from reading timestamps from an input FIFO). Is this accurate?

Not really, apart from now pinning this only applies to outputs.

It seems to me that ARTIQ is pretty heavily invested in mor1kx, and that changing to a different CPU (even if there were a better option, which it seems at present there isn't) would involve a great deal of work anyway.

Not that much, it's basically a bit of system code, the exception handling/unwinder, and dealing with rustc/LLVM breakage that I cannot imagine will miss the opportunity to manifest itself. A lot of things are portable.

@whitequark instead of SPRs, we can maybe do Wishbone combinatorial feedback for writes; would that work (i.e. no mor1kx bugs, and single-cycle performance) and meet timing? And does it actually improve things? With the write buffer, does the 2-cycle access have an impact on performance at all?

And it's just 32ns at most...

@dhslichter commented Mar 30, 2018

@sbourdeauducq are there modifications that would help with input speeds as well?

Ack on the CPU portability comments.

@whitequark commented Mar 30, 2018

Not that much, it's basically a bit of system code, the exception handling/unwinder, and dealing with rustc/LLVM breakage that I cannot imagine will miss the opportunity to manifest itself. A lot of things are portable.

I don't imagine we're going to migrate to LM32 at this point because of the time investment for the LLVM backend and more importantly the lack of features in the CPU core, so the only option is RISC-V. I've just looked at RISC-V status in LLVM and it's somewhat underwhelming compared to what I expected.

As far as I can tell they have a decently working LLVM backend but very little of it is upstream (10/84 patches), so there's no real advantage over our current OR1K LLVM backend. I cannot imagine the situation with rustc is any better, though rustc needs only a tiny amount of architecture-specific code (it's something around 100 LOC for OR1K, I think).

The EH/unwinder changes are trivial (especially after I made them once for OR1K) so that's not a factor.

Apart from what you listed, what else should be handled is linking, but that's not major either.

To summarize, whether we should switch to RISC-V should be informed exclusively by what we win by using a different CPU softcore, since that's the only thing that will matter in practice, at least until RISC-V is fully upstream and widely used, which isn't going to happen until the end of 2018 or even 2019.

@sbourdeauducq Worth a try.

@sbourdeauducq commented Mar 30, 2018

For inputs, in addition to now pinning, we can shave one programming register access, plus another access and (in the fast code path) some tests, by using Wishbone bus wait/error states.

@dhslichter commented Mar 30, 2018

@sbourdeauducq ack, thanks.

@sbourdeauducq commented Apr 1, 2018

  • (optionally) use the Wishbone wait state for flow control.
  • (optionally) use Wishbone bus errors to signal underflows and link errors.

Currently, RTIO CSR writes go through the write buffer, and each <=32-bit write takes 2 cycles to complete, which (thanks to the write buffer) is done in parallel with the CPU preparing the next operation.

The problem with this scheme is that write error exceptions become asynchronous (raised some instructions after the write), so race conditions can occur. For example, an RTIOUnderflow can potentially be raised after the software has made a decision based on the incorrect information that no underflow has occurred.

So to implement this, the write buffer would have to be disabled for RTIO CSRs. This raises two difficult issues:

  • Adding the write buffer bypass feature to mor1kx without obscure bugs.
  • The CSR interface has to complete writes in 1 cycle (i.e. Wishbone combinatorial feedback) to gain back the performance lost by no longer being able to background the 2-cycle writes, while also being able to stall them for an arbitrarily large number of cycles (for flow control). This makes meeting timing in the FPGA challenging, especially on the slow Artix-7 used in Kasli, which is already a pain (#891).

I'm not sure if saving 1 CSR read and 1 test (dozens of ns) is worth going through those difficulties.
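
For reference, the CSR read and test in question correspond roughly to a status-polling fast path like this sketch (constant names and bit positions are assumptions for illustration, not the actual register layout):

```python
RTIO_O_STATUS_WAIT = 1 << 0        # FIFO full: retry the write
RTIO_O_STATUS_UNDERFLOW = 1 << 1   # timestamp was already in the past

class RTIOUnderflow(Exception):
    pass

def check_output_status(read_status):
    # read_status models the CSR read; the loop and the test are what
    # the wait-state/bus-error scheme would move into hardware.
    status = read_status()
    while status & RTIO_O_STATUS_WAIT:
        status = read_status()
    if status & RTIO_O_STATUS_UNDERFLOW:
        raise RTIOUnderflow
```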

@sbourdeauducq commented Apr 1, 2018

Maybe the best solution is to dig into the CPU pipeline and implement custom instructions for RTIO operations. But that's significantly more complicated than the other optimizations proposed here.

@sbourdeauducq commented Apr 1, 2018

Such instructions, on the other hand, could be pretty fast. With optimized assembly (the production of which by the compiler is another issue), producing a square waveform on a TTL would be something like (@whitequark correct me if I am wrong):

  • increment 64-bit timestamp: 2 cycles
  • invert level (RTIO data): 1 cycle
  • wait for the data to be available in the CPU pipeline: 3 cycles
  • RTIO write: 3 cycles
  • loop: 3 cycles

So, a total of ~96 ns per TTL state change.
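
The arithmetic behind that figure, assuming the 8 ns CPU cycle implied by the "4 cycles (32ns)" estimate earlier in the thread:

```python
# Sum the per-step cycle counts from the list above and convert to time
# at an assumed 8 ns/cycle (125 MHz CPU clock).
CYCLES = {
    "increment 64-bit timestamp": 2,
    "invert level": 1,
    "pipeline wait": 3,
    "RTIO write": 3,
    "loop": 3,
}
total_cycles = sum(CYCLES.values())  # 12 cycles per iteration
period_ns = total_cycles * 8         # 96 ns per TTL state change
```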

@dnadlinger commented Apr 1, 2018

So to implement this, the write buffer would have to be disabled for RTIO CSRs. This raises two difficult issues: […]

You could emit an msync after the stores for which you need precise exceptions to get around this, but I haven't checked whether that would make sense performance-wise.

@sbourdeauducq commented Apr 1, 2018

I know, but I think it's marginally faster than reading the status register and a lot more complicated.

@sbourdeauducq commented Aug 9, 2018

Funded by NIST.

@hartytp commented Aug 9, 2018

Great!

@sbourdeauducq commented Dec 1, 2018

Done. test_pulse_rate results:

  • Before: 7.20e-07
  • After RTIO output layout change: 4.44e-07
  • After now-pinning: 4.56e-07

@dhslichter commented Dec 4, 2018

Nice, thanks @sbourdeauducq. We will test here in the next week or two.
