Replies: 10 comments 40 replies
-
Here is a more concrete example, with a fictional little SoC with two clock domains:

```scala
val slowCd = ClockDomain.external("slow")
val fastCd = ClockDomain.external("fast")
implicit val interconnect = new Interconnect()

val system = fastCd on new Area {
  val cpu0 = new CPU()
  val cpu1 = new CPU()

  val busA = interconnect.createNode()
  busA << cpu0.node
  busA << cpu1.node
}

val peripheral = slowCd on new Area {
  val peripheralBus = interconnect.createNode()
  peripheralBus at(0x10000000, 16 MiB) of system.busA

  val adapter = new Adapter()
  adapter.input << peripheralBus

  val uart0 = new UART()
  uart0.node at(0x100) of adapter.output

  val uart1 = new UART()
  uart1.node at(0x200) of adapter.output
}
```
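To make the address arithmetic concrete, here is a toy plain-Scala model (not the SpinalHDL API; all names are hypothetical) of how nested `at(offset, size)` mappings compose into absolute addresses, assuming the adapter is transparent (offset 0):

```scala
// Toy model of address-map composition: each mapping knows its offset and
// its parent, and the absolute base is the sum of offsets up the chain.
case class Mapping(base: BigInt, size: BigInt, parent: Option[Mapping] = None) {
  def absoluteBase: BigInt = base + parent.map(_.absoluteBase).getOrElse(BigInt(0))
}

// Mirrors the fictional SoC above: peripheralBus at 0x10000000 (16 MiB) of busA,
// uart0/uart1 at 0x100/0x200 behind it (sizes are illustrative guesses)
val peripheralBus = Mapping(BigInt("10000000", 16), BigInt(16) << 20)
val uart0 = Mapping(0x100, 0x100, Some(peripheralBus))
val uart1 = Mapping(0x200, 0x100, Some(peripheralBus))

println(s"uart0 @ 0x${uart0.absoluteBase.toString(16)}")
```

So from the CPUs' point of view, uart0 ends up at 0x10000100 and uart1 at 0x10000200.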
-
I've thought about trying this for AXI switches or at least IDs. If you can figure out one way, maybe we can make it generic or backport it to other buses?
-
Here is some progress (it generates the hardware) on a simple pseudo SoC where the master / slave buses are toplevel io:

```scala
new Component {
  implicit val interconnect = new Interconnect()

  // A fictional CPU which has 2 memory buses, one from the data cache and one for peripheral accesses
  val cpu = new Area {
    val main = simpleMaster(coherentOnly)
    val io = simpleMaster(readWrite)
  }

  // Will manage memory coherency
  val hub = new CoherencyHubIntegrator()
  val p0 = hub.createPort()
  p0 << cpu.main.node
  p0 << cpu.io.node

  // Define the main memory of the SoC (ex: DDR)
  val memory = simpleSlave(20, 32)
  memory.node.addTag(PMA.MAIN)
  memory.node at 0x80000000l of hub.memGet
  memory.node at 0x80000000l of hub.memPut

  // Define all the peripherals / low-performance stuff, ex: uart, scratch ram, rom, ...
  val peripheral = new Area {
    val bus = interconnect.createNode()
    bus at 0x20000000 of hub.memGet
    bus at 0x20000000 of hub.memPut

    val gpio = simpleSlave(12, 32)
    val uart = simpleSlave(12, 32)
    val spi = simpleSlave(12, 32)
    val memory = simpleSlave(16, 32)
    val rom = simpleReadOnlySlave(12, 32)
    memory.node.addTag(PMA.MAIN)
    rom.node.addTag(PMA.MAIN)

    gpio.node at 0x1000 of bus
    uart.node at 0x2000 of bus
    spi.node at 0x3000 of bus
    memory.node at 0x10000 of bus
    rom.node at 0x20000 of bus
  }

  // Will analyse the access capabilities of the CPU buses
  Elab check new Area {
    val mainSupport = MemoryConnection.getSupportedTransfers(cpu.main.node)
    println("cpu.main.node can access : ")
    println(mainSupport.map("- " + _).mkString("\n"))

    val ioSupport = MemoryConnection.getSupportedTransfers(cpu.io.node)
    println("cpu.io.node can access : ")
    println(ioSupport.map("- " + _).mkString("\n"))
  }
}
```

The elab check stuff will print out:

So it can track down, for each master, exactly which slaves can be accessed, and which kinds of accesses can be done.
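The core of such a check can be modeled in a few lines of plain Scala (a toy sketch, not SpinalHDL's `MemoryConnection` implementation; names and transfer kinds are hypothetical): walk from a master to each reachable slave and intersect what the master emits with what the slave supports.

```scala
// Toy model of the elaboration-time capability analysis: per reachable slave,
// the allowed transfers are the intersection of master and slave capabilities.
case class Slave(name: String, supports: Set[String])
case class Master(emits: Set[String], reachable: Seq[Slave])

def supportedTransfers(m: Master): Map[String, Set[String]] =
  m.reachable.map(s => s.name -> (m.emits intersect s.supports)).toMap

val cpuIo = Master(
  emits = Set("get", "putFull", "putPartial"),
  reachable = Seq(
    Slave("uart", Set("get", "putFull")),
    Slave("rom",  Set("get"))           // read-only slave
  )
)

supportedTransfers(cpuIo).foreach { case (n, t) => println(s"- $n : ${t.mkString(", ")}") }
```

In the real framework the graph traversal and the transfer sets are of course richer, but the "intersect capabilities along the path" idea is the same.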
-
Funny update: there is no more Interconnect class which centralizes stuff, things are now completely distributed:

```scala
new Component {
  val m0, m1 = simpleMaster(readWrite)
  val s0, s1 = simpleSlave(8)

  val b0 = InterconnectNode()
  b0 << m0.node
  b0 << m1.node

  s0.node at 0x200 of b0
  s1.node at 0x400 of b0
}
```
-
Looks quite nice! I assume that TileLink is currently hardcoded?
-
Currently, crossing to multiple components isn't supported, but I think that would be feasible.
It isn't really hardcoded. The idea is more to have a distributed system. If, for instance, you want to add AXI into the mix, what you need is to implement a new axi.Node / axi.Connection in a standalone way, and if you want to connect a tilelink.Node to an axi.Node, you just need to implement a Bridge which will convert the negotiation scheme + add the hardware.
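The parameter-conversion half of such a bridge can be sketched in plain Scala (hypothetical parameter classes and field names, not the real SpinalHDL/TileLink/AXI types): the bridge maps one bus's negotiated parameters onto the other's, and hardware generation would follow from the result.

```scala
// Toy sketch of a bridge's negotiation conversion between two bus families.
case class TilelinkParams(dataWidth: Int, sizeMaxBytes: Int)
case class AxiParams(dataWidth: Int, lenMax: Int)

// A maximal TileLink burst of sizeMaxBytes becomes an AXI burst of lenMax beats.
def tilelinkToAxi(p: TilelinkParams): AxiParams =
  AxiParams(dataWidth = p.dataWidth, lenMax = p.sizeMaxBytes / (p.dataWidth / 8))

val axi = tilelinkToAxi(TilelinkParams(dataWidth = 32, sizeMaxBytes = 64))
```

The point is that neither bus family needs to know about the other; only the bridge does.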
-
Those developments will be presented at FSiC 2023 (10-12 July). Here are the slides: Let me know if you have any question / comment ^^
-
After looking at your informative slides, some future features which might be fun: automatic address allocation, and whole-bus register map generation (RegIf and introspection).
-
Yes, right. Mostly, the elaboration system is made in a way that no fiber phase starts before the main thread is done executing. So, if you want to interact with stuff generated from one of the fiber phases (axibr.down), you have to fork a new fiber, or at least a new hardware elaboration thread (to not block the main thread). So:

```scala
hardFork(io.axiDown << axibr.down) // the short and kinda hacky way, as it bypasses the Fiber API
```

Or:

```scala
val binding = Fiber build new Area {
  io.axiDown << axibr.down
  ..
}
```
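The scheduling idea behind this can be modeled with plain Scala futures (a toy analogy, not the spinal.core.fiber implementation): a forked "binding" task blocks until a value produced by a later phase becomes available, instead of the main thread reading it too early.

```scala
// Toy model of forking an elaboration thread that waits on a late-produced value.
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val axiDown = Promise[String]()   // stands in for axibr.down, filled by a fiber phase

// Forked "elaboration thread": waits for the handle instead of blocking the main thread
val binding = Future { s"io.axiDown << ${Await.result(axiDown.future, 1.second)}" }

axiDown.success("axibr.down")     // the fiber phase completes later
val result = Await.result(binding, 1.second)
```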
-
It is curious; when I try the tilelink Opcode.A stuff on my side, it seems fine:

```vhdl
type A is (PUT_FULL_DATA,PUT_PARTIAL_DATA,GET,ACQUIRE_BLOCK,ACQUIRE_PERM);
```

Which version of Scala do you use? Do you have anything specific to your setup?

For the Cache stuff, the proper fix is to provide a naming space to the parent scope :)

```scala
object Cache extends AreaObject {
  val CtrlOpcode = new SpinalEnum {
    val ACQUIRE_BLOCK, ACQUIRE_PERM, RELEASE, RELEASE_DATA, PUT_PARTIAL_DATA, PUT_FULL_DATA, GET, EVICT = newElement()
  }
}
```

I will push that fix.
-
Hi,
I'm working on getting Tilelink support in SpinalHDL :
https://github.com/SpinalHDL/SpinalHDL/tree/tilelink/lib/src/main/scala/spinal/lib/bus/tilelink
The idea is to have a negotiation framework to solve the parameters, but not in the same way as rocket's Diplomacy. Mostly, it would use the Handle / spinal.core.fiber framework to allow different elaboration threads to "solve" the design.
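The Handle idea can be sketched in plain Scala (a toy model, not the real spinal.core.fiber API): a parameter starts as an empty placeholder, one elaboration thread loads it exactly once, and others read it once it is solved, so elaboration order is decoupled from declaration order.

```scala
// Toy model of a Handle: a write-once placeholder for a negotiated parameter.
class Handle[T] {
  private var value: Option[T] = None
  def load(v: T): Unit = { require(value.isEmpty, "already loaded"); value = Some(v) }
  def get: T = value.getOrElse(throw new IllegalStateException("not yet solved"))
  def isLoaded: Boolean = value.isDefined
}

val dataWidth = new Handle[Int]
// One "thread" proposes the parameter, another consumes it once solved
dataWidth.load(64)
val beatBytes = dataWidth.get / 8
```

The real framework adds the fiber scheduling on top, so a reader automatically suspends until the handle is loaded instead of throwing.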
Here is an example of the current usage API :
https://github.com/SpinalHDL/SpinalHDL/blob/tilelink/lib/src/test/scala/spinal/lib/bus/tilelink/InterconnectTester.scala#L38
Let me know if you are interested in using / contributing to the feature ^^