Consider fork-like interface that doesn't schedule new coroutine immediately #1541
That's how asyncio does it? |
Yes, I believe so. From a StackOverflow example:

```python
task = asyncio.create_task(coro)
await asyncio.sleep(0)  # return control to the loop so the task can start
```
|
What's the use case for this? As mentioned in #1526, this can be implemented by adding coroutines to the scheduler using We don't really have a way to force a reschedule. I also brought up in my comment that, because you must yield control to the scheduler for the coroutines to start, and you may forget to do this, it is possible for the test to end before the coroutine runs, resulting in unexpected or invalid behavior. We need a way to force the scheduler to run after a call to |
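To illustrate that pitfall, here is a minimal sketch; `fork_deferred` is a hypothetical deferred-start fork, emulated with the `NullTrigger`-based wrapper idea that comes up later in this thread, and `my_checker` is a placeholder coroutine:

```python
import cocotb
from cocotb.triggers import NullTrigger


def fork_deferred(coro):
    # Hypothetical deferred-start fork: `coro` is queued but does not run
    # until the calling task next yields control to the scheduler.
    async def wrapper():
        await NullTrigger()
        await coro
    return cocotb.fork(wrapper())


@cocotb.test()
async def test_missing_yield(dut):
    fork_deferred(my_checker(dut))  # `my_checker` is a placeholder coroutine
    # Nothing below ever awaits a trigger, so the test can end before
    # `my_checker` runs at all -- exactly the failure mode described above.
```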
I'll amend this to: "What's the use case for supporting both behaviors?" I wouldn't be against changing the behavior of |
After diagnosing the root cause of #1688 and #1347, I am even more on board with the idea of |
It also puts the originating Task in a weird state, where it's not pending a trigger but it's also not the running task.

```python
await fork(coro())
```
|
@garmin-mjames What's the purpose of that trigger? |
The task that calls I guess it's sort of a halfway point between the two options, where it creates the new task without scheduling it, but then immediately yields to the scheduler to run the new task. It would preserve the current behavior whereby the scheduler runs the new task and then returns to the task that called |
@garmin-mjames I think we should just add the forked coroutine to the list of pending tasks, so that when the current task yields control back to the scheduler, the forked coroutine starts up. If you want it to start up immediately, you can simply yield control right away with a NullTrigger:

```python
future = fork(coro())  # `RunningTask` is 90% of the way to a future
# ...
await NullTrigger()
# ...
res = await future
```
|
@ktbarrett I agree.
|
@garmin-mjames I think we should change that behavior. If we are going to break the scheduler interface, we should just break it and do what we need to make it better. We need to queue up issues and detail what the end product of a scheduler rewrite should look like.

This is the test I used to check the behavior of NullTrigger:

```python
@cocotb.test()
async def test_nulltrigger_reschedule(dut):
    @cocotb.coroutine
    async def reschedule(n):
        for i in range(4):
            cocotb.log.info("Fork {}, iteration {}".format(n, i))
            await NullTrigger()

    await Combine(*(reschedule(i) for i in range(4)))
```

The output shows

Funnily enough, that test fails too! Yay scheduler! With EDIT: |
Nice test-case @ktbarrett, thanks |
I'm working on an overhaul of the scheduler debug logging, and as part of that I found something interesting that's related. If I add a Timer to that test:

```python
@cocotb.test()
async def test_nulltrigger_reschedule(dut):
    @cocotb.coroutine
    async def reschedule(n):
        for i in range(4):
            cocotb.log.info("Fork {}, iteration {}".format(n, i))
            await NullTrigger()
            await Timer(1)

    await Combine(*(reschedule(i) for i in range(4)))
```

Result:

When we are in the event loop, yielding a NullTrigger runs this code (lines 332 to 334 in a3fb4b5). I changed |
Bumping this because this issue is now directly blocking a feature a user (me) wants to implement.

One of our testbenching methodologies is built around the CSP model. Each part of the testbench is an independent process communicating with other parts via channels. The processes are implemented using

The limitation is that processes must be instantiated and then started as separate steps. This is because we don't want the process to start immediately (due to

Because we have to start the process as a separate step, we cannot support anonymous processes. All processes must be managed by some object; that object needs to know about the processes it manages and starts them up after all the testbench structure is instantiated.

We were hoping we could use the concept of anonymous processes for testbench customization. The idea is that when we delete an anchor point, anonymous processes that are only weakly referenced by their channels will be automatically pruned. For example, this is useful for customizing a Scoreboard in a reuse VIP, or for removing a Driver from an Agent when it should be passive. All Processes that connect to the deleted elements would be automatically pruned and not left dangling, which could improve performance or prevent errors.

Changing |
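A minimal sketch of the two-step instantiate-then-start pattern described in the comment above; `Channel` and `Producer` are illustrative names, not part of any cocotb release, and starting is deferred precisely because `cocotb.fork` would otherwise run the process immediately:

```python
import cocotb
from cocotb.triggers import Event


class Channel:
    """Illustrative single-slot CSP-style channel."""
    def __init__(self):
        self._data = None
        self._event = Event()

    async def send(self, item):
        self._data = item
        self._event.set()

    async def recv(self):
        await self._event.wait()
        self._event.clear()
        return self._data


class Producer:
    """A process that is constructed first and started later, as a separate step."""
    def __init__(self, chan):
        self._chan = chan
        self._task = None

    def start(self):
        # Deferred start: only called once the whole testbench structure
        # exists, because cocotb.fork() begins running the coroutine immediately.
        self._task = cocotb.fork(self._run())

    async def _run(self):
        for i in range(4):
            await self._chan.send(i)
```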
What does the code in #1541 (comment) do on master? (edit: it does what that comment describes, it's tested in #1705). My impression was we "fixed" |
@ktbarrett: Can you get what you need with

```python
def fork_later(coro):
    e = Event()
    async def wrapped():
        await e.wait()
        return (await coro)
    return e, cocotb.fork(wrapped())
```

used as

```python
e, task = fork_later(coro)
# later
e.set()
```
|
@eric-wieser I forgot that was fixed... The second suggestion is equivalent to what we have now but uses an Event. But if

```python
def run_task_soon(coro: Union[RunningTask, coroutine]) -> RunningTask:
    """Runs a coroutine concurrently, starting after the current coroutine yields control"""
    async def later():
        await NullTrigger()
        await coro
    return cocotb.fork(later())
```

I took this issue as a general complaint about the scheduler being recursive and causing a number of bugs, but that should probably be in a separate issue. |
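A usage sketch for the `run_task_soon` helper above; `drive_input` is a placeholder coroutine, and the `NullTrigger` yield is what actually lets the queued task begin:

```python
@cocotb.test()
async def test_run_task_soon(dut):
    task = run_task_soon(drive_input(dut))  # queued, not yet running
    # ... do any other setup here ...
    await NullTrigger()  # yield to the scheduler so the queued task can start
    await task.join()    # wait for it to finish
```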
Currently when calling `fork`, the new forked coroutine gets to run immediately, and the originating coroutine waits until `fork` returns before continuing. It might be useful to provide a way to queue the forked coroutine to run after the current one yields back to the scheduler.

Originally posted by @garmin-mjames in #1526 (comment)
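A small sketch of the current behavior being described, with hypothetical coroutine names; under `cocotb.fork` the child runs up to its first await before the parent resumes:

```python
import cocotb
from cocotb.triggers import Timer


async def child(dut):
    dut._log.info("child: runs immediately inside fork(), before the parent continues")
    await Timer(1)
    dut._log.info("child: resumes later, once the scheduler gets control back")


@cocotb.test()
async def parent(dut):
    cocotb.fork(child(dut))
    dut._log.info("parent: continues only after child hits its first await")
    await Timer(2)
```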