
Trying to fix #96 #134

Closed
wants to merge 8 commits into agronholm:master from txomon:feature/asyncio-truly-asyncio

Conversation

@txomon (Contributor) commented Apr 28, 2016

So, @agronholm, it took me one day of work to figure out the best layout for asyncio.

I did a lot of experiments, but I think this is the best answer. I haven't looked too closely at compatibility with older Python 3 versions, but it should support Python 3.3 onwards.

Asyncio is not available before that anyway (AFAIK).

@txomon (Contributor, Author) commented Apr 28, 2016

BTW, I expect tests to fail terribly because I didn't touch them...

@agronholm (Owner) commented Apr 28, 2016

I see quite a few problems here:

  1. This will break Python 2.7 compatibility due to the use of "yield from" and not supporting the Trollius backport (which AsyncIOScheduler does)
  2. This will break backwards compatibility because AsyncIOExecutor now always runs everything in the event loop thread where it previously ran jobs in the default (thread pool) executor
  3. BasePoolExecutor submits a task pointing to a bound method, which will break process pool based executors (this is the reason run_job() was a free function; see the sketch after this list)
  4. It duplicates a lot of code, which I was specifically hoping to avoid
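
To make point 3 concrete, here is a minimal sketch (not APScheduler code; the class name and job ids are invented) of why handing a bound method to a process pool is fragile: the pool has to pickle whatever callable it is given, bound methods are not picklable at all on Python 2.7, and on any version the instance dragged along with the method may not be picklable either. A module-level function avoids both problems.

# Sketch only: a free function pickles cleanly, a bound method is risky for
# process pools (and fails outright on Python 2.7).
from concurrent.futures import ProcessPoolExecutor


def run_job(job_id):
    # module-level function: picklable by name, safe to send to a worker
    return 'ran %s' % job_id


class BrokenExecutor(object):
    def _run_job(self, job_id):
        # bound method: pickling it also pickles `self`, which fails on
        # Python 2.7 and whenever the executor holds unpicklable state
        return 'ran %s' % job_id


if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=1) as pool:
        print(pool.submit(run_job, 'job1').result())        # works
        # pool.submit(BrokenExecutor()._run_job, 'job1')     # the risky case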

@agronholm (Owner)

I don't want to discourage you, but this is a hard problem and may require one or more backwards incompatible API changes to be properly implemented.

@txomon (Contributor, Author) commented Apr 28, 2016

Thanks for the fast review!

  1. I completely agree; I had totally forgotten about trollius =) I will make the changes to support 2.7
  2. This was done on purpose. With asyncio, we should not need more than one thread. This is my main complaint about the current implementation and the reason for this patch
  3. I will have a look at that
  4. It only duplicates the parts needed to support both coroutines and normal functions. If you check the diff, I have only added the necessary support for Futures, etc.

About 1: I had totally forgotten about trollius, so I assumed breaking 2.7 compatibility wasn't a problem because asyncio couldn't be used there anyway.

@agronholm (Owner) commented Apr 28, 2016

Regarding (2), a point release should not break backwards compatibility (although this inadvertently happened with 3.1). However, as coroutine callables didn't work to begin with, it'd be acceptable to get them to work while still running regular functions in the default thread pool.
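
A rough sketch of that compromise, with invented names (SketchAsyncIOExecutor, submit_job); this is not the actual AsyncIOExecutor code, only the dispatch shape being described: coroutine jobs run on the event loop, everything else keeps going to the default thread pool.

# Illustrative only: coroutine callables are scheduled on the loop itself,
# regular callables keep their old behaviour (default thread pool executor).
import asyncio


class SketchAsyncIOExecutor(object):
    def __init__(self, eventloop):
        self._eventloop = eventloop

    def submit_job(self, job):
        if asyncio.iscoroutinefunction(job.func):
            # new behaviour: run the coroutine in the event loop's own thread
            return asyncio.ensure_future(job.func(*job.args, **job.kwargs),
                                         loop=self._eventloop)
        # old behaviour: offload regular functions to the default thread pool
        return self._eventloop.run_in_executor(
            None, lambda: job.func(*job.args, **job.kwargs))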

@txomon (Contributor, Author) commented Apr 28, 2016

BTW, reading the trollius documentation, it looks like the project is deprecated... http://trollius.readthedocs.io/deprecated.html

@agronholm (Owner)

I'm aware. But it does work.

@agronholm (Owner)

In APScheduler 4.0 I will think about ditching Trollius.

txomon force-pushed the feature/asyncio-truly-asyncio branch from 413a8cf to 8deb71d on April 28, 2016 18:26

The inline review comments below refer to this excerpt from the diff:

    else:
        self._logger.info('Job "%s" executed successfully', job)
        return JobExecutionEvent(EVENT_JOB_EXECUTED, job.id, job._jobstore_alias, run_time, retval=retval)
if real_asyncio:

@agronholm (Owner):

I hope you're not being serious about this duplication. It also won't fix the syntax error. There are better ways to emulate "yield from" in a way that works for both asyncio and trollius.

@txomon (Contributor, Author):

I am all ears =)

The alternative was to place if/else branches depending on trollius or asyncio around the returns and the yield from, but this looked better to me.

I am really out of ideas on how to emulate yield from.

@agronholm (Owner):

Native coroutines have the __await__ method which returns an iterable. On Python < 3.4, you can just iterate over the generator directly:

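# drive the job's generator by hand, passing each value it yields onward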
for v in job.func(*job.args, **job.kwargs):
    retval = yield v

I haven't actually done this kind of emulation before but this should help you figure out a common code path that works for both asyncio and trollius plus all versions of Python.

@agronholm (Owner):

There is also an option of using the async() or ensure_future() function which should work nicely together with run_on_executor which also returns a future. You'll need to experiment a bit though!
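
A hedged sketch of that suggestion, with made-up names (submit_coroutine_job, on_done): wrap the coroutine in a Task via ensure_future() and attach a done-callback, so the result or exception can be picked up without anyone having to await the future explicitly.

# Illustrative only: ensure_future() schedules the coroutine as a Task, and the
# done-callback is where a scheduler could turn the outcome into job events.
import asyncio


def submit_coroutine_job(loop, job, on_done):
    future = asyncio.ensure_future(job.func(*job.args, **job.kwargs), loop=loop)

    def _callback(fut):
        # fut.result() / fut.exception() are available here once the job ends
        on_done(fut)

    future.add_done_callback(_callback)
    return future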

@txomon (Contributor, Author):

> Native coroutines have the __await__ method which returns an iterable. On Python < 3.4, you can just iterate over the generator directly:

The problem is that generators don't accept a return with a value. I already tried, and the function doesn't even get to execute; that's why trollius has the raise Return() workaround.

> There is also an option of using the async() or ensure_future() function which should work nicely together with run_on_executor which also returns a future. You'll need to experiment a bit though!

The problem with these is that you need someone to read the future's result, or it won't execute. At the moment, with the future + callback approach it works smoothly.
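
For readers unfamiliar with the raise Return() point, here is a small sketch of the two styles being contrasted (the function names are invented): trollius on Python 2 has to smuggle the return value out through an exception, while the Python 3 form uses yield from plus a real return, which is exactly the syntax Python 2 refuses to compile.

# Python 2 / trollius style: a generator cannot `return value`, so trollius
# provides Return() to carry the value out as an exception.
import trollius
from trollius import From, Return


@trollius.coroutine
def run_coroutine_job_py2(job):
    retval = yield From(job.func(*job.args, **job.kwargs))
    raise Return(retval)          # the only way to "return" a value here


# Python 3.3+ style, shown as a comment because mixing it into the same module
# is precisely the SyntaxError problem discussed further down:
#
#     def run_coroutine_job_py3(job):
#         retval = yield from job.func(*job.args, **job.kwargs)
#         return retval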

@txomon (Contributor, Author) commented Apr 28, 2016

@agronholm Perfect, nice to see proper semantic versioning =)

Anyway, I have updated this PR with the latest solution, re-extracting run_job to keep pool executors happy, and created a dirty if/else for trollius.From / trollius.Return.

Is there anything else, apart from that, that I should take into account?

@coveralls commented Apr 28, 2016

Coverage decreased (-1.8%) to 91.423% when pulling b0f75dc on txomon:feature/asyncio-truly-asyncio into 72769ab on agronholm:master.

@coveralls commented Apr 28, 2016

Coverage decreased (-1.8%) to 91.423% when pulling 16b423e on txomon:feature/asyncio-truly-asyncio into 72769ab on agronholm:master.

@coveralls commented Apr 29, 2016

Coverage decreased (-1.8%) to 91.423% when pulling 0bef535 on txomon:feature/asyncio-truly-asyncio into 72769ab on agronholm:master.

@agronholm (Owner)

A "return" with a value in a generator function is still a syntax error in Python 2, so this won't work. But I like your notion of moving run_job() to the executor class. It just needs to be a class method or static method in order to work with PoolExecutor (using process pools). This just needs a lot more deduplication and some innovative refactoring.
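
A minimal sketch of that arrangement (the class and method names are invented; this is not the real PoolExecutor): with a @staticmethod there is no bound self to pickle, and Python 3's pickle can locate the function through its qualified name, so a process pool will accept it.

# Illustrative only: run_job lives on the executor class but takes no `self`,
# so submitting it to a process pool only involves pickling a plain function.
from concurrent.futures import ProcessPoolExecutor


class SketchPoolExecutor(object):
    def __init__(self, pool):
        self._pool = pool

    @staticmethod
    def run_job(func, args, kwargs):
        # nothing executor-specific is captured here
        return func(*args, **kwargs)

    def submit(self, func, args=(), kwargs=None):
        return self._pool.submit(self.run_job, func, args, kwargs or {})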

@txomon (Contributor, Author) commented May 4, 2016

@agronholm But the if real_asyncio: check takes care of that, doesn't it? That's the whole point of using raise Return() instead of return in the else clause...

@agronholm (Owner)

No, you don't understand. Merely importing the module causes a SyntaxError in 2.7. Try it.
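
To make the point concrete, here is a tiny hypothetical module (the function name is invented) that Python 2.7 refuses to even compile; because the whole file is compiled before any code runs, no runtime guard such as if real_asyncio: can help.

# Importing this file on Python 2.7 fails immediately with a SyntaxError:
# "yield from" does not exist there, and `return retval` inside a generator is
# a second syntax error of its own. The failure happens at compile time,
# before any `if real_asyncio:` check could ever run.


def run_coroutine_job(job):
    retval = yield from job.func(*job.args, **job.kwargs)
    return retval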

@txomon (Contributor, Author) commented May 4, 2016

@agronholm, I need to add tests then. The only real alternative I can think of is separating it into two files... It's the cleanest solution, with no dirty if/else; we would need to duplicate a little code, but I think it's the best option. Does any alternative come to your mind?

@agronholm (Owner)

I'm still banking on using create_task() or ensure_future() and adding a callback for that. Unfortunately there are higher priority issues for me to fix so I can't really concentrate on this feature yet.

@agronholm (Owner)

I've taken another in-depth look at this problem. The code that checks for misfires absolutely must run in the same task as the target function, so "yield from" is a must. It's starting to look like code duplication is pretty much unavoidable, at least until the next major version, where the API can be renovated. The immediate problem here is using "yield from" in a way that still lets the library work on Python 2.
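
A hedged sketch of what running "in the same task" means here, written with today's async/await for brevity (in 2016 this would have been a generator coroutine using yield from); the names and the simplified misfire check are invented, not APScheduler's real run_job():

# Sketch only: the misfire decision, the delegation to the job and the
# result/exception handling all live in one coroutine, so they share one Task.
from datetime import datetime, timedelta, timezone


async def run_coroutine_job(job, run_time, misfire_grace_time=1):
    if datetime.now(timezone.utc) - run_time > timedelta(seconds=misfire_grace_time):
        return 'missed'                                    # misfire detected in-task

    try:
        retval = await job.func(*job.args, **job.kwargs)   # same task as the check
    except Exception as exc:
        return ('error', exc)
    return ('executed', retval)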

@txomon (Contributor, Author) commented Jul 17, 2016

Yes, the only way I found was to have two modules, one for Python 2 with tulip and the other for Python 3 with coroutines.

@agronholm (Owner)

I ended up limiting support to native coroutines and as such, excluding trollius. Even then it was fairly painful. Thank you for your effort even though it didn't end up merged.

@agronholm closed this on Sep 15, 2016
@txomon (Contributor, Author) commented Sep 15, 2016

Yeah, it was really painful for me too, don't worry about the code!! Thanks for the hard work!
