
permits black to run in AWS Lambda #1141

Merged
1 commit merged into psf:master from the permit_black_to_run_in_aws_lambda branch on May 8, 2020

Conversation

allan-simon
Contributor

AWS Lambda and some other virtualized environments may not permit access
to /dev/shm on Linux, and as such, trying to use ProcessPoolExecutor will
fail.

As parallelism is only a 'nice to have' feature of black, if it fails
we gracefully fall back to a single-process implementation, which permits black
to finish normally.

Should fix #1138 and #776.

black.py: outdated review comment (resolved)
@allan-simon force-pushed the permit_black_to_run_in_aws_lambda branch from baee24f to 7a75383 on November 5, 2019 at 20:43
@zsol
Collaborator

zsol commented Nov 5, 2019

Would be amazing if you could add a test that patches out ProcessPoolExecutor and makes it raise so we have coverage of that branch
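
For illustration, a minimal sketch of such a test, assuming a pytest-style setup with click's CliRunner; the test name, the tmp_path fixture usage, and the direct CliRunner invocation are assumptions, not the test that was eventually merged:

    # A sketch, not the merged test: patch ProcessPoolExecutor so that constructing
    # it raises OSError, then check that formatting multiple files still succeeds.
    from pathlib import Path
    from unittest.mock import patch

    from click.testing import CliRunner

    import black  # assumes black.py is importable as a module


    def test_formats_when_process_pool_unavailable(tmp_path: Path) -> None:
        # Two files, so black takes the multi-file code path that normally
        # creates a ProcessPoolExecutor.
        for name in ("a.py", "b.py"):
            (tmp_path / name).write_text("x = { 'a':1 }\n")

        # Simulate AWS Lambda, where creating the pool fails because /dev/shm
        # is not available.
        with patch("black.ProcessPoolExecutor", side_effect=OSError("no /dev/shm")):
            result = CliRunner().invoke(black.main, [str(tmp_path)])

        assert result.exit_code == 0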

@allan-simon
Contributor Author

@zsol haha I was afraid you would say that, hmmm let me try

@allan-simon force-pushed the permit_black_to_run_in_aws_lambda branch from 7a75383 to 8c00c45 on November 5, 2019 at 21:10
@allan-simon force-pushed the permit_black_to_run_in_aws_lambda branch from 8c00c45 to 5559f49 on November 5, 2019 at 21:15
@allan-simon
Contributor Author

@zsol I've tried something, but as I'm not that familiar with the unittest library and the way "invokeBlack" works, I wouldn't mind a review from you even though the tests are failing, to see if I'm missing the elephant in the room.

@hawkins

hawkins commented Dec 18, 2019

I'm not sure if either of you figured this out already or not, but I've found that simply replacing ProcessPoolExecutor wherever it occurs in black.py with ThreadPoolExecutor allows Black to run on AWS Lambda without issue. I can't speak to other virtual environments though.

So, I would suggest using ThreadPoolExecutor instead of ProcessPoolExecutor in the event that an OSError is raised.
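
For illustration, a minimal sketch of that kind of fallback; make_executor() is a hypothetical helper, not black's actual code:

    # Illustrative fallback, not the exact change merged in this PR: try to build
    # a process pool, and degrade gracefully when the platform cannot provide one.
    from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor
    from typing import Optional


    def make_executor(worker_count: int) -> Optional[Executor]:
        try:
            return ProcessPoolExecutor(max_workers=worker_count)
        except OSError:
            # AWS Lambda (no /dev/shm) and similar sandboxes end up here; either
            # hand back a ThreadPoolExecutor, or return None and let asyncio's
            # run_in_executor() fall back to its default thread pool.
            return ThreadPoolExecutor(max_workers=worker_count)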

@allan-simon
Contributor Author

allan-simon commented Dec 19, 2019

Yes, actually, passing None as the executor defaults to a thread pool, so I think it's more pragmatic to let Python use whichever default it considers appropriate? (I don't have a strong opinion on it.)
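
For context, when run_in_executor() is given None it uses asyncio's default executor, which is a ThreadPoolExecutor; a tiny standalone sketch (not black's code):

    # With executor=None, asyncio uses its default ThreadPoolExecutor, so no
    # multiprocessing primitives (and no /dev/shm) are needed.
    import asyncio


    def square(n: int) -> int:
        return n * n


    async def main() -> None:
        loop = asyncio.get_running_loop()
        result = await loop.run_in_executor(None, square, 7)
        print(result)  # 49


    asyncio.run(main())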

@allan-simon
Contributor Author

@hawkins By the way, help would be appreciated on this PR; I'm struggling with the test, so it would be great if you're also interested in getting it upstream :)

@hawkins

hawkins commented Dec 19, 2019

Oh - so by setting executor = None we still get parallelism, just through a ThreadPoolExecutor? Very cool. Have you tested this code on Lambda?

And sure, I can try to take a stab at the test too.

@allan-simon
Contributor Author

@hawkins Yes, our CI is currently based on my fork, running in Lambda, so I can assure you it's working :)

@hawkins

hawkins commented Dec 19, 2019

Just dove through the asyncio docs and now I see how that works. Thanks! Very cool, nice job with using None then 👌

@JelleZijlstra
Collaborator

CI seems unhappy, could you take a look? Thanks!

@allan-simon
Contributor Author

@JelleZijlstra, I tried in the past, but I think I've reached the limit of my understanding of how black's tests are written. (i.e. I think the test I've added is not written exactly the way it should be, but why? 🤷‍♂️)

@Terrance
Contributor

For what it's worth, Termux on Android also doesn't support ProcessPoolExecutor, and fails in the same place, though slightly differently:

Traceback (most recent call last):
  File "/data/data/com.termux/files/usr/lib/python3.8/multiprocessing/synchronize.py", line 28, in <module>
    from _multiprocessing import SemLock, sem_unlink
ImportError: cannot import name 'SemLock' from '_multiprocessing' (/data/data/com.termux/files/usr/lib/python3.8/lib-dynload/_multiprocessing.cpython-38.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/data/com.termux/files/usr/bin/black", line 8, in <module>
    sys.exit(patched_main())
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/black.py", line 4135, in patched_main
    main()
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/black.py", line 463, in main
    reformat_many(
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/black.py", line 529, in reformat_many
    executor = ProcessPoolExecutor(max_workers=worker_count)
  File "/data/data/com.termux/files/usr/lib/python3.8/concurrent/futures/process.py", line 555, in __init__
    self._call_queue = _SafeQueue(
  File "/data/data/com.termux/files/usr/lib/python3.8/concurrent/futures/process.py", line 165, in __init__
    super().__init__(max_size, ctx=ctx)
  File "/data/data/com.termux/files/usr/lib/python3.8/multiprocessing/queues.py", line 42, in __init__
    self._rlock = ctx.Lock()
  File "/data/data/com.termux/files/usr/lib/python3.8/multiprocessing/context.py", line 67, in Lock
    from .synchronize import Lock
  File "/data/data/com.termux/files/usr/lib/python3.8/multiprocessing/synchronize.py", line 30, in <module>
    raise ImportError("This platform lacks a functioning sem_open" +
ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.

Replacing the executor creation to fall back to a ThreadPoolExecutor also works here. This pull request only catches OSError, though, so it wouldn't detect this case at present.
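
For illustration, a sketch of the broader fallback being described, catching ImportError as well as OSError; pick_executor() is a hypothetical helper, not black's actual code:

    from concurrent.futures import Executor, ProcessPoolExecutor
    from typing import Optional


    def pick_executor(worker_count: int) -> Optional[Executor]:
        try:
            return ProcessPoolExecutor(max_workers=worker_count)
        except (ImportError, OSError):
            # OSError: e.g. /dev/shm unavailable on AWS Lambda.
            # ImportError: e.g. missing sem_open/SemLock on Termux (Android).
            return None  # caller falls back to a single-process run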

@zsol self-assigned this on May 8, 2020
@ambv merged commit c0a7582 into psf:master on May 8, 2020
@ambv
Collaborator

ambv commented May 8, 2020

Thanks! ✨ 🍰 ✨

@hugovk
Contributor

hugovk commented May 8, 2020

Looks like this has turned the CI red on master?

mypy.....................................................................Failed
- hook id: mypy
- exit code: 1

black.py:627:20: error: Incompatible types in assignment (expression has type "None", variable has type "ProcessPoolExecutor")
tests/test_black.py:1277:5: error: Function is missing a type annotation for one or more arguments
Found 2 errors in 2 files (checked 6 source files)
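
The first error comes from assigning None to a variable mypy inferred as ProcessPoolExecutor; a guess at the kind of fix needed (not the actual follow-up commit) is to widen the annotation:

    from concurrent.futures import Executor, ProcessPoolExecutor
    from typing import Optional

    # Declaring Optional[Executor] up front lets the variable hold either a real
    # pool or None without tripping mypy.
    executor: Optional[Executor]
    try:
        executor = ProcessPoolExecutor(max_workers=4)
    except OSError:
        executor = None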

@JelleZijlstra
Collaborator

@ambv is telling me he's fixing it.

@Terrance
Contributor

Terrance commented May 9, 2020

Looks like the PR didn't get updated to also handle ImportError -- should I raise a separate issue for that, or can that be fixed whilst we're here?

@JelleZijlstra
Collaborator

@Terrance just send a new PR and we'll take a look!

@Terrance
Contributor

Terrance commented May 9, 2020

Submitted #1400.

Development

Successfully merging this pull request may close these issues.

Feature request: permits to disable parallel run for system without support for it
8 participants