MultiError v2 #611
Thanks for writing this up! I'm indeed thinking about how we can implement MultiErrors in asyncio, and the more I think about this the more I'm leaning towards not adding a full-blown MultiError semi-standard API at all. What I think would be enough instead is to have a new standard attribute on Exception objects to represent a list of exceptions this exception instance wraps. Let's assume we name that new attribute
@1st1 Hmm, yeah, our strategy there is an important question. I'm not sure I agree that pushing

Anyway, there'll be plenty of time later to decide on our python-dev strategy. For right now... you're going to need a "full-blown MultiError API" in any case, right, in some namespace or another? And IIRC you were hoping to prototype its usage in uvloop, right, so its initial implementation will be outside the stdlib? I think it'd be great to collaborate on this so it works for both asyncio and trio, and for right now it seems like the way to do that is to make a
(I love the new profile picture by the way, that's a great photo)
Played around with this a bit tonight and re:
...it immediately became clear that what you actually want is not the ability to annotate a
Old way:
- exceptions and regular returns from main were captured, then re-raised/returned from init, then captured again, then re-raised/returned from run()
- exceptions in system tasks were converted into TrioInternalErrors (one-at-a-time, using MultiError.filter()), except for a whitelist of types (Cancelled, KeyboardInterrupt, GeneratorExit, TrioInternalError), and then re-raised in the init task.
- exceptions in the run loop machinery were caught in run(), and converted into TrioInternalErrors (if they weren't already).

New way:
- exceptions and regular returns from main are captured, and then re-raised/returned from run() directly
- exceptions in system tasks are allowed to propagate naturally into the init task
- exceptions in the init task are re-raised out of the run loop machinery
- exceptions in the run loop machinery are caught in run(), and converted into TrioInternalErrors (if they aren't already).

This needs one new special case to detect when spawning the main task itself errors, and to treat that as a regular non-TrioInternalError, but otherwise it simplifies things a lot. And it removes 2 unnecessary traceback frames from every trio traceback.

Removing the special-case handling for some exception types in system tasks did break a few tests. It's not as bad as it seems though:
- Cancelled was allowed through so it could reach the system nursery's __aexit__; that still happens. But now if it's not caught there, it gets converted into TrioInternalError instead of being allowed to escape from trio.run().
- KeyboardInterrupt should never happen in system tasks anyway; not sure why we had a special case to allow this.
- GeneratorExit should never happen; if it does, it's probably because things blew up real good, and then the system task coroutine got GC'ed, and called coro.close(). In this case letting it escape is the right thing to do; coro.close() will catch it. In other cases, letting it escape and get converted into a TrioInternalError is fine.
- Letting TrioInternalError through works the same as before.

Also, if multiple system tasks crash, we now get a single TrioInternalError with the original MultiError as a __cause__, rather than a MultiError containing multiple TrioInternalErrors. This is probably less confusing, and it's more compatible with the #611 approach to things.
Looking at @belm0's example code here, I realized that this proposal is also going to complicate code that "hides" a nursery inside a custom context manager: python-trio/trio-websocket#20 (comment)

This code looks innocuous:

```python
async with open_websocket("https://...") as ws:
    raise ValueError
```

but if

We can't hide this entirely, because of #264 – if

But, for the
In fact, this is exactly the pattern used by Trio's internal "system nursery" (with the main task playing the role of the

Maybe it should be something we support explicitly, e.g. with a custom nursery type that implements the above logic, or some sort of setting passed to
This is a nice example too because it forces us to think about how we can access attributes of embedded exceptions.
Subprocesses might be too big of a first bite for me, but I'd like to take a stab at this (much easier since it should be doable in pure Python). Here is my work in progress. I would like to add more tests of the contract before trying to integrate it into Trio. Do you have any ideas for Trio-agnostic tests? Any other comments? How best can I proceed to fix this issue?
@thejohnfreeman Sorry I didn't get back to you before! This is a high priority but I started writing up a bunch of notes from the Python core sprints a few weeks ago, stalled out, and then have been stuck on that..... let me finish those up real quick and push my totally incomplete prototype to https://github.com/python-trio/exceptiongroup , and then we can compare notes and figure out how to move this forward :-)
Notes from the python core summit

@1st1 and I spent a bunch of time talking about this at the python core summit a few weeks ago; these are my notes from that. The initial goal is to get enough consensus that we can start writing code and get some experience. So all of the conclusions below are tentative and subject to change, but it's looking like we have enough to get started: https://github.com/python-trio/exceptiongroup

Topics discussed

Should MultiError inherit from BaseException, or from Exception? (Or something else?)

Quick review:

```
In [1]: BaseException.__subclasses__()
Out[1]: [Exception, GeneratorExit, SystemExit, KeyboardInterrupt]
```

Also,
In Trio,
However, if
At least with regards to catching exceptions, the general principle is: a
In principle we could dynamically set each
So trying to get super clever here doesn't seem to be a good idea. If we do anything here, it seems like it should be a special case targeted specifically at those
It seems like the two options are:

Yury is tentatively planning to take the second approach in asyncio. I don't want to do that in Trio; too much ad-hackery. So is there a way we can still share our toys? Plan: make a

```python
class TaskGroupError(MultiError, Exception):
    ...
```

to make a version of
This may mean that
If we have multiple
In this context there was also some discussion of whether we wanted a generic protocol ("any exception object can contain sub-exceptions by defining an

What goes into Python?

Yury thinks we might want to be really minimal with what actually goes into Python: like maybe just the exception type and the traceback printers, but leave

Name

I've been saying
Guido disliked the
After scrutinizing all the
We did some brainstorming during lunch; I suggested
@thejohnfreeman OK, so I pushed my very-rough-draft code to the exceptiongroup repo so you can at least see it! I think we'll want to move over there and use that name. But, I don't know what code we'll want to use – I haven't really had a chance to look at yours yet. So I guess the next step is for someone to read through both sets of code and figure out which parts are most worth salvaging, and then moving forward in whatever direction makes sense? If you're still interested in working on this, then I guess that's what I'd suggest doing next? (I also want to work on it, but I'm juggling a lot of things, so if you want to take the lead on that it's fine with me :-).)
Cases of MultiError in my app tend to be Cancelled combined with something else, and so far I always want to defer to Cancelled. I'm using this pattern:

```python
try:
    ...
except Foo:
    ...
except MultiError as e:
    raise MultiError.filter(lambda exc: None if isinstance(exc, Foo) else exc, e)
```
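For readers who haven't used it, `MultiError.filter` prunes handled leaves and collapses what's left. A rough pure-Python sketch of those semantics, using a hypothetical stand-in class (not trio's real implementation, which also patches tracebacks):

```python
# Illustrative stand-in for trio.MultiError -- just enough to show how
# filter's prune-and-collapse semantics behave.
class MultiError(BaseException):
    def __init__(self, exceptions):
        self.exceptions = list(exceptions)

def multi_error_filter(handler, exc):
    """Apply handler to each leaf exception; drop Nones, collapse singles."""
    if isinstance(exc, MultiError):
        kept = [
            result
            for child in exc.exceptions
            if (result := multi_error_filter(handler, child)) is not None
        ]
        if not kept:
            return None
        if len(kept) == 1:
            return kept[0]  # the "collapsing" behavior
        return MultiError(kept)
    return handler(exc)

class Foo(Exception):
    pass

err = MultiError([Foo(), ValueError("boom")])
remainder = multi_error_filter(
    lambda e: None if isinstance(e, Foo) else e, err)
# Foo was dropped, and the one-element MultiError collapsed to the bare ValueError.
```

This is why the pattern above can `raise` the filter result directly: if only the deferred-to exception survives, the wrapper is gone.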
I wonder if we can improve the ergonomics (i.e., put the exception handler below the code it handles exceptions in, like Python normally does) with something like:

```python
try:
    # code
except:
    @ExceptionGroup.handle_each(FooException)
    def catch(exc):
        # log, raise another exception (which would chain), etc
        ...
```

Which would be implemented along the lines of

```python
class ExceptionGroup:
    ...
    @staticmethod
    def handle_each(type, match=None):
        def decorate(catcher):
            with ExceptionGroup.catch(type, match=match):
                raise
        return decorate
```
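The decorator trick itself is implementable today. A minimal pure-Python toy (the `handle_each` here is hypothetical and unrelated to any real `ExceptionGroup` API; it just shows that a decorator applied inside an `except:` block can act on the in-flight exception immediately):

```python
# Toy demonstration: decorating inside `except:` runs the handler right
# away against the exception currently being handled.
import sys

def handle_each(exc_type):
    def decorate(handler):
        exc = sys.exc_info()[1]   # the in-flight exception
        if isinstance(exc, exc_type):
            handler(exc)          # handled: swallow it
        else:
            raise                 # not ours: re-raise the in-flight exception
    return decorate

log = []
try:
    raise ValueError("boom")
except:
    @handle_each(ValueError)
    def _(exc):
        log.append(type(exc).__name__)
```

The real design question (how this interacts with splitting a group and re-raising the remainder) is exactly what the thread goes on to discuss.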
Or possibly even

```python
try:
    ...
except:
    @ExceptionGroup.handle(FooException)
    def handler(exc):
        ...
```

I'm not sure how to extend this to chain multiple blocks together though. Perhaps:

```python
try:
    ...
except:
    handler_chain = exceptiongroup.HandlerChain()

    @handler_chain(FooException)
    def handler(exc):
        ...

    @handler_chain(BarException)
    def handler(exc):
        ...

    handler_chain.run()
```
We could let
I guess this adds a traceback frame that could be avoided with
@oremanj That doesn't work with the v2 design, because
Oh! I had assumed that

I suspect we might want interfaces for both "handle everything in the ExceptionGroup that's a subtype of X" (for logging errors) and "handle each atomic exception that's a subtype of X separately" (for more specific handling). Maybe there's something I'm overlooking though...
Hi all - what's the current state of the

Hypothesis is currently considering how to generalise minimal examples, and one of the major challenges is that our status quo for reporting multiple errors is not as helpful as we'd like it to be (for example, it doesn't support

So if there is or soon will be a widely shared way of doing this which we could support, that would probably improve the lives of our users as well as developers! I'd be happy to help out too, if there's some way to do so 😄
I thought I understood
There are two errors here:
I ended up writing a new filter that removes cancellations and all but one
@mehaase I think what you want is a decorator like

It's along the lines of https://trio-util.readthedocs.io/en/latest/#trio_util.defer_to_cancelled

In fact I think it could be generalized across the two cases by including
I tried to make an API and implementation for a general

```python
from typing import Type

def multi_error_defer_to(*privileged_types: Type[BaseException],
                         propagate_multi_error=True,
                         strict=True):
    """
    Defer a trio.MultiError exception to a single, privileged exception

    In the scope of this context manager, a raised MultiError will be coalesced
    into a single exception with the highest privilege if the following
    criteria are met:
        1. every exception in the MultiError is an instance of one of the given
           privileged types

    additionally, by default with strict=True:
        2. there is a single candidate at the highest privilege after grouping
           the exceptions by repr(). For example, this test fails if both
           ValueError('foo') and ValueError('bar') are the most privileged.

    If the criteria are not met, by default the original MultiError is
    propagated. Use propagate_multi_error=False to instead raise a
    RuntimeError in these cases.

    Synopsis:
        multi_error_defer_to(trio.Cancelled, MyException)
            MultiError([Cancelled(), MyException()]) -> Cancelled()
            MultiError([Cancelled(), MyException(), MultiError([Cancelled(), Cancelled()])]) -> Cancelled()
            MultiError([Cancelled(), MyException(), ValueError()]) -> *no change*
            MultiError([MyException('foo'), MyException('foo')]) -> MyException('foo')
            MultiError([MyException('foo'), MyException('bar')]) -> *no change*

        multi_error_defer_to(MyImportantException, trio.Cancelled, MyBaseException)
            # where isinstance(MyDerivedException, MyBaseException)
            # and isinstance(MyImportantException, MyBaseException)
            MultiError([Cancelled(), MyDerivedException()]) -> Cancelled()
            MultiError([MyImportantException(), Cancelled()]) -> MyImportantException()
    """
```
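As a rough illustration of the coalescing rules that docstring spells out (this is my sketch with a hypothetical stand-in `MultiError` and `Cancelled`, not the actual implementation linked below):

```python
# Sketch: pick the single most-privileged leaf exception, or None if the
# MultiError should propagate unchanged.
class MultiError(BaseException):
    def __init__(self, exceptions):
        self.exceptions = list(exceptions)

def _leaves(exc):
    if isinstance(exc, MultiError):
        for child in exc.exceptions:
            yield from _leaves(child)
    else:
        yield exc

def defer_to(privileged_types, exc, strict=True):
    leaves = list(_leaves(exc))
    # criterion 1: every leaf must be an instance of a privileged type
    if not all(isinstance(e, tuple(privileged_types)) for e in leaves):
        return None
    # privilege = position of the first matching type in the argument list
    def rank(e):
        return next(i for i, t in enumerate(privileged_types) if isinstance(e, t))
    best = min(rank(e) for e in leaves)
    candidates = [e for e in leaves if rank(e) == best]
    # criterion 2 (strict): all top candidates must group to one repr()
    if strict and len({repr(e) for e in candidates}) > 1:
        return None
    return candidates[0]

class Cancelled(BaseException):
    pass

err = MultiError([Cancelled(), MultiError([Cancelled(), ValueError("x")])])
```

Here `defer_to([Cancelled], err)` returns `None` (the `ValueError` is not privileged), while `defer_to([Cancelled, ValueError], err)` coalesces to one of the `Cancelled` instances.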
I've published
`ExceptionGroup([KeyboardInterrupt()])` needs special handling also: https://github.com/python/cpython/pull/21956/files#diff-75445bdc3b6b3dd20b005698fa165444R290
?

…On Wed, 21 Oct 2020, 22:50 Guido van Rossum wrote: See also #611
Hi, I have a suggestion to make here. Even though I'm the one making it, even I'm not really sure it's a good idea. But it doesn't seem to have been mentioned before so I wanted to at least bring it up. I'm quite a new (/naive) user of both Trio and asyncio, and I was trying to get a classic catch-all exception handler to work:

My suggestion is something a bit blunt but a lot more straightforward than runtime manipulation of base classes. It's to actually have two classes instead of one:
There could be a helper function to build a multi-error object from a list of child exceptions, and you'd get a

The main weakness with my suggestion is that it's very focused on my one specific use case (albeit quite a common one). If you want to find some specific type of exception (other than
For subscribers of this issue: there's now PEP 654 for

Guess now would be a good time for feedback, if you have opinions™
With PEP-678, you can loop over the inner exceptions and add a note to each:

Line 513 in 7b86d20

```python
exc = MultiError.filter(self._exc_filter, exc)
for e in exc.exceptions:
    e.add_note(f"This came from {scope_task!r}")
```

(OK, in practice we'll also need the
Closing this issue because Python 3.11 has been released with a built-in `ExceptionGroup`.

Thanks to everyone involved in this remarkably long effort!
What's the backport's equivalent to |
```python
from exceptiongroup import ExceptionGroup, catch

def value_key_err_handler(excgroup: ExceptionGroup) -> None:
    for exc in excgroup.exceptions:
        print('Caught exception:', type(exc))

def runtime_err_handler(exc: ExceptionGroup) -> None:
    print('Caught runtime error')

with catch({
    (ValueError, KeyError): value_key_err_handler,
    RuntimeError: runtime_err_handler
}):
    ...
```
@graingert The unfortunate tree-like design of

```python
def leaf_generator(exc, tbs=None):
    if tbs is None:
        tbs = []
    tbs.append(exc.__traceback__)
    if isinstance(exc, BaseExceptionGroup):
        for e in exc.exceptions:
            yield from leaf_generator(e, tbs)
    else:
        # exc is a leaf exception and its traceback
        # is the concatenation of the traceback
        # segments in tbs.
        # Note: the list yielded (tbs) is reused in each iteration
        # through the generator. Make a copy if your use case holds
        # on to it beyond the current iteration or mutates its contents.
        yield exc, tbs
    tbs.pop()

def value_key_err_handler(excgroup: ExceptionGroup) -> None:
    for exc, tbs in leaf_generator(excgroup):
        print('Caught exception:', type(exc))
```

You'd have the same problem with

Edit: If you don't care about tracebacks then you could use this rather simpler leaf iterator (untested!):

```python
def leaf_generator(exc: BaseExceptionGroup):
    for e in exc.exceptions:
        if isinstance(e, BaseExceptionGroup):
            yield from leaf_generator(e)
        else:
            yield e
```
`MultiError` is the one part of trio's core design that I'm really not satisfied with. Trying to support multiple exceptions on top of the language's firm one-exception-at-a-time stance raises (heh) all kinds of issues. Some of them can probably be fixed by changing the design. But the hardest problem is that there are lots of third-party packages that want to do custom handling of exception tracebacks (e.g. ipython, pytest, raven/sentry). And right now we have to monkeypatch all of them to work with `MultiError`, with more or less pain and success.

Now @1st1 wants to add nurseries to asyncio, and as part of that he'll need to add something like `MultiError` to the stdlib. We talked a bunch about this at PyCon, and the plan is to figure out how to do this in a way that works for both asyncio and trio. Probably we'll start by splitting `MultiError` off into a standalone library, that trio and uvloop can both consume, and then add that library to the stdlib in 3.8 (and then the library will remain as a backport library for those using trio on 3.7 and earlier). This way asyncio can build on our experience, and trio can get out of the monkeypatching business (because if `MultiError`s are in the stdlib, then it becomes ipython/pytest/sentry's job to figure out how to cope with them).

But before we can do that we need to fix the design, and do it in writing so we (and Yury) can all see that the new design is right :-).
Current design
[If this were a PEP I'd talk more about the basic assumptions underlying the design: multiple errors can happen concurrently, you need to preserve that fact, you need to be able to catch some-but-not-all of those errors, you need to make sure that you don't accidentally throw away any errors that you didn't explicitly try to catch, etc. But this is a working doc so I'm just going to dive into the details...]
Currently, trio thinks of `MultiError` objects as being ephemeral things. It tries as much as possible to simulate a system where multiple exceptions just happen to propagate next to each other. So it's important to keep track of the individual errors and their tracebacks, but the `MultiError` objects themselves are just a detail needed to accomplish this.

So, we only create `MultiError`s when there are actually multiple errors – if a `MultiError` has only a single exception, we "collapse" it, so `MultiError([single_exc]) is single_exc`. The basic primitive for working with a `MultiError` is the `filter` function, which is really a kind of weird flat-map-ish kind of thing: it runs a function over each of the "real" exceptions inside a `MultiError`, and can replace or remove any of them. If this results in any `MultiError` object that has zero or one child, then `filter` collapses it. And the catch helper, `MultiError.catch`, is a thin wrapper for `filter`: it catches an exception, then runs a `filter` over it, and then reraises whatever is left (if anything).

One more important detail: traceback handling. When you have a nested collection of `MultiError`s, e.g. `MultiError([RuntimeError(), MultiError([KeyError(), ValueError()])])`, then the leaf exceptions' `__traceback__` attr holds the traceback for the frames where they traveled independently before meeting up to become a `MultiError`, and then each `MultiError` object's `__traceback__` attr holds the frames that that particular `MultiError` traversed. This is just how Python's `__traceback__` handling works; there's no way to avoid it. But that's OK, it's actually quite convenient – when we display a traceback, we don't want to say "exception 1 went through frames A, B, C, D, and independently, exception 2 went through frames A', B, C, D" – it's more meaningful, and less cluttered, to say "exception 1 went through frame A, and exception 2 went through frame A', and then they met up and together they went through frames B, C, D". The way `__traceback__` data ends up distributed over the `MultiError` structure makes this structure really easy to extract.

Proposal for new design
Never collapse `MultiError`s. Example: if you do `await some_func()` then currently you get a `RuntimeError`; in this proposal, you'll instead get a `MultiError([MultiError([RuntimeError()])])`.

Get rid of `filter`, and replace it with a new primitive `split`. Given an exception and a predicate, `split` splits the exception into one half representing all the parts of the exception that match the predicate, and another half representing all the parts that don't match. Example:

The `split` operation always takes an exception type (or tuple of types) to match, just like an `except` clause. It should also take an optional arbitrary function predicate, like `match=lambda exc: ...`. If either `match` or `rest` is empty, it gets set to `None`. It's a classmethod rather than a regular method so that you can still use it in cases where you have an exception but don't know whether it's a `MultiError` or a regular exception, without having to check.

Catching `MultiError`s is still done with a context manager, like `with MultiError.catch(RuntimeError, handler)`. But now, `catch` takes a predicate + a handler (as opposed to `filter`, which combines these into one thing), uses the predicate to `split` any caught error, and then if there is a match it calls the handler exactly once, passing it the matched object.

Also, support `async with MultiError.acatch(...)` so you can write async handlers.

Limitations of the current design, and how this fixes them
A "nice" thing about collapsing out
MultiError
s is that most of the time, when only one thing goes wrong, you get a nice regular exception and don't need to think about thisMultiError
stuff. I say "nice", but really this is... bad. When you write error handling code, you want to be prepared for everything that could happen, and this design makes it very easy to forget thatMultiError
is a possibility, and hard to figure out whereMultiError
handling is actually required. If the language made handlingMultiError
s more natural/ergonomic, this might not be as big an issue, but that's just not how Python works. So Yury is strongly against the collapsing design, and he has a point.Basically, seeing
MultiError([RuntimeError()])
tells you "ah, this time it was a single exception, but it could have been multiple exceptions, so I'd better be prepared to handle that".This also has the nice effect that it becomes much easier to teach people about
MultiError
, because it shows up front-and-center the first time you have an error inside a nursery.One of my original motivations for collapsing was that
trio.run
has a hidden nursery (the "system nursery") that the main task runs inside, and if you dotrio.run(main)
andmain
raisesRuntimeError
, I wantedtrio.run
to raiseRuntimeError
as well, notMultiError([RuntimeError()])
. But actually this is OK, because the way things have worked out, we never raise errors through the system nursery anyway: either we re-raise whatevermain
raised, or we raiseTrioInternalError
. So my concern was unfounded.Collapsing makes traceback handling more complicated and fragile
Collapsing also makes the traceback handling code substantially more complicated. When `filter` simplifies a `MultiError` tree by removing intermediate nodes, it has to preserve the traceback data those nodes held, which it does by patching it into the remaining exceptions. (In our example above, if exception 2 gets caught, then we patch exception 1's `__traceback__` so that it shows frames A, B, C, D after all.) This all works, but it makes the implementation much more complex. If we don't collapse, then we can throw away all the traceback patching code: the tracebacks can just continue to live on whichever object they started out on.

Collapsing also means that `filter` is a destructive operation: it has to mutate the underlying exception objects' `__traceback__` attributes in place, so you can't like, speculatively run a `filter` and then change your mind and go back to using the original `MultiError`. That object still exists but after the `filter` operation it's now in an inconsistent state. Fine if you're careful, but it'd be nicer if users didn't have to be careful. If we don't collapse, then this isn't an issue: `split` doesn't have to mutate its input (and neither would `filter`, if we were still using `filter`).

Collapsing loses `__context__` for intermediate `MultiError` nodes

Currently, Trio basically ignores the `__context__` and `__cause__` attributes on `MultiError` objects. They don't get assigned any semantics, they get freely discarded when collapsing, and they often end up containing garbage data. (In particular, if you catch a `MultiError`, handle part of it, and re-raise the remainder... the interpreter doesn't know that this is semantically a "re-raise", and insists on sticking the old `MultiError` object onto the new one's `__context__`. We have countermeasures, but it's all super annoying and messy.)

It turns out though that we do actually have a good use for `__context__` on `MultiError`s. It's super not obvious, but consider this scenario: you have two tasks, A and B, executing concurrently in the same nursery. They both crash. But! Task A's exception propagates faster, and reaches the nursery first. So the nursery sees this, and cancels task B. Meanwhile, task B has blocked somewhere – maybe it's trying to send a graceful shutdown message from a `finally:` block or something. The cancellation interrupts this, so now task B has a `Cancelled` exception propagating, and that exception's `__context__` is set to the original exception in task B. Eventually, the `Cancelled` exception reaches the nursery, which catches it. What happens to task B's original exception?
In the new design, we should declare that a
MultiError
object's__context__
held any exceptions that were preempted by the creation of thatMultiError
, i.e., by the nursery getting cancelled. We'd basically just look at theCancelled
objects, and move their__context__
attributes onto theMultiError
that the nursery was raising. But this only works if we avoid collapsing.It would be nice if tracebacks could show where exceptions jumped across task boundaries
This has been on our todo list forever. It'd be nice if we could like... annotate tracebacks somehow?
If we stopped collapsing
MultiError
s, then there's a natural place to put this information: eachMultiError
corresponds to a jump across task boundaries, so we can put it in the exception string or something. (Oh yeah, maybe we should switchMultiError
s to having associated message strings? Currently they don't have that.)Filtering is just an annoying abstraction to use
If I wanted to express exception catching using a weird flat-map-ish thing, I'd be writing haskell. In Python it's awkward and unidiomatic. But with
filter
, it's necessary, because you could have any number of exceptions you need to call it on.With
split
, there's always exactly 2 outputs, so you can perform basicMultiError
manipulations in straight-line code without callbacks.Tracebacks make filtering even more annoying to use than it would otherwise be
When
filter
maps a function over aMultiError
tree, the exceptions passed in are not really complete, standalone exceptions: they only have partial tracebacks attached. So you have to handle them carefully. You can't raise or catch them – if you did, the interpreter would start inserting new tracebacks and make a mess of things.You might think it was natural to write a filter function using a generator, like how
@contextmanager
works:But this can't work, because the tracebacks would get all corrupted. Instead, handlers take exceptions are arguments, and return either that exception object, or a new exception object (like
MyLibraryError
).If a handler function does raise an exception (e.g., b/c of a typo), then there's no good way to cope with that. Currently Trio's
MultiError
code doesn't even try to handle this.In the proposed design, all of these issues go away. The exceptions returned by
split
are always complete and self-contained. Probably forMultiError.catch
we still will pass in the exception as an argument instead of using a generator and.throw
ing it – the only advantage of the.throw
is that it lets you use anexcept
block to say which exceptions you want to catch, and with the newMultiError.catch
we've already done that before we call the handler. But we can totally allowraise
as a way to replace the exception, or handle accidental exceptions. (See the code below for details.)Async catching
Currently we don't have an async version of `filter` or `catch` (i.e., one where the user-specified handler can be async). Partly this is because when I was first implementing this I hit an interpreter bug that made it not work, but it's also because `filter`'s implementation is extremely complicated and maintaining two copies makes it that much worse.

With the new design, there's no need for async `split`, and I think the new `catch` logic makes supporting both sync and async easy (see below).

Details
Notes:

As noted in comments, `__context__` and `__traceback__` handling is super finicky and has subtle bugs. Interpreter help would be very... helpful.

Notice that all the logic around the `logger.exception` call is always synchronous and can be factored into a context manager, so we can do something like:

Other notes
Subclassing

Do we want to support subclassing of `MultiError`, like `class NurseryError(MultiError)`? Should we encourage it? If so, we need to think about handling subclasses when cloning `MultiError`s in `.split`.

I think we should not support this though. Because, we don't have, and don't want, a way to distinguish between a `MultiError([MultiError([...])])` and a `MultiError([NurseryError([...])])` – we preserve the structure, it contains metadata, but still it's structural. `split` and `catch` still only allow you to address the leaf nodes. And that's important, because if we made it easy to match on structure, then people would do things like try to catch a `MultiError([MultiError([RuntimeError()])])`, when what they should be doing is trying to catch one-or-more-`RuntimeError`s. The point of keeping the `MultiError`s around instead of collapsing is to push you to handle this case, not continue to hard-code assumptions about there being only a single error.

Naming
`MultiError` isn't bad, but might as well consider other options while we're redoing everything. `AggregateError`? `CombinedError`? `NurseryError`?

Relevant open issues
#408, #285, #204, #56, python-trio/pytest-trio#30