Don't passthrough_errors unless instructed. #2006

Merged
merged 1 commit into pallets:master from untitaker:issue2005 on Sep 6, 2016

Conversation

@untitaker (Member) commented Sep 6, 2016

Fix #2005

Revert #1679 and #1996



@untitaker (Member, Author) commented Sep 6, 2016

@miguelgrinberg

@miguelgrinberg (Member) commented Sep 6, 2016

If you merge this, we are back at square one: the passthrough errors option for Flask will be broken again. Flask will pass through errors all right, but then Werkzeug will block them.

I see now how the change in cli.py is a problem, but the change in app.py is perfectly fine and can/should stay. Why is there a need to undo that one as well?

As for the cli.py change, would it make sense to adapt that change to only set passthrough errors when the app is loaded eagerly? That seems doable and will only leave passthrough errors incorrectly set when the reloader is used, and that to me is fine, since the reloader and a debugger are normally not used together.
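The behavior under dispute can be sketched roughly as follows. This is a hypothetical simplification, not Flask's actual source (`serve` and its defaults are illustrative): Flask forwards server options to Werkzeug's `run_simple`, and the question is whether `passthrough_errors` should stay `False` unless the caller explicitly requests it, which is the behavior this PR restores.

```python
# Hypothetical sketch of option forwarding, NOT Flask's actual code.
# The PR's position: passthrough_errors stays False unless explicitly
# requested; the reverted changes forced it on in some configurations.
def serve(app, debug=False, **options):
    """Collect the options that would be forwarded to Werkzeug's
    run_simple(); setdefault means an explicit caller value wins."""
    options.setdefault("use_reloader", debug)
    options.setdefault("use_debugger", debug)
    options.setdefault("passthrough_errors", False)
    return options
```

With this shape, `serve(app, debug=True)` leaves `passthrough_errors` off, while `serve(app, passthrough_errors=True)` is honored, so attaching an external debugger remains possible on request.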

@untitaker (Member, Author) commented Sep 6, 2016

IMO it's problematic to have two different behaviors for the Flask command and app.run. They should behave the same as far as possible.


@miguelgrinberg (Member) commented Sep 6, 2016

Uniform behavior is good, but in this case you are uniformly applying the wrong behavior, so it isn't a great solution IMHO.

For starters, let's agree that using flask run with a debugger isn't that friendly: it's not even documented how to do it at this point, and the only debugger example in the documentation is still based on app.run.

I do not agree that it is a good idea to break passthrough errors for app.run just for the sake of uniformity. If we agree that the end goal is to have passthrough errors working in the correct way across the board, then I think what makes sense is to fix the cases that are easy to fix, and to leave bugs open for the cases that need more careful thinking.

I think we can leave app.run out of this discussion, since it is working 100% right at this time. For the flask run case, we can fix the eager loading case fairly easily, as I indicated above (I can write that fix if you want, and I'm happy to also throw in updated docs for debugging with flask run to sweeten the deal ;-). This addresses the cases in which debug mode and/or the reloader is off. These are the cases in which it makes more sense to use a debugger, and thus to also use passthrough errors.

What's left are the lazy loading cases, which in general are the cases where debug mode is enabled, or at least when the reloader is on. The usefulness of passthrough errors in these cases is debatable. I have never used passthrough errors in conjunction with Flask's debug mode myself. I think addressing passthrough errors in this case is much lower priority than the other cases, so I would leave an issue open for that. I'm happy to think about addressing this case as well.
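The eager-loading proposal above can be sketched as a one-line predicate. This is a hypothetical illustration, not actual cli.py code; the function name and parameters are made up for the example:

```python
# Hypothetical sketch of miguelgrinberg's proposal, NOT actual cli.py
# code: honor a requested passthrough_errors setting only when the app
# object is loaded eagerly; with lazy loading (the reloader case),
# leave it off rather than set it incorrectly.
def should_passthrough(requested, eager_loading):
    """Pass errors through only if requested AND the app loads eagerly."""
    return requested and eager_loading
```

Under this rule the reloader-driven lazy-loading path never gets passthrough errors, which matches the "reloader and debugger are normally not used together" reasoning earlier in the thread.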

@untitaker (Member, Author) commented Sep 6, 2016


Currently we already have a very complex web of dependencies for the default values of PROPAGATE_EXCEPTIONS (true if app.debug or app.testing), use_reloader (true if app.debug), use_debugger (true if app.debug), a few Jinja settings, and a few others I have already forgotten. Having those dependencies differ between the flask CLI and app.run is IMO out of the question, for the simple reason that the current interdependencies are hard enough to understand. At the very least, having them differ would require a better reason than "it's implementable on one side and not on the other".

The upside is that in the end this chain reaction of setting app.debug makes sense from a user's perspective, as the usual explanation is "turn on app.debug and everything will fall into place". Making passthrough_errors dependent on eager_loading, however, makes no sense from a user's perspective. There is no reason for that behavior other than "this is implementable".

I know that this PR breaks the use case of attaching a debugger, but I consider reverting all of the PRs the only agreeable option at the moment that doesn't break use cases that existed before your PR was merged. We will have to think about how to enable that use case again for the flask CLI, but I can't think of a good solution right now.
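The chain of defaults described above can be illustrated with a small sketch. This is hypothetical code, not Flask's implementation; the function and parameter names are made up for the example, but the fallback rules match the ones listed in the comment:

```python
# Hypothetical illustration of the default cascade described above,
# NOT Flask's actual source. Everything hangs off the debug flag:
def resolve_defaults(debug, testing=False, propagate_exceptions=None,
                     use_reloader=None, use_debugger=None):
    """PROPAGATE_EXCEPTIONS defaults to (debug or testing); the
    reloader and the interactive debugger both default to debug."""
    if propagate_exceptions is None:
        propagate_exceptions = debug or testing
    if use_reloader is None:
        use_reloader = debug
    if use_debugger is None:
        use_debugger = debug
    return propagate_exceptions, use_reloader, use_debugger
```

Turning on debug flips everything at once, which matches the "turn on app.debug and everything falls into place" explanation; hanging passthrough_errors off eager_loading would add a branch to this chain with no comparable story for users.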


@miguelgrinberg (Member) commented Sep 6, 2016

that doesn't break usecases that have existed before your PR got merged

@untitaker do you have a list of these? I'm not aware of any broken use cases outside of this whack-a-mole situation we are in now, where we fix one issue and a new one pops up. As far as I'm aware, the only problem with the current master code is #2005.

@untitaker (Member, Author) commented Sep 6, 2016

The whack-a-mole situation is exactly what I'm referring to.

untitaker force-pushed the untitaker:issue2005 branch from 077840b to 9c2e2e9 on Sep 6, 2016

@untitaker (Member, Author) commented Sep 6, 2016

If you don't mind, I'll merge this PR for now. If a solution for #1679 comes up, I'll be happy to merge it, but I don't want to leave master in a broken state.

@miguelgrinberg (Member) commented Sep 6, 2016

I would have preferred to leave the app.run side alone, since that works, but sure, if you think it is better to undo the whole thing instead of just that, then go for it.

untitaker merged commit c4ec695 into pallets:master on Sep 6, 2016

1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)

untitaker deleted the untitaker:issue2005 branch on Sep 6, 2016

untitaker removed the in progress label on Sep 6, 2016

untitaker added a commit that referenced this pull request Sep 7, 2016

@jeffwidman (Member) commented Sep 10, 2016

@untitaker @miguelgrinberg A thought: this issue contains a lot of thoughtful back and forth between two folks who deeply understand the underlying problem, but the discussion, disagreement, and decision all happened in a single day.

In general I'm in favor of being decisive and keeping things moving: we've got plenty of issues in the tracker where I wish a decision had just been made so we could move forward. However, this particular issue strikes me as one that might have been better to sit on for 2-3 days before making a final decision. You're both responsive and not going to abandon the issue, and you're clearly disagreeing without getting upset. That's fine, healthy even, for the community.

Historically, when I've disagreed about a solution with another engineer who was knowledgeable about the problem space, a night or two of sleep gave everyone's brains a little more time to ponder. Sometimes we were still at the same place and had to just decide, but other times we realized the problem constraints were slightly different than we'd originally thought, which made the better solution obvious. Not a big deal, just a thought for the future.

@untitaker (Member, Author) commented Sep 10, 2016

This is just reverting a PR that caused issues, not a final decision.

@miguelgrinberg (Member) commented Sep 10, 2016

@jeffwidman Thanks for writing this. @untitaker can correct me if I'm wrong, but I believe the action he took to address this issue was not a final decision. I see it more as an interim solution. His choice was to leave a known bug in the code rather than keep trying to fix it and risk causing other problems, which we all agree is a possibility, given how tricky this issue has proven to be. I would not have done what he did, but I can understand the choice, and as you say, I'm not upset. I think we both agree there is a problem that needs a fix. I am thinking about how to solve it in a safe way and will bring it here for review once I have it.

@untitaker (Member, Author) commented Sep 10, 2016

Yeah I think what @miguelgrinberg said is the consensus.

@arotman commented Mar 20, 2017

THANKS for fixing this issue. My Flask server had been crashing due to error: [Errno 32] Broken pipe; now there is just an exception and the server keeps running. Others suffered from this issue too. I very much appreciate this fix.
