
Curl: Add methods to parse response #11252

Merged
samford merged 5 commits into Homebrew:master from samford:curl-add-response-parsing-methods on Apr 21, 2022

Conversation

samford
Member

@samford samford commented Apr 26, 2021

  • Have you followed the guidelines in our Contributing document?
  • Have you checked to ensure there aren't other open Pull Requests for the same change?
  • Have you added an explanation of what your changes do and why you'd like us to include them?
  • Have you written new tests for your changes? Here's an example.
  • Have you successfully run brew style with your changes locally?
  • Have you successfully run brew typecheck with your changes locally?
  • Have you successfully run brew tests with your changes locally?

Overview

This work was originally part of #9535 and then #10834 (a recreation of the former PR), but I've refactored the methods here to make them a bit more general-purpose. While working on migrating livecheck to curl, it became apparent that there are a few areas in brew where similar code exists to parse curl output.

The intention of this PR is to provide methods for working with curl output in a standardized way, so we can avoid unnecessary code duplication. Notable changes are as follows:

  • Adds parse_curl_output and parse_curl_head methods to parse curl output (a string containing response head(s) and/or the final response body). The code in these methods is partly taken from existing code in brew, as the aim is to replace existing code with these methods. The main difference is that these methods parse response heads into hashes and organize the data in a specific fashion (so we're not reinventing the wheel in multiple locations).
  • Adds curl_response_status_code and curl_response_last_location methods that take the parsed output from the aforementioned methods and extract specific information (the status code of the last response and the last location header, respectively).
  • Expands Utils::Curl tests to cover the new methods.

Example

If we have curl output like:

HTTP/2 301 \r
date: Mon, 26 Apr 2021 15:06:34 GMT\r
content-type: text/plain; charset=utf-8\r
content-length: 21\r
location: https://example.com/\r
\r
HTTP/2 200 \r
content-encoding: gzip\r
cache-control: max-age=604800\r
content-type: text/html; charset=UTF-8\r
date: Mon, 26 Apr 2021 15:09:15 GMT\r
last-modified: Mon, 26 Apr 2021 12:34:56 GMT\r
content-length: 169\r
\r
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Example</title>
  </head>
  <body>
    <h1>Example</h1>
    <p>Hello, world!</p>
  </body>
</html>

parse_curl_output will produce a hash like:

{
  heads: [
    {
      status_code: "301",
      headers: {
        "date"           => "Mon, 26 Apr 2021 15:06:34 GMT",
        "content-type"   => "text/plain; charset=utf-8",
        "content-length" => "21",
        "location"       => "https://example.com/"
      }
    },
    {
      status_code: "200",
      headers: {
        "content-encoding" => "gzip",
        "cache-control"    => "max-age=604800",
        "content-type"     => "text/html; charset=UTF-8",
        "date"             => "Mon, 26 Apr 2021 15:09:15 GMT",
        "last-modified"    => "Mon, 26 Apr 2021 12:34:56 GMT",
        "content-length"   =>  "169"
      }
    }
  ],
  body: "<!DOCTYPE html>\n<html>\n  <head>\n    <meta charset=\"utf-8\">\n    <title>Example</title>\n  </head>\n  <body>\n    <h1>Example</h1>\n    <p>Hello, world!</p>\n  </body>\n</html>\n"
}

The response heads are parsed using parse_curl_head, which produces a hash containing the status code and the headers (as a hash with header names as keys).

If we pass the heads array (the :heads value from the parse_curl_output hash) to curl_response_status_code, it returns "200". If we do the same with curl_response_last_location, it returns "https://example.com/".
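
For illustration, a minimal usage sketch (assuming output holds the raw curl text shown above; the local variable name is just a placeholder):

# `output` is the raw curl text above: response head(s) followed by the body.
parsed = parse_curl_output(output)

curl_response_status_code(parsed[:heads])   #=> "200"
curl_response_last_location(parsed[:heads]) #=> "https://example.com/"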

Conclusion

This PR only focuses on adding these methods and I would appreciate feedback on the shape of this. I'll add comments in places that I think may benefit from discussion.

If/when this is merged, I'll create a follow-up PR that replaces existing code with these methods (as it was suggested that it may be better to keep this separate). I'll be rebasing #10834 onto this shortly and I'll also update that PR to account for any changes that we make here in review.

@BrewTestBot
Member

Review period will end on 2021-04-27 at 16:48:56 UTC.

@BrewTestBot BrewTestBot added the `waiting for feedback` label (Merging is blocked until sufficient time has passed for review) Apr 26, 2021
@BrewTestBot BrewTestBot removed the `waiting for feedback` label (Merging is blocked until sufficient time has passed for review) Apr 27, 2021
@BrewTestBot
Member

Review period ended.

BrewTestBot previously approved these changes Apr 27, 2021
@MikeMcQuaid MikeMcQuaid (Member) left a comment

I'd like to see some internal usage of these methods before we add them. That will help inform the APIs and review here.

@MikeMcQuaid
Member

This PR only focuses on adding these methods and I would appreciate feedback on the shape of this. I'll add comments in places that I think may benefit from discussion.

If/when this is merged, I'll create a follow-up PR that replaces existing code with these methods (as it was suggested that it may be better to keep this separate). I'll be rebasing #10834 onto this shortly and I'll also update that PR to account for any changes that we make here in review.

I'll need to see the planned usage of these methods before I can give decent review. I don't think it's worth designing them divorced from their planned usage. If that means #10834 is the demonstration of this: ok but please let's try and do what we can to get that merged ASAP, thanks.

BrewTestBot previously approved these changes May 4, 2021
@samford
Member Author

samford commented May 4, 2021

I'll need to see the planned usage of these methods before I can give decent review. I don't think it's worth designing them divorced from their planned usage.

I have a separate branch [based on this one] containing the commits that adopt the methods from this PR, and I've merged it into this branch so you can see what code these methods are intended to replace and how they work in practice. At the moment, this branch is based on #10834, so I could properly include the Strategy changes (the last commit in this series).

If we still want to handle those changes separately (per #10834 (review)), I can always pare this back to the original commit before we consider merging.

@MikeMcQuaid MikeMcQuaid (Member) left a comment

Hope the comments are helpful, nice work so far.

Library/Homebrew/cask/audit.rb (outdated; resolved)
Library/Homebrew/download_strategy.rb (outdated; resolved)
@@ -303,6 +308,85 @@ def curl_http_content_headers_and_checksum(url, specs: {}, hash_needed: false, u
def http_status_ok?(status)
  (100..299).cover?(status.to_i)
end

# Separates the output text from `curl` into an array of response heads and
Member

What is a response "head"? Headers?

Member

@samford said:

A "head" here is the status line and header lines for one HTTP response (i.e., not the body). The heads array can contain head hashes from multiple responses. [Examples of the head hash format can be found in curl_spec.rb.]

For a response with no redirections, the curl output can consist of a head (the status line and headers) and/or a body (the content). When there are redirections, the curl output would contain heads for multiple responses. The head(s) and the body are separated by \r\n\r\n and that's what we use to #partition the output.
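
For illustration, a rough sketch of that splitting idea (not the exact Homebrew implementation; the method name here is only a placeholder):

# Split raw curl output into response heads (stacked up by redirections)
# followed by the final body.
def split_curl_output(output)
  heads = []
  remainder = output
  # Each head ends with a blank line (\r\n\r\n) before the next head or the body.
  while remainder.match?(%r{\AHTTP/})
    head, _separator, remainder = remainder.partition("\r\n\r\n")
    heads << head
  end
  { heads: heads, body: remainder }
end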

Member

I think heads could get a better name here, it's not obvious what it means from this alone. Perhaps incorporate some of the above into the comment?

Library/Homebrew/utils/curl.rb (outdated; resolved)
# body output, if found.
def parse_curl_output(output)
  heads = []
  return { heads: heads, body: "" } unless output.is_a?(String)
Member

When is the output not a string and what else can it be?

Member

@samford said:

I'm not sure it would ever be anything other than a string but my original thinking was that this would allow us to gracefully handle bad input and avoid an error (since output needs to be a string for output.lstrip, output.match(...), and output.include?(...) to work).

Looking at a later comment in the previous review, it does feel more appropriate to use a Sorbet type signature instead of this explicit guard, so I'll update this accordingly.

Member

Agreed, let's lean into Sorbet here instead of sniffing types. I also think this could probably raise rather than silently returning no headers/body.
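
For reference, a hedged sketch of what such a Sorbet signature might look like (the module name and return shape are illustrative, not the final API):

require "sorbet-runtime"

module CurlOutputParsing
  extend T::Sig

  # With a String parameter type, sorbet-runtime raises on unexpected input,
  # so the explicit `output.is_a?(String)` guard becomes unnecessary.
  sig { params(output: String).returns(T::Hash[Symbol, T.untyped]) }
  def parse_curl_output(output)
    heads = []
    # ...parse the response heads and body here...
    { heads: heads, body: "" }
  end
end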

Library/Homebrew/utils/curl.rb (4 outdated, resolved review threads)
# @param head_text [String] The head text of a `curl` response.
# @return [Hash] A hash containing the status information and headers
# (as a hash with header names as keys).
def parse_curl_head(head_text)
Member

This seems to only be used once so I'd suggest it may be better moved from utils.

@samford samford added the `do not merge` and `in progress` (Maintainers are working on this) labels May 22, 2021
@samford samford force-pushed the curl-add-response-parsing-methods branch from ae57da3 to 6970fe0 on May 27, 2021 17:58

@samford samford force-pushed the curl-add-response-parsing-methods branch from 6970fe0 to d9c56da on December 27, 2021 15:18
@MikeMcQuaid
Member

@samford What's the latest here? Thanks ❤️

@samford
Member Author

samford commented Apr 4, 2022

I remembered this PR last week and had been meaning to come back to it, so thanks for the reminder.

I had previously put this on hold until I could look into the Typhoeus (or Ethon) gem but that approach may not end up being feasible. When I originally brought it up, you suggested it could be a possibility if it doesn't build native extensions. It initially looked promising because it doesn't build native extensions on macOS but I later discovered it does on Linux. Namely, typhoeus depends on ethon which depends on ffi, which builds native extensions on Linux.

There may be some way around it, but I think it makes sense to move this PR forward in the interim. Regardless of how we approach curl in the long term, this PR will clean up some duplicated code for the time being.

I've incorporated some of the suggested changes locally and I'll update this branch after I've responded to the unresolved comments.

@samford
Member Author

samford commented Apr 4, 2022

I receive a `Could not resolve to a node with the global id of 'MDE3OlB1bGxSZXF1ZXN0UmV2aWV3'` error when trying to reply to the open comments, so I'm going to have to respond to them separately. Pardon the relative inconvenience.


-       return if !appcast_contents || appcast_contents.include?(adjusted_version_stanza)
+       return if appcast_contents&.include?(adjusted_version_stanza)

Unfortunately, using the safe navigation operator wouldn't achieve the same goal. If appcast_contents is nil, the safe navigation operator would evaluate to nil, the condition won't be true, and #check_appcast_contains_version won't return early. As a result, the subsequent error message about the appcast not containing the version number will apply, which doesn't make sense if we weren't able to check the appcast content.

As one example, this would occur if the curl request in #curl_http_content_headers_and_checksum exceeds the timeout, as details[:file] would be nil in the return hash.

It may be better to handle a falsy appcast_contents value with a different error message before this point (e.g., "appcast at URL '...' could not be retrieved"), making it clear that we didn't get the appcast content. This is arguably better than silently failing, which would erroneously suggest the appcast was successfully retrieved and contained the version.

If we want to go that route, we could avoid duplicating the rescue error by simply setting appcast_contents = nil in the rescue block and then handle the error and early return outside of it. This would handle both types of failures equally.
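
To illustrate the difference being described, a small self-contained sketch (the variable names mirror the audit code but the values are made up):

adjusted_version_stanza = "1.2.3"

[nil, "version 1.2.3", "version 9.9.9"].each do |appcast_contents|
  old_guard = !appcast_contents || appcast_contents.include?(adjusted_version_stanza)
  new_guard = appcast_contents&.include?(adjusted_version_stanza)

  # With nil contents, the old guard is true (early return), but the
  # safe-navigation form evaluates to nil, so the audit would fall through
  # to the misleading "appcast does not contain version" error.
  puts "contents=#{appcast_contents.inspect} old=#{old_guard} new=#{!!new_guard}"
end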


What is a response "head"? Headers?

A "head" here is the status line and header lines for one HTTP response (i.e., not the body). The heads array can contain head hashes from multiple responses. [Examples of the head hash format can be found in curl_spec.rb.]

For a response with no redirections, the curl output can consist of a head (the status line and headers) and/or a body (the content). When there are redirections, the curl output would contain heads for multiple responses. The head(s) and the body are separated by \r\n\r\n and that's what we use to #partition the output.


When is the output [in #parse_curl_output] not a string and what else can it be?

I'm not sure it would ever be anything other than a string but my original thinking was that this would allow us to gracefully handle bad input and avoid an error (since output needs to be a string for output.lstrip, output.match(...), and output.include?(...) to work).

Looking at a later comment in the previous review, it does feel more appropriate to use a Sorbet type signature instead of this explicit guard, so I'll update this accordingly.


When is head_text not a String and what else can it be? Duck typing and Sorbet feel like they may help avoid these manual type checks.

Same as above, I don't imagine this would ever be anything other than a string and I'll replace the !head_text.is_a?(String) condition with a Sorbet type signature as well.


[#parse_curl_head] seems to only be used once so I'd suggest it may be better moved from utils.

It's only used in #parse_curl_output, so it could technically be inlined but I opted for a separate #parse_curl_head method to make the former easier to read. I've made #parse_curl_head private for the moment (it wasn't my intention for it to be offered as a public method unless/until it's needed) but let me know if you had something different in mind.
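
For illustration, a generic sketch of that arrangement (the real Utils::Curl layout may differ; the module name is a placeholder):

module CurlParsing
  def parse_curl_output(output)
    # ...splits the output and delegates each head to parse_curl_head...
  end

  def parse_curl_head(head_text)
    # ...parses one status line plus its header lines...
  end
  private :parse_curl_head # readable as a separate method, but not public API
end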

@samford samford force-pushed the curl-add-response-parsing-methods branch from d9c56da to 0bd6e11 on April 4, 2022 22:09
@MikeMcQuaid MikeMcQuaid (Member) left a comment

Looking good, nice work @samford! General themes:

  • prefer .blank? and .present? when we're dealing with strings to handle the empty string case more nicely
  • avoid sniffing types. Instead, rely on Sorbet and on exceptions being raised on unexpected data.

Library/Homebrew/cask/audit.rb (outdated; resolved)
Library/Homebrew/download_strategy.rb (resolved)
Library/Homebrew/livecheck/strategy.rb (outdated; resolved)
Library/Homebrew/utils/curl.rb (3 outdated, resolved review threads)
@samford samford force-pushed the curl-add-response-parsing-methods branch 2 times, most recently from 8f9d6f9 to 670cb4e on April 19, 2022 05:26
@MikeMcQuaid
Member

I agree that heads isn't a terribly evocative or recognizable name. responses may be more understandable, as long as users understand that the :responses hashes from #parse_curl_output don't include body content (i.e., :body is implicitly the body from the final response).

responses_headers or something?

@MikeMcQuaid MikeMcQuaid (Member) left a comment

Looks good to me now! Thanks for all the work here and sorry for the back and forth. Good to 🚢 when you're happy.

Library/Homebrew/livecheck/strategy.rb (outdated; resolved)
Library/Homebrew/utils/curl.rb (outdated; resolved)
@samford samford force-pushed the curl-add-response-parsing-methods branch from 88ce5f5 to 9e37a03 on April 19, 2022 16:13
@samford samford removed the `do not merge` and `in progress` (Maintainers are working on this) labels Apr 19, 2022
parsed_output = parse_curl_output(range_stdout)

headers = if parsed_output[:responses].present?
  parsed_output[:responses].first[:headers]
Member Author

This is one area that may benefit from some discussion. range_stdout.split("\r\n\r\n").first seems to be intended to only parse the headers of the first response and ignore any others (i.e., #parse_headers was only able to handle one response), so I simply replicated the existing behavior in the new code.

However, this may not be appropriate for a URL that redirects, where responses before the final response would be redirections. Since the behavior in this method is determined by the presence/value of accept-ranges and content-length headers, any header differences between the first and last responses could lead to unintended behavior.

When we're downloading something, I imagine we would be primarily interested in the accept-ranges and content-length headers from the last response (after any redirections). If that makes sense, it's as simple as using parsed_output[:responses].last[:headers] instead of #first.

Alternatively, we could merge the headers from all responses into one, where headers from later responses would overwrite headers from earlier responses. livecheck's HeaderMatch strategy does this internally (to simplify working with headers) and similar code here could look like: parsed_output[:responses].collect { |res| res[:headers] }.reduce(&:merge). It would only be useful if the last response is missing headers that earlier responses include, which may not be applicable in this specific context.

I think #last is probably appropriate/sufficient here but I'll have to test this idea. I figured I would mention it in the interim, to get your thoughts on this.
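
For illustration, a small sketch of the three options being weighed (using the :responses shape from #parse_curl_output; the variable names are placeholders):

responses = parse_curl_output(range_stdout)[:responses]

if responses.present?
  responses.first[:headers]  # previous behavior: headers of the first response
  responses.last[:headers]   # headers of the final response, after redirections
  # Merge all headers, letting later responses overwrite earlier ones:
  responses.collect { |res| res[:headers] }.reduce(&:merge)
end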

Member

This is one area that may benefit from some discussion. range_stdout.split("\r\n\r\n").first seems to be intended to only parse the headers of the first response and ignore any others (i.e., #parse_headers was only able to handle one response), so I simply replicated the existing behavior in the new code.

Given that: can we merge this and then discuss in a future PR? I'd really like to just get this one merged out and then be able to discuss/review smaller changes.

Member Author

Works for me. I'll create a follow-up PR after I merge this in the morning (EDT).

Member Author

Shoot, I forgot that I had committed the #last change when I was testing it yesterday, so it was merged with the other changes earlier. I'll open a follow-up PR that would revert this one change and explain the reasoning behind why #last is the correct behavior in this context. If we agree on #last, then we can simply close the PR without merging but it will be there if we need to revert this change (highly unlikely, based on how #curl_download is used).

Sorry about this. I intended to incorporate this change in a follow-up PR but it was so small that I missed it in my final review.

@samford samford force-pushed the curl-add-response-parsing-methods branch from 9e37a03 to 1c4faaa on April 21, 2022 04:09
@samford
Member Author

samford commented Apr 21, 2022

I did some manual testing over the past couple of days and didn't encounter any related failures. I looked over these changes another time and the only thing that stuck out was that I forgot to commit the aforementioned `break` in Strategy#page_headers. I've added it in this latest push and this should be good to go at this point.

I'm going to bed over here and I'll merge this in the morning (EDT), so I'll be around if anything starts smoking 😆

@samford
Member Author

samford commented Apr 21, 2022

Merging now. Thanks for your patience and all the review along the way, @MikeMcQuaid!

@samford samford merged commit 92e4a5e into Homebrew:master Apr 21, 2022
@samford samford deleted the curl-add-response-parsing-methods branch April 21, 2022 14:26
@MikeMcQuaid
Member

Great work, thanks @samford!

samford added a commit to samford/brew that referenced this pull request Apr 21, 2022
Before Homebrew#11252 was merged, #curl_download used the headers
from the first response to check for `accept-ranges` and
`content-length` headers. This behavior was incorrect and it should
have been checking the last response headers, as earlier responses
would be redirections that may omit the `accept-ranges` header and/or
use a different `content-length` value than the final response.

I intended to maintain the "first" behavior in Homebrew#11252 and switch to
`#last` in a follow-up PR but I accidentally incorporated this change
into the aforementioned PR. This commit should not be merged and it's
simply for the sake of explaining this change after-the-fact.
@bevanjkay
Member

bevanjkay commented Apr 26, 2022

Hey @samford, there's something in this PR that is causing some audits to fail on the cask side of things. From what I can see, I've narrowed it down to this PR.

Here are a couple of examples:
Homebrew/homebrew-cask#122377
Homebrew/homebrew-cask#122490

The files are fetched correctly; however, the URL is still failing the audit:

audit for opencore-configurator: failed
 - The binary URL https://mackie100projects.altervista.org/apps/opencoreconf/download-new-build.php?version=last is not reachable
Error: 1 problem in 1 cask detected

I may be able to do some more digging tonight - but not for a few hours.

@samford
Member Author

samford commented Apr 26, 2022

@bevanjkay I wasn't able to replicate this locally with the opencore-configurator changes but I'm able to replicate it with the visual-paradigm PR. What I'm seeing is that curl fails on the binary URL with error 28, meaning the curl_output call is timing out after the 25-second max_time limit. This issue manifests for visual-paradigm because the dmg is ~750 MB. Even if the file was successfully fetched before (i.e., cached locally), it's downloaded again and times out.

This issue was introduced in the Curl: Update to use response parsing methods commit because it changed the behavior of #curl_http_content_headers_and_checksum so that it only parses the curl output on status.success?. If I'm understanding correctly, the previous behavior may have been parsing the intermediate responses (redirection(s), in this case) but not the final response that was timing out. The behavior introduced in this PR doesn't return information from any responses because I moved the response parsing logic into the if status.success? block.

I tested this and it should be as simple as moving the parsing logic back to its previous location, so it isn't guarded by status.success? anymore. This reinstates the old behavior and allows the audit to pass in this particular scenario. I'll have to look into this further tomorrow (it's a weird situation) but I'll open a PR for the fix in a moment.
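
For illustration, roughly the shape of that fix (a hypothetical sketch, not the actual body of #curl_http_content_headers_and_checksum):

require "digest"

def content_headers_and_checksum(output, status)
  # Parse unconditionally, so headers from intermediate (redirect) responses
  # are still available when curl exits non-zero, e.g. error 28 on a timeout.
  parsed_output = parse_curl_output(output)

  checksum = nil
  if status.success?
    # Only work that genuinely needs a complete, successful transfer stays
    # behind the guard (the checksum here is purely illustrative).
    checksum = Digest::SHA256.hexdigest(parsed_output[:body])
  end

  { responses: parsed_output[:responses], checksum: checksum }
end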

@Bo98
Member

Bo98 commented Apr 26, 2022

How do we best handle third party taps which may be hit by the new redirection limit? Just tell them to change their formula?

https://github.com/orgs/Homebrew/discussions/3215

Not sure if proxies are having a further effect on this or not.

I suppose another question would be: any reason for 5 specifically over something more conservative?

@samford
Member Author

samford commented Apr 26, 2022

How do we best handle third party taps which may be hit by the new redirection limit?

This shouldn't be something that anyone needs to think about, so I think we should just increase the default max_iterations value to something that's really unlikely to be reached in practice. max_iterations should almost never come into play (i.e., it's not intended to restrict redirections, as that should be done using curl's --max-redirs option) but the current default is low enough that it has sometimes been reached under normal circumstances. I think something like 25 is probably fine, where it's high enough to only come into play if something weird is going on but low enough that we won't have to go through a bunch of needless iterations if/when it does. I'll open a PR for this in a moment (edit: #13202).

Long term, we may want to figure out a programmatic way of setting the limit for the while loop (e.g., count the number of instances of \r\n\r\n and/or HTTP status lines in the output), so we don't have to deal with a fixed maximum. If we can accomplish that, it should obviate the need to manually control the max_iterations value (like we're currently doing in Livecheck::Strategy).
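
For example, a hedged sketch of that programmatic limit (illustrative only; not existing Homebrew code):

# Derive the loop bound from the output itself instead of a fixed maximum:
# one iteration per head/body separator or HTTP status line, plus some slack.
def iteration_limit(output)
  separators = output.scan("\r\n\r\n").count
  status_lines = output.scan(%r{^HTTP/[\d.]+ \d+}).count
  [separators, status_lines].max + 1
end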

I suppose another question would be: any reason for 5 specifically over something more conservative?

5 was just an arbitrary number and there's nothing really special about it. If I remember correctly, our intention was to start low and increase it if we start seeing the related error under normal circumstances. Originally the code in #parse_curl_output lived in the Livecheck::Strategy#page_content method, so it wasn't a huge issue if a check failed for this reason (I typically resolved the error when I saw it by updating URLs). However, now this code is used in important methods like #curl_download, so it's a real problem if it prevents a user from installing a formula/cask (as in the linked discussion).

https://github.com/orgs/Homebrew/discussions/3215

Not sure if proxies are having a further effect on this or not.

I wasn't able to replicate it locally when installing the virt-manager formula from the third-party jeffreywildman/virt-manager tap (my guess at the source of this). If the higher max_iterations default doesn't resolve the issue, it may be something on their end that they need to sort out.

@MikeMcQuaid
Member

This shouldn't be something that anyone needs to think about, so I think we should just increase the default max_iterations value to something that's really unlikely to be reached in practice. max_iterations should almost never come into play (i.e., it's not intended to restrict redirections, as that should be done using curl's --max-redirs option) but the current default is low enough that it has sometimes been reached under normal circumstances.

👍🏻 Fine with me to bump this limit to whatever is necessary.

@Bo98
Member

Bo98 commented Apr 27, 2022

Bumped to 25 in #13202 and tagged in 3.4.9.

@github-actions github-actions bot added the `outdated` label (PR was locked due to age) May 28, 2022
@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 28, 2022