
upgrade alias extra-deps #127

Merged: 6 commits into babashka:main on Oct 13, 2022
Conversation

russmatney
Contributor

As a follow-up to the new `neil dep upgrade` feature, this adds support for
upgrading deps in :aliases -> :extra-deps.

By default, alias deps are included in the basic `neil dep upgrade` usage. You
can limit an upgrade to a single alias by passing --alias <alias-to-upgrade>.
This can be combined with --lib <lib-name> if you need to target a lib in a
particular alias.

Alias upgrades can be ignored with a new --no-aliases flag, if you
want to upgrade just your :deps.
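
For example (the alias and lib names here are hypothetical):

```shell
# upgrade :deps plus every alias's :extra-deps (the default)
neil dep upgrade

# upgrade only the :extra-deps of the :dev alias
neil dep upgrade --alias dev

# upgrade a single lib within that alias
neil dep upgrade --alias dev --lib clj-kondo/clj-kondo

# skip aliases and upgrade only :deps
neil dep upgrade --no-aliases
```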

If the same lib is specified in two aliases, once as a :git/sha and once as a
:mvn/version, each dep keeps its coordinate type, and the latest is fetched for
each.
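
For example, given a deps.edn like this (the lib names, versions, and sha are placeholders), the :dev entry would stay on :mvn/version and the :ci entry on :git/sha after an upgrade:

```clojure
{:deps {org.clojure/clojure {:mvn/version "1.11.1"}}
 :aliases
 {:dev {:extra-deps {clj-kondo/clj-kondo {:mvn/version "2022.09.08"}}}
  :ci  {:extra-deps {clj-kondo/clj-kondo
                     {:git/url "https://github.com/clj-kondo/clj-kondo"
                      :git/sha "0000000000000000000000000000000000000000"}}}}}
```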

Let me know if we prefer different defaults, behavior, or flag names!


I hit the GitHub rate limit fairly quickly working on this - it looks like it's
only 60 unauthenticated requests per hour, and I ended up using the just-added
env vars and a personal access token to work around it. It might be worth
refactoring some of the tests I added in the last PR to reduce the number of
calls, or otherwise mocking the GitHub/curl hits.

- Includes :aliases <alias> :extra-deps in the deps upgraded by `neil dep upgrade`. Supports specifying a :lib, which can be in the normal :deps or an alias's :extra-deps map.
- `dep upgrade` now respects a passed --alias, which will limit the scope of your upgrade to that alias's extra-deps.
- opts->specified-deps seems to be a better separation of concerns, and is perhaps more reusable/extendable now.
@teodorlu
Contributor

> I hit the GitHub rate limit fairly quickly working on this - it looks like it's only 60 unauthenticated requests per hour, and I ended up using the just-added env vars and a personal access token to work around it. It might be worth refactoring some of the tests I added in the last PR to reduce the number of calls, or otherwise mocking the GitHub/curl hits.

Are you aware that you can run only the tests from a single namespace, instead of all tests?

See my comment here: #110 (comment)
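
For example, a single namespace can be run from a REPL with clojure.test (the namespace name below is just a guess at what neil's test ns might be called, and this may not be the exact method described in #110):

```clojure
(require '[clojure.test :as t])

;; load and run only the dep-upgrade tests
(require 'babashka.neil.dep-upgrade-test) ;; hypothetical namespace name
(t/run-tests 'babashka.neil.dep-upgrade-test)
```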

@borkdude borkdude merged commit ca073c4 into babashka:main Oct 13, 2022
@russmatney russmatney deleted the dep-upgrade-aliases branch October 13, 2022 11:41
@russmatney
Contributor Author

russmatney commented Oct 13, 2022

> Are you aware that you can run only the tests from a single namespace, instead of all tests?
>
> See my comment here: #110 (comment)

@teodorlu Yep, I was happy to find and use that method yesterday, really as a last fallback/sanity check. While debugging and developing this, I actually connected to a bb REPL so that I could eval the test code inline, which was a nice feedback loop (that's why I moved those test vars from the let to top-level defs, so you can eval pretty much every line independently).
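
A rough sketch of that let-to-top-level-def pattern (the namespace, vars, and values here are made up, not neil's actual test code):

```clojure
(ns dep-upgrade-repl-sketch
  (:require [clojure.test :refer [deftest is]]))

;; Top-level defs instead of a `let` inside the test body,
;; so each form can be evaluated on its own from a connected bb REPL.
(def test-lib 'clj-kondo/clj-kondo)
(def test-deps {:deps {test-lib {:mvn/version "2022.09.08"}}})

(deftest upgrade-fixture-sketch
  (is (contains? (:deps test-deps) test-lib)))
```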

I extended the upgrade tests to cover more cases and remove the --dry-run, which is nice coverage, but a lot of network traffic. In its current state those tests hit GitHub 6 times per run, and Maven/Clojars even more. I think the fetch-latest requests could be something that gets cached at the curl layer, at least on the dev side. (Do we have creds for CI, to avoid hitting the limit from GitHub Actions? Maybe those builds end up in a different rate-limit tier?)

I wonder if users will hit the rate limit in actual use - it wouldn't be typical to upgrade so many times per hour, but it will probably happen to someone. The error handling for it right now is a bit confusing - it eventually logs a 'remote not found for lib' println.

@teodorlu
Contributor

Nice!

> (Do we have creds for CI, to avoid hitting the limit from GitHub Actions? Maybe those builds end up in a different rate-limit tier?)

I think we do. At least I've leaned a lot on CI to run tests, and I haven't had any rate-limit errors there.

> I wonder if users will hit the rate limit in actual use - it wouldn't be typical to upgrade so many times per hour, but it will probably happen to someone. The error handling for it right now is a bit confusing - it eventually logs a 'remote not found for lib' println.

I see your point.

I don't think this has been a big problem yet. I'm also not sure what we can do about it. We probably don't want to ship out credentials to a user's computer. I guess we could create a proxy server for our use, but then we're left with the problem of how to authenticate against our proxy server.

Perhaps a problem we can leave until it annoys end users :)

@russmatney
Contributor Author

Yeah, agreed it's not really a problem at the moment. Yet! I couldn't stop thinking about it, so here's a little brain dump/suggestion about caching.

A local cache of request params -> results might be worthwhile. Caching introduces some complexity, which is not ideal, but it could also speed up neil's usage (and the tests) somewhat, by not incurring so much network traffic for repeated latest-checks.

My thought so far is to cache outgoing requests for an hour (a 60-minute TTL), so that after an hour new results would be fetched, but before then we assume the results will be the same. The cache key would be the endpoint plus params, whatever those are for each service. (I did something similar here for fetching Lichess games - note that that cache is in-memory only and has no TTL - in neil it would need to read from and write to a file somewhere and include a timestamp to compare against.)
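
A minimal sketch of what such a file-backed cache could look like (the cache location, function names, and stored data shape are all assumptions, not an actual implementation):

```clojure
(ns neil.cache-sketch
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]))

;; Hypothetical cache file and TTL.
(def cache-file (io/file (System/getProperty "user.home") ".cache" "neil" "http-cache.edn"))
(def ttl-ms (* 60 60 1000)) ;; 60 minutes

(defn- read-cache []
  (if (.exists cache-file)
    (edn/read-string (slurp cache-file))
    {}))

(defn- write-cache! [cache]
  (io/make-parents cache-file)
  (spit cache-file (pr-str cache)))

(defn cached-fetch
  "Looks up [endpoint params] in the cache. If the entry is younger than the TTL,
  returns the cached result; otherwise calls fetch-fn, stores the fresh result
  with a timestamp, and returns it."
  [endpoint params fetch-fn]
  (let [k [endpoint params]
        now (System/currentTimeMillis)
        {:keys [at result] :as entry} (get (read-cache) k)]
    (if (and entry (< (- now at) ttl-ms))
      result
      (let [fresh (fetch-fn endpoint params)]
        (write-cache! (assoc (read-cache) k {:at now :result fresh}))
        fresh))))
```

Here fetch-fn would stand in for whatever currently hits GitHub/Maven/Clojars for the latest coordinate, so callers would only change by going through the cache.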

Maybe this kind of behavior could be built into babashka/curl itself? Or it's already part of curl? I can do some digging.

We'd want the UX to make it obvious things are cached (maybe an extra `:cached true` or something in each upgrade output), and make it very easy to clear the cache (`neil cache clear` or hopefully something better).

Preemptively busting the cache just gets us back to the same behavior, so we don't need to be too shy about doing that for users on purpose, or about suggesting a cache clear in neil's help output if something is still in there.

Anyway, just some thoughts - if there's interest I can take a shot at a proposal PR.

@rads
Collaborator

rads commented Oct 14, 2022

@russmatney: Can you create an issue for your proposal and we can discuss further there?

@russmatney
Contributor Author

@rads created an issue here: #128
