
Release v0.4.22 #6506

Closed
Stebalien opened this issue Jul 12, 2019 · 39 comments

Comments

@Stebalien (Contributor) commented Jul 12, 2019

go-ipfs 0.4.22 release

We're releasing a PATCH release of go-ipfs based on 0.4.21 containing some critical fixes.

The past several releases have been shaky, and the network has scaled to the point where small changes can have a wide-reaching impact on the entire network. To keep this situation from escalating, we've put a hold on releasing new features until we can improve our release process (which we will be trialing in this release) and our testing procedures.

Current RC: v0.4.22-rc1
Install: ipfs update install v0.4.22-rc1

🗺 What's left for release

🔦 Changelog

This release includes fixes for the following regressions:

  1. A major bitswap throughput regression introduced in 0.4.21 (ipfs/go-ipfs#6442).
  2. High bitswap CPU usage when connected to many (e.g., 10,000) peers. See ipfs/go-bitswap#154.
  3. The local network discovery service sometimes initializing before the networking module, causing it to announce the wrong addresses and sometimes complain about not being able to determine its IP address (ipfs/go-ipfs#6415).

It also includes fixes for:

  1. Pins not being persisted after ipfs block add --pin (ipfs/go-ipfs#6441).
  2. Concurrent map access on GC due to the pinner (ipfs/go-ipfs#6419).
  3. Potential pin-set corruption given a concurrent ipfs repo gc and ipfs pin rm (ipfs/go-ipfs#6444).
  4. Build failure due to a deleted git tag in one of our dependencies (ipfs/go-ds-badger#64).

Release Checklist

For each RC published in each stage:

  • version string in version.go has been updated
  • tag commit with vX.Y.Z-rcN
  • upload to dist.ipfs.io
    1. Build: https://github.com/ipfs/distributions#usage.
    2. Pin the resulting release.
    3. Make a PR against ipfs/distributions with the updated versions, including the new hash in the PR comment.
    4. Ask the infra team to update the DNSLink record for dist.ipfs.io to point to the new distribution.

Checklist:

  • Stage 1 - Internal Testing
    • Feature freeze. If any "non-trivial" changes (see the footnotes of docs/releases.md for a definition) get added to the release, uncheck all the checkboxes and return to this stage.
    • CHANGELOG.md has been updated
    • Automated Testing (already tested in CI) - Ensure that all tests are passing, this includes:
    • [-] Network Testing:
      • [-] test lab things - Not Ready.
    • Infrastructure Testing:
      • Deploy new version to a subset of Bootstrappers
      • [-] Deploy new version to a subset of Gateways -- skipped as these are currently running a special fork.
      • Deploy new version to a subset of Preload nodes
      • Collect metrics every day. Work with the Infrastructure team to learn of any hiccup
    • IPFS Application Testing - Run the tests of the following applications:
  • Stage 2 - Public Beta
    • Reach out to the IPFS early testers listed in docs/EARLY_TESTERS.md for testing this release (check when no more problems have been reported). If you'd like to be added to this list, please file a PR.
    • Reach out on IRC for beta testers.
    • Run tests available in the following repos with the latest beta (check when all tests pass):
      • orbit-db
        • two tests failed on both js-ipfs and go-ipfs.
  • Stage 3 - Soft Release
  • Stage 4 - Release
    • Final preparation
    • Publish a Release Blog post (at minimum, a copy of this release issue with all the highlights, API changes, a link to the changelog, and thank-yous)
    • Broadcasting (link to blog post)

❤️ Contributors

Would you like to contribute to the IPFS project but don't know how? Well, there are a few places you can get started:

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.


This release is currently being readied in #6484.

@Stebalien Stebalien added the releases label Jul 12, 2019

@campoy commented Jul 12, 2019

Getting some love when I was actually the cause of the breakage?
You're a classy project 😄

❤️

@Stebalien (Contributor, Author) commented Jul 12, 2019

@campoy I've done the same thing (deleted release tags). Thanks for jumping in and helping us fix the situation.

@daviddias daviddias pinned this issue Jul 15, 2019

@Stebalien (Contributor, Author) commented Jul 19, 2019

Stage 1... done!
Stage 2...

Early testers:

  1. Please read https://github.com/ipfs/go-ipfs/blob/master/docs/EARLY_TESTERS.md (which now describes the expectations) and confirm that you're willing to participate (@obo20, I assumed you wanted to be on this list).

  2. go-ipfs v0.4.22-rc1 has passed all internal testing and is ready for public beta testing. Please try it out on your test infra (if any) and run your tests suites/apps against it.

  3. Please confirm when you've done all relevant testing so we can move on to stage 3.

This release adds no new features, just some critical fixes applied to v0.4.21. See the highlights in the issue description for a list of changes.

@sanderpick (Contributor) commented Jul 19, 2019

Call me dangerous, but we've been running ahead of the releases because of a pending cluster integration, which depends on go-libp2p-core. I'm not sure it makes sense to backpedal and test with v0.4.22-rc1, but I can give that a shot this weekend.

@Stebalien (Contributor, Author) commented Jul 20, 2019

@sanderpick don't bother. All the changes in 0.4.22-rc1 are also in master. I'll count that as a sign-off.

@obo20 commented Jul 21, 2019

Things are working well on our end @Stebalien. Nothing major to report.

@koalalorenzo (Contributor) commented Jul 21, 2019

Thanks @Stebalien! Here are some suggestions from somebody with both feet still on planet earth 🙃

  1. Use Semantic Versioning so it's easy to understand how changes will impact our systems.
  2. Include beta/rc versions in the ChangeLog (the file) so we know what to test (see Orion's changelog and Keep a Changelog).
  3. Is it possible to include the builds in GitHub when tagging rc or beta releases? (There is a feature to mark a release as a "pre-release".) Reason: our pipelines are not downloading binaries from dist.ipfs.io because downloads there keep failing (mostly timeouts, presumably due to being backed by IPFS), while GitHub is more reliable. 🤷‍♂️

I still don't know why Protocol Labs has its own non-conventional ways of doing things 😅 and here is a cute cat:

[image: cute-small-cat-wallpaper]

@postables (Member) commented Jul 21, 2019

Looks good in CI tests. No new or negative issues spotted in our development environment so far.

How long do we have to give our final analysis? Ideally I'd like to test things for a week or so in dev, but if that's too long, that's understandable.

@Stebalien (Contributor, Author) commented Jul 21, 2019

Use Semantic Versioning to understand easily how changes are impacting our systems.

The next release with features will be v0.5.0 so we can clearly distinguish between patch releases and feature releases. For some context:

  1. 0.5.0 was supposed to be the official "beta" release of IPFS. Hence the whole "stuck on 0.4.x" thing.
  2. Historically, we've used minor releases to indicate major breaking changes (0.3.0 -> 0.4.0 broke network compatibility). This is actually pretty common in pre-1.0 software to clearly indicate breaking changes.

Include in the ChangeLog (the file) beta/rc versions so we know what to test ( see Orion's and Keep a Changelog)

There's a changelog in the release PR. The section in the issue body previously named "highlights" and now named "changelog" is a complete changelog (sorry for the confusion).

Is it possible to include the builds in GitHub when tagging rc or beta releases? (there is a feature to mark it as a "pre-release").

Sure.

Reason: our pipelines are not downloading binaries from dist.ipfs.io because it is always failing (timeout mostly, due to being backed by IPFS), while GitHub is more reliable.

Is this still happening (i.e., since July)?

I still don't know why Protocol Labs has its own non-conventional things

Sometimes, because we have good reasons we haven't written down. Other times, 🤷‍♂. Never hesitate to ask.

obligatory cat (mine and therefore the best in the world)
[image: IMG_20190522_173855]

@Stebalien (Contributor, Author) commented Jul 21, 2019

@postables

How long do we have to give our final analysis? Ideally I'd like to test things for a week or so in dev but if that's too long thats understandable

Take your time. We'd like to get this out to users ASAP but we're also trying out our new release process here and we want to get this right.

@postables (Member) commented Jul 22, 2019

@Stebalien understandable. It looks good right now, CI builds pass, no apparent new issues in dev, and so far no noticeable regressions. However if possible I'd like to hold off on a final judgement for a few more days in case anything crops up.

@obo20 commented Jul 22, 2019

@Stebalien in response to your question:

Is this still happening (i.e., since July)?

We've been hitting this incredibly often and still do. It got so bad that we started hosting our own copies of the binaries (our ansible deployments were failing around 90% of the time due to timeouts).

The only reason we still encounter this problem is because we still have to initially pull new binary versions from dist.ipfs.io

@postables (Member) commented Jul 22, 2019

FWIW if you run into issues with sites like dist.ipfs.io you can quite easily load it up via a gateway like so: https://foo.bar/ipns/dist.ipfs.io

@koalalorenzo (Contributor) commented Jul 23, 2019

FWIW if you run into issues with sites like dist.ipfs.io you can quite easily load it up via a gateway like so: https://foo.bar/ipns/dist.ipfs.io

Most of the time that doesn't work either, as the content itself is hard to discover without some DHT magic or a direct connection. :( Hopefully a new version fixes that :P

@Stebalien (Contributor, Author) commented Jul 23, 2019

This new version won't fix that; it's just a patch release. We have some DHT patches that we believe will help once deployed to the entire network. However, we're holding off until we can finish our test network so we can actually test how this code will affect the network.

@ianopolous (Member) commented Jul 25, 2019

We've been hitting this incredibly often and still do.

For what it's worth, we hit the same issues, hence: https://github.com/peergos/ipfs-releases/

@Stebalien (Contributor, Author) commented Jul 29, 2019

Early testers,

It's been a bit over a week. Any new issues with the release and/or can we move on to stage 3?

@b5 (Contributor) commented Jul 29, 2019

tl;dr: LGTM

The Qri crew is completely tied up in non-IPFS stuff at the moment, leaving us little time to give proper feedback on this release cycle. I've taken a quick look at the changelog and everything is in keeping with what we've expected, so I'd rubber-stamp this as good-to-go.

We're very much looking forward to properly contributing to the early testing process on the next release. Please keep us in the loop!

@Stebalien (Contributor, Author) commented Jul 29, 2019

@b5 SGTM. Thanks for the signoff.

@postables (Member) commented Jul 29, 2019

Totally forgot to reply with my update: it looks good!

@koalalorenzo (Contributor) commented Jul 29, 2019

It looks good also for us on Siderus Orion client!

@Stebalien (Contributor, Author) commented Jul 29, 2019

Stage 1... done!
Stage 2... done!
Stage 3...

Early testers:

We have entered stage 3 of our release process, the "soft" release. We now consider this go-ipfs release to be production ready and don't expect any more RCs. Please deploy it on production infrastructure as you would a normal release. This stage allows us to rapidly fix any last-minute issues with the release without cutting an entirely new release.

When you're satisfied that 0.4.22-rc1 is at least as stable as 0.4.21, please sign off on this issue.

@sanderpick (Contributor) commented Jul 29, 2019

Same answer from me this time, @Stebalien. We're ahead of the release at the moment. Consider me a ✔️.

@obo20 commented Jul 30, 2019

Same for us @Stebalien. Mark us as good to go.

@postables (Member) commented Jul 30, 2019

Looks good from my end, and we even see a noticeable (albeit small) drop in CPU utilization 🚀
[image: lookin_good]

@Stebalien (Contributor, Author) commented Jul 30, 2019

That's probably:

High bitswap CPU usage when connected to many (e.g., 10,000) peers. See ipfs/go-bitswap#154.

Good to know it helped.

@postables (Member) commented Jul 31, 2019

That would definitely cause it; good to know it's working!

For what it's worth, there also appears to be an improvement in memory usage. This is looking like a great release so far 🚀 (the graph displays free memory, as opposed to consumed memory)

@koalalorenzo (Contributor) commented Jul 31, 2019

Planned to deploy it tomorrow (~10:00 CET)

@Stebalien (Contributor, Author) commented Aug 2, 2019

@koalalorenzo 🔥 or 😎?

@Stebalien Stebalien added the release label Aug 2, 2019

@Stebalien (Contributor, Author) commented Aug 6, 2019

Stage 3 done.
Stage 4...

Building and releasing today (hopefully).

@hacdias (Member) commented Aug 8, 2019

@Stebalien status on this?

@Stebalien (Contributor, Author) commented Aug 8, 2019

Built but we're waiting on some blog post stuff. We may release first if we can't get everything ready in time.

@rklaehn commented Aug 9, 2019

Just wanted to let you know that I very much agree with the decision to put a hold on releasing new features until there is a process to ensure that the existing features work reliably...

@Retia-Adolf commented Aug 13, 2019

The go-ipfs_v0.4.22_windows-amd64.zip archive contains a non-Windows binary :|

@Stebalien (Contributor, Author) commented Aug 13, 2019

@Retia-Adolf thanks for the report. This should be fixed now and I apologize for flubbing it.

@andrewheadricke commented Aug 15, 2019

darwin amd64 build looks borked.

    11:31:21.989 ERROR   cmd/ipfs: error from node construction: could not build arguments for function "reflect".makeFuncStub (/usr/lib/go/src/reflect/asm_amd64.s:12): failed to build provider.Provider: could not build arguments for function "github.com/ipfs/go-ipfs/core/node".ProviderCtor (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:24): failed to build *provider.Queue: function "github.com/ipfs/go-ipfs/core/node".ProviderQueue (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:19) returned a non-nil error: strconv.ParseUint: parsing "1565442853283077000/b": value out of range daemon.go:337

@Stebalien (Contributor, Author) commented Aug 15, 2019

@andrewheadricke you've downgraded from master to 0.4.22. Master includes some new patches (and probably needs an explicit repo migration).

@andrewheadricke commented Aug 15, 2019

Thanks @Stebalien, I deleted my .ipfs directory and now it's working.
