
uuid1/0 is slow because it gets the MAC address every time #40

Closed
whatyouhide opened this issue Oct 24, 2019 · 10 comments
Comments

@whatyouhide
Contributor

Benchmarks suggest that uuid1/0 is roughly 100× slower than its counterparts (about two orders of magnitude).

benchmark name     iterations   average time 
uuid5 dns             1000000   1.46 µs/op
uuid3 dns             1000000   1.51 µs/op
uuid4                 1000000   2.04 µs/op
uuid1                   10000   179.36 µs/op

I benchmarked the individual steps of uuid1 to see why it is so much slower than its counterparts:

benchmark name  iterations   average time 
uuid1_time        10000000   0.13 µs/op
uuid1_clockseq    10000000   0.78 µs/op
uuid1_node           10000   184.41 µs/op
uuid1                10000   192.78 µs/op
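
The issue doesn't name the benchmarking tool, but the output format looks like Benchfella's; a comparable benchmark module might look like this sketch (the :dns namespace arguments are illustrative):

```elixir
# Sketch of a Benchfella suite comparable to the numbers above.
defmodule UUIDBench do
  use Benchfella

  bench "uuid1" do
    UUID.uuid1()
  end

  bench "uuid4" do
    UUID.uuid4()
  end

  bench "uuid3 dns" do
    UUID.uuid3(:dns, "example.com")
  end

  bench "uuid5 dns" do
    UUID.uuid5(:dns, "example.com")
  end
end
```

Run with `mix bench` after adding :benchfella to the dev dependencies.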

It's all :inet.getifaddrs/0's fault. I was thinking that we could easily cache this by using ETS on OTP < 21.2 and :persistent_term on OTP >= 21.2. That means getting the MAC address once, when the uuid application starts, and reusing it from then on. It also means that if the address changes on the fly, the change wouldn't get picked up. Are we fine with this? If so, I'll send a PR. Let me know your thoughts @zyro! 💟
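
A minimal sketch of the caching idea, using a hypothetical UUID.NodeCache module: :persistent_term on OTP >= 21.2, an ETS table on older releases.

```elixir
# Hypothetical sketch, not the eventual PR: cache the 48-bit node id once
# so that uuid1/0 stops calling :inet.getifaddrs/0 on every invocation.
defmodule UUID.NodeCache do
  @moduledoc false

  @key {__MODULE__, :node_id}

  # Intended to be called once from the application's start/2 callback.
  def cache_node_id do
    node_id = fetch_node_id()

    if Code.ensure_loaded?(:persistent_term) do
      # OTP >= 21.2
      :persistent_term.put(@key, node_id)
    else
      # OTP < 21.2: fall back to a read-optimized ETS table.
      if :ets.info(__MODULE__) == :undefined do
        :ets.new(__MODULE__, [:named_table, :public, read_concurrency: true])
      end

      :ets.insert(__MODULE__, {:node_id, node_id})
    end

    :ok
  end

  # Read the cached value; in a real implementation the OTP version check
  # would be resolved at compile time instead of on every call.
  def node_id do
    if Code.ensure_loaded?(:persistent_term) do
      :persistent_term.get(@key)
    else
      [{:node_id, node_id}] = :ets.lookup(__MODULE__, :node_id)
      node_id
    end
  end

  # First non-zero hardware address reported by :inet.getifaddrs/0, falling
  # back to random bytes (RFC 4122 also wants the multicast bit set on a
  # random node id; omitted here for brevity).
  defp fetch_node_id do
    with {:ok, ifaddrs} <- :inet.getifaddrs(),
         [_ | _] = hwaddr <- find_hwaddr(ifaddrs) do
      :erlang.list_to_binary(hwaddr)
    else
      _ -> :crypto.strong_rand_bytes(6)
    end
  end

  defp find_hwaddr(ifaddrs) do
    Enum.find_value(ifaddrs, fn {_ifname, opts} ->
      case :proplists.get_value(:hwaddr, opts) do
        hwaddr when is_list(hwaddr) and length(hwaddr) == 6 and hwaddr != [0, 0, 0, 0, 0, 0] ->
          hwaddr

        _other ->
          nil
      end
    end)
  end
end
```

uuid1/0 would then read UUID.NodeCache.node_id/0 instead of hitting :inet.getifaddrs/0 per call.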

@relistan

relistan commented Oct 25, 2019

💯 We got bitten by how much slower uuid1/0 is than uuid4/0. IME it's exceedingly, exceedingly rare for a MAC address to change on a running system unless you are using e.g. VPN tunnels. In that event, do you really want to change the way UUIDs are generated on the fly? I at least would say no.

@whatyouhide
Contributor Author

It's also worth noting that we could add a poller to the elixir_uuid application that refreshes this value every 5 or 10 seconds or so. That wouldn't add any noticeable overhead to a running system, but it would solve this problem almost entirely (if this is indeed a "problem").
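
A sketch of that poller, reusing the hypothetical UUID.NodeCache from above and an illustrative 10-second interval:

```elixir
# Hypothetical poller: re-reads the MAC address periodically so that an
# on-the-fly address change is eventually picked up.
defmodule UUID.NodeRefresher do
  use GenServer

  @refresh_interval :timer.seconds(10)

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    # Populate the cache once at startup, then refresh on a fixed interval.
    :ok = UUID.NodeCache.cache_node_id()
    schedule_refresh()
    {:ok, nil}
  end

  @impl true
  def handle_info(:refresh, state) do
    :ok = UUID.NodeCache.cache_node_id()
    schedule_refresh()
    {:noreply, state}
  end

  defp schedule_refresh do
    Process.send_after(self(), :refresh, @refresh_interval)
  end
end
```

Started from the application's supervision tree, the cost is one :inet.getifaddrs/0 call every 10 seconds instead of one per generated UUID.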

@whatyouhide
Contributor Author

Ping @zyro? 🙏

@zyro
Owner

zyro commented Oct 29, 2019

I don't actively use this at the moment but I'm all for improving the performance here. 👍

If you open a PR I'll try to review/merge/release as soon as I can.

@zyro
Owner

zyro commented Oct 29, 2019

As a side note I wouldn't expect the MAC address to change while the system is running. I guess it's possible but I'm not sure I've ever had a valid case of that happening on a production system without the application restarting - nodes being cycled out of/into a cluster for a rolling hardware upgrade, for example.

I'm fine with caching the address for the lifetime of the application unless there's very low overhead and complexity to refreshing periodically.

@whatyouhide
Contributor Author

> I don't actively use this at the moment but I'm all for improving the performance here.

We use this pretty heavily, and this concerned me a bit. Are you still actively interested in maintaining it? For example, we had problems when the library was renamed from :uuid to :elixir_uuid between patch versions. Was that intentional, and if so, was there a reason for doing it?

@zyro
Owner

zyro commented Oct 31, 2019

Discussion was on #24. I didn't imagine any breaking issues: since the package name was changed, anything importing the previous uuid would be fine. The core issue, if I recall correctly, was mixing together applications/libraries that depended on this package and uuid_erl, which also defines its app name as :uuid.

I'm not spending active time maintaining this, but I'm happy to merge in patches and release new versions if needed. I largely see this as feature-complete, so I would only expect bug fixes and performance improvements going forward.

I had hoped Elixir would introduce a UUID package in its standard library and make this obsolete, but that hasn't happened yet and I'm not aware of any plans for it.

@whatyouhide
Contributor Author

> I didn't imagine any breaking issues: since the package name was changed, anything importing the previous uuid would be fine.

I see.

> I had hoped Elixir would introduce a UUID package in its standard library and make this obsolete, but that hasn't happened yet and I'm not aware of any plans for it.

We have no plans to introduce UUIDs in the standard library. That decision makes this library crucial to the ecosystem, because it has the potential to become the standard UUID implementation.

> I largely see this as feature-complete

I think there's still room for more features such as UUID validation, so I'm not sure I would call this feature-complete.
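
For example, string validation could be as small as a canonical-format check; a minimal sketch (the module and function are hypothetical, not part of the library today):

```elixir
# Hypothetical sketch of UUID string validation via a canonical-format check.
defmodule UUID.Validator do
  @uuid_regex ~r/\A[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{4}-[0-9a-f]{12}\z/i

  @spec valid?(String.t()) :: boolean()
  def valid?(string) when is_binary(string), do: Regex.match?(@uuid_regex, string)
  def valid?(_other), do: false
end
```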

If you feel like this could become the standard Elixir UUID library used by the community, I'll be happy to give a hand maintaining this and moving it forward (since I have allocated OSS time to actively maintain community libraries). Let me know your thoughts :)

@zyro
Owner

zyro commented Nov 1, 2019

I currently have minimal time to give this library, but I can continue merging pull requests and publishing releases to Hex. If you're willing to contribute time, I'm happy to talk about collaborator access to the repo.

On a related note, I can also see that uuid is still getting more downloads than elixir_uuid, so the shift has not been very fast, and I expect app name conflicts with uuid_erl are still happening - which is what the name change was originally intended to help with. Would you suggest going back to uuid as the 'official' name and release for this library? We should move this to a separate discussion if that makes more sense.

@whatyouhide
Contributor Author

Closing in favor of the PR (#45).
