
Switch to FxHasher hash function in int64hash and bitmix #52496

Closed
wants to merge 10 commits

Conversation

Zentrik
Contributor

@Zentrik Zentrik commented Dec 12, 2023

This PR switches the current hash function to FxHasher, as discussed in #52440.
On the allinference benchmarks in BaseBenchmarks I didn't see any regressions, and generally saw a ~5% performance improvement.
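
For reference, the core of FxHasher (the multiplicative hash used by rustc-hash) is a single rotate/xor/multiply round per machine word. Below is a minimal C sketch using the published 64-bit FxHasher constant and rotation; the names fx_round64 and FX_CONST64 are mine, and this illustrates the scheme rather than the exact code in this PR:

#include <stdint.h>

// Published 64-bit FxHasher multiplier (from rustc-hash).
#define FX_CONST64 0x517cc1b727220a95ULL

// One FxHasher round: rotate the running hash left by 5, xor in the next
// word, then multiply by the constant (unsigned multiply wraps in C).
static inline uint64_t fx_round64(uint64_t hash, uint64_t word)
{
    hash = (hash << 5) | (hash >> 59);  // rotate left by 5 bits
    hash ^= word;
    return hash * FX_CONST64;
}

// Hashing a single 64-bit integer (the int64hash case) is then just one
// round starting from a zero state: fx_round64(0, x).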

@Zentrik
Contributor Author

Zentrik commented Dec 12, 2023

To get doctests passing I can either make this code use the old hash function

julia/src/symbol.c (lines 22 to 27 at eba10dd):

static uintptr_t hash_symbol(const char *str, size_t len) JL_NOTSAFEPOINT
{
    // hash the symbol's underlying bytes, offset by a fixed constant
    uintptr_t oid = memhash(str, len) ^ ~(uintptr_t)0/3*2;
    // compute the same hash value as v1.6 and earlier, which used `hash_uint(3h - objectid(sym))`
    return inthash(-oid);
}
or update the docs. Not sure if we care about the symbol hash changing.

@simeonschaub
Member

simeonschaub commented Dec 13, 2023

We generally don't guarantee stable hashes across minor versions, so just changing the doctests should be fine.

@KristofferC
Sponsor Member

@nanosoldier runbenchmarks("inference", vs=":master")

@Zentrik
Contributor Author

Zentrik commented Dec 19, 2023

It seems performance has regressed significantly since the first commit, at least locally.

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@vchuravy
Sponsor Member

A good benchmark would be the time it takes to build core.ji and the sysimg. At least in my recent profile, int64hash accounts for about 4%, i.e. roughly 20s on my machine.

@Zentrik
Contributor Author

Zentrik commented Dec 21, 2023

I've had trouble getting reproducible benchmark results, but I'm fairly confident this PR is now an improvement over master on the allinference benchmarks.

As for the time taken to build corecompiler.ji and the sysimage, I didn't notice a difference when running make, but this PR does make bitmix much faster, so it should eliminate the int64hash hotspot from your profile. I'll have to see if I can get perf or some other tool working properly to confirm.
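
For context, bitmix combines two existing hash values into one. A hypothetical FxHasher-based bitmix (my illustration of how this PR could make it cheaper, not necessarily the PR's exact code) just chains two of the rounds sketched above:

// Hypothetical: combine two 64-bit hashes by absorbing both into an
// FxHasher state, reusing fx_round64 from the earlier sketch.
static inline uint64_t bitmix_fx(uint64_t a, uint64_t b)
{
    return fx_round64(fx_round64(0, a), b);
}

That is two rotates, two xors, and two multiplies per pair of words, noticeably less work per call than a longer multi-step integer mix.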

This reverts commit 841185a.
Using bitmix is incorrect, as on 64-bit systems bitmix uses the 64-bit version of FxHasher. Also possibly a performance regression.
This reverts commit 80ee622.
We have had a perf regression since the first commit; I've had trouble getting reliable benchmarking results, so I'm reverting this in case it caused the regression.
Zentrik and others added 2 commits January 9, 2024 16:28
Co-authored-by: Stefan Karpinski <stefan@karpinski.org>
@Zentrik
Contributor Author

Zentrik commented Feb 2, 2024

Coming back to this, I only see small (~5%) regressions. Not sure how I ever saw improvements.
I suspect this is due to poor hash quality causing more rehashing, and objectid still being slow because its cold path prevents inlining (like you probably see when hashing in Rust). I'll try again with an alternate hash function.
