An awesome internet discovery button for developers, tech and science lovers.
A browser extension that takes you to a random site from one of the awesome curated lists. Like good ol' StumbleUpon (which is now dead).
There are 45,787 unique sites from 554 awesome lists on GitHub, thanks to kind contributors. There are some hidden gems waiting in there.
How to use it:
To stumble: Simply click on the extension icon in your browser toolbar.
꩜ Introducing: The Rabbit Hole
We have all been down internet rabbit holes.
One minute you're casually reading the news, the next you've read so much about some random topic that you might as well give a TED talk.
What just happened? The rabbit hole pulled you in and you lost track of time, but you also might have discovered something awesome.
So why not embrace it by having a fancy button for it, obviously.
Stay stumblin' on the same topic, or exit back to random mode.
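Conceptually, the rabbit hole just narrows the random pick from the whole dataset down to sites from the same topic. Here's a minimal Python sketch of that selection logic (illustrative only: the actual extension is browser code, and the site list shape, topic field, and function name here are assumptions, not the real implementation):

```python
import random

# Hypothetical dataset shape: each site tagged with the topic
# of the awesome list it came from.
SITES = [
    {"url": "https://example.com/a", "topic": "machine-learning"},
    {"url": "https://example.com/b", "topic": "rust"},
    {"url": "https://example.com/c", "topic": "rust"},
]

def stumble(rabbit_hole_topic=None):
    """Return a random site URL; if a rabbit-hole topic is set, stay on it."""
    pool = [s for s in SITES
            if rabbit_hole_topic is None or s["topic"] == rabbit_hole_topic]
    return random.choice(pool)["url"]

stumble()                          # random mode: any site
stumble(rabbit_hole_topic="rust")  # rabbit hole: same-topic sites only
```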
To try it out, install the extension manually:
- Clone or fork this repository
- Open Chrome, Brave, or another Chromium-based browser
- Open the extensions page at chrome://extensions
- Enable developer mode
- Click "Load unpacked" and select the cloned repository folder
Here are some of the things I'd like to build out for this extension. The main one right now, however, is simply to curate the links as well as I can, add more data sources, and make sure the pages are a good mix of interesting, useful, fun and exciting.
- Feedback mechanism for good/bad links
- Favourite 'gems' to bookmark folder
- Basic stats
- More data sources beyond the awesome curated lists (tech, science, software, startups, etc.)
- Rabbit hole feature (stay on the same topic)
- Firefox support
- Safari support
A note about permissions
This extension requires the <all_urls> permission in order to show the overlay UI on every page that you stumble to. It does not access data on these sites. There is no tracking or analytics of any kind, and state is only stored locally.
Credit to the curators
This extension is made possible by awesome people curating the internet.
A note about the dataset
To make sure that every link works and is relevant, the dataset is cleaned. Any dead or broken links are removed, as well as links to CI pipelines, recursive links, donation links, etc. This is done with the cleanup functions in utils.py. Running this script can take a few hours on a slow connection.
After each scrape, a record of the removed dead or broken links (those with 404, SSL, or other server errors) is saved in text files in the repository.
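To illustrate, here's a rough sketch of what such a cleanup pass might look like (the function names, the excluded-pattern list, and the dead_links.txt filename are all illustrative assumptions, not the actual contents of utils.py):

```python
import requests

# Illustrative patterns for non-content links (CI pipelines, donations, etc.)
EXCLUDED_PATTERNS = ("travis-ci.org", "circleci.com", "paypal.me", "ko-fi.com")

def is_dead(url: str, timeout: int = 10) -> bool:
    """Treat 4xx/5xx responses, SSL failures, and timeouts as dead links."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code >= 400
    except requests.RequestException:  # covers SSL errors, timeouts, DNS failures
        return True

def clean(urls):
    """Split URLs into kept links and removed dead/broken links."""
    kept, dead = [], []
    for url in urls:
        if any(pattern in url for pattern in EXCLUDED_PATTERNS):
            continue  # drop CI, donation, and similar non-content links outright
        (dead if is_dead(url) else kept).append(url)
    # Keep a record of the removed dead/broken links, as described above
    with open("dead_links.txt", "a") as f:
        f.writelines(link + "\n" for link in dead)
    return kept
```

A real pass would likely also retry with a GET request before declaring a link dead, since some servers reject HEAD requests, which is part of why a full run over tens of thousands of links can take hours.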