NOTE: The intro below is obsolete.
Unfortunately, the UDP-based system described below proved unreliable enough in practice that I discarded it (at least temporarily) in favor of a few other network-backend ideas.
And THEN Google obsoleted their SOAP API in favor of JSON, rendering the original front-end client useless.
I recently rewrote the front end to use JSON as its transport, and am fiddling with SCTP on the backend.
If all you are looking for is a command-line search client, I highly recommend surfraw instead.
If, instead, you want something written in C, integrated with Emacs, that outputs results in org and markdown format, please feel free to participate in this project!
In the race to add feature after feature after feature, Google and other search engines have lost sight of the original appeal of the service. A typical Google results page is now over 25 KB, not counting the images, contains dozens of extra links, and takes 5-10 seconds to download and render over slow, international links.
I’ve always admired the DNS system, and felt that now that “search” is almost a commodity, it would be possible to define a binary, UDP-based protocol for it. There are plenty of unused, useless services in /etc/services - having a dedicated port number for search makes sense.
The problem with that is that most query/response protocols don’t work well through a firewall.
Enter IPv6. Without NAT, you have end-to-end networking. Response problem solved. Billing problem solved, too.
The original version of this system used 2!! packets to get a query out and a response back. Going through a 6in4 tunnel, it’s 4 packets (2 of them very short, however).
This means that by the time you would complete a TCP handshake with ordinary Google, you’ve already got a response from this system, on a high-latency link.
Queried from Australia, the earliest prototype got its response in half the time google.com.au took, best case. Due to the unreliable network I was on, it was often actually 5-10 times faster. I began to believe I was onto something.
Additional benefits: vastly reduced data traffic, and formatting that can be controlled on the phone or on remote servers.
38 packets vs 4. Not bad. Some additional tweaking was in order.
The second nice thing about end-to-end networking is that it makes it possible - assuming a static IP address - to have for-pay search. I’d gladly pay a few bucks a month for faster, ad-free searching!
As a C-language tool, gnugol is much faster than (for example) a Python-based one. Smaller, too.
Getting custom output
I write in org or markdown format. When I get a search back via surfraw, it’s in raw HTML, which is both ugly and hard to reformat, so the current release of gnugol has output formatters that give me results in a time-saving format.
The code is undergoing a major rewrite to generalize it, support more JSON backends (notably Xapian), and clean up the plugin idea, so I can go back to exploring the original idea: using some transport other than TCP to carry this sort of query over international links.
There are all sorts of vestiges of the old code in here; it doesn’t even build without hand-tweaking. The Emacs code in this repo is totally obsolete and depends on another, unreleased library anyway. And so on.