
Refactor the DomainProcessor to take advantage of the new crawl data format #69

Merged
merged 71 commits into master from converter-optimizations on Jan 10, 2024

Conversation

@vlofgren (Contributor) commented Dec 27, 2023

With the new crawler modifications, the crawl data comes in a slightly different order, and as a result we can optimize the converter. This is a breaking change that will be incompatible with the old style of crawl data, hence it will linger as a branch for a while.

The processor normally retains the domain data in memory after processing in order to do additional site-wide analysis. This works well, except that a small number of outlier websites have an absurd number of documents that can rapidly fill up the heap of the process. These websites now receive a simplified treatment that skips the topological analysis, similar to what is done when sideloading Wikipedia and StackExchange. This processing is executed in the converter batch writer thread, which is slower, but the documents are not persisted in memory, lowering the overall memory requirement of the converter process.
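As a rough sketch of this dispatch (the class, method, and threshold names below are illustrative assumptions, not the actual DomainProcessor code), the converter might branch on an estimated document count:

```java
// Illustrative sketch only; names and the threshold value are assumptions, not the real implementation.
class DomainDispatchSketch {
    private static final int SIDELOAD_THRESHOLD = 10_000; // assumed cut-off

    void processDomain(int estimatedDocumentCount) {
        if (estimatedDocumentCount >= SIDELOAD_THRESHOLD) {
            // Outlier domain: stream documents through the converter batch writer thread
            // one at a time, skipping site-wide analysis, so the heap never holds the
            // whole domain.  Same style of treatment as the Wikipedia/StackExchange sideloaders.
            processAsSideload();
        } else {
            // Normal domain: retain all documents in memory so site-wide analysis can run.
            processFully();
        }
    }

    void processAsSideload() { /* simplified streaming treatment */ }
    void processFully()      { /* full in-memory treatment */ }
}
```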

The change also moves the link database out of MariaDB and into in-memory storage in the executor services. This is done because writing to and reading from the link database is very slow, leading to long locks and timeouts. It was also one of the few remaining pieces of state that couldn't be rolled back and restored from backup; now it can be. As part of this, the old "linkdb" is renamed to "documents db" to avoid naming confusion, and an automatic migration is added to rename the associated files. This breaks old backups.

Other changes were committed to this branch while it served as a temporary main branch, but these should also have been cherry-picked to master.


The first step is to move stuff out of the domain processor into the document processor.
@vlofgren vlofgren changed the title (WIP) Refactor the DomainProcessor to take advantage of the new crawl data format Refactor the DomainProcessor to take advantage of the new crawl data format (WIP) Dec 27, 2023
The processor normally retains the domain data in memory after processing to be able to do additional site-wide analysis. This works well, except that a number of outlier websites have an absurd number of documents that can rapidly fill up the heap of the process.

These websites now receive a simplified treatment, executed in the converter batch writer thread. This is slower, but the documents will not be persisted in memory.
This commit adds a safety check that the URL of the document is from the correct domain.

It also adds a sizeHint() method to SerializableCrawlDataStream which *may* provide an indication of whether the stream is very large and would benefit from sideload-style processing (which is slow).

It furthermore addresses a bug where ProcessedDomain.write() invoked the wrong method on ConverterBatchWriter and only wrote the domain metadata, not the rest.
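A minimal sketch of the sizeHint() idea, assuming a default method on the stream interface (the real SerializableCrawlDataStream may look different):

```java
import java.util.Iterator;

// Hypothetical stand-in for SerializableCrawlDataStream, for illustration only.
interface CrawlDataStreamSketch extends Iterator<Object> {
    /** Best-effort estimate of the number of documents in the stream.
     *  0 means "no estimate available"; a very large value suggests the slower,
     *  memory-frugal sideload-style processing should be used. */
    default int sizeHint() {
        return 0;
    }
}
```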
Use ProcessingIterator to fan out processing of documents across more cores, instead of doing all of it in the writer thread, where slow single-threaded processing blocks everything else.
Updated ProcessingIterator's queue polling interval from one second to 50 milliseconds. This lets the writer thread pick up completed documents sooner, reducing the bottleneck of slow single-threaded processing.
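A sketch of the fan-out pattern these two commits describe, with hypothetical names (not the actual ProcessingIterator API): worker threads process documents while the writer thread drains a queue with a short poll timeout.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.function.Function;

class FanOutSketch {
    static <I, O> void fanOut(List<I> inputs, Function<I, O> task, Consumer<O> writer)
            throws InterruptedException {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        BlockingQueue<O> results = new ArrayBlockingQueue<>(64);
        AtomicInteger remaining = new AtomicInteger(inputs.size());

        for (I input : inputs) {
            pool.submit(() -> {
                try {
                    results.put(task.apply(input)); // heavy per-document work happens here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    remaining.decrementAndGet();
                }
            });
        }

        // The writer thread polls with a short timeout (50 ms, per the commit above) so it
        // picks up completed documents quickly without busy-waiting.
        while (remaining.get() > 0 || !results.isEmpty()) {
            O next = results.poll(50, TimeUnit.MILLISECONDS);
            if (next != null) {
                writer.accept(next);
            }
        }
        pool.shutdown();
    }
}
```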
Showing a total of 200 connected domains is not very informative.
The URI query string is now URL encoded in the WarcProtocolReconstructor. This change ensures proper encoding of special characters as per the standard URL encoding rules and improves URL validity during the crawling process.
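For illustration, per-component query encoding along these lines could be done with the standard library. This is a hedged sketch, not the actual WarcProtocolReconstructor code, and it assumes the incoming query string is not already encoded:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.StringJoiner;

class QueryStringEncodingSketch {
    static String encodeQuery(String rawQuery) {
        StringJoiner out = new StringJoiner("&");
        for (String pair : rawQuery.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                out.add(URLEncoder.encode(pair, StandardCharsets.UTF_8));
            } else {
                out.add(URLEncoder.encode(pair.substring(0, eq), StandardCharsets.UTF_8)
                        + "=" + URLEncoder.encode(pair.substring(eq + 1), StandardCharsets.UTF_8));
            }
        }
        return out.toString();
    }
}
```

Note that URLEncoder follows form-encoding rules (spaces become '+'), so a URI-oriented encoder may be preferable in practice.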
Route the sizeHint from the input parquet file to SideloadProcessing, so that it can set sideloadSizeAdvice appropriately, instead of using a fixed "large" number.

This is necessary to populate the KNOWN_URL column in the domain data table, which is important since it is used e.g. when calculating how far to re-crawl the site in the future.
This function made some very flimsy-looking assumptions about the order of an iterable.  These are still made, but more explicitly so.
Modify ProcessingIterator to be constructed via a factory, to enable re-use of its backing executor service.

This reduces thread churn in the converter sideloader style processing of regular crawl data.
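Roughly, the factory owns the executor service and each iterator it creates borrows that pool. A simplified sketch with hypothetical names (the real ProcessingIterator presumably yields results lazily as an iterator rather than an eager list):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

class ProcessingFactorySketch {
    private final ExecutorService sharedPool; // created once, reused across domains

    ProcessingFactorySketch(int threads) {
        this.sharedPool = Executors.newFixedThreadPool(threads);
    }

    // Work created through the factory borrows sharedPool instead of spinning up its own
    // pool, which is what cuts down thread churn during sideload-style conversion.
    <I, O> List<O> mapAll(List<I> inputs, Function<I, O> task) throws InterruptedException {
        List<Callable<O>> callables = new ArrayList<>();
        for (I input : inputs) {
            callables.add(() -> task.apply(input));
        }
        List<O> results = new ArrayList<>(inputs.size());
        for (Future<O> f : sharedPool.invokeAll(callables)) {
            try {
                results.add(f.get());
            } catch (ExecutionException e) {
                throw new RuntimeException(e.getCause());
            }
        }
        return results;
    }

    void shutdown() {
        sharedPool.shutdown();
    }
}
```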
A necessary step was accidentally deleted when cleaning up these tests previously.
This seems like it would make the Wikipedia search results worse, but it drastically improves the result quality!

This is because Wikipedia has a lot of articles that each touch on many irrelevant concepts, and indexing the entire document means tangentially relevant results tend to displace the most relevant ones.
There was a bug where, if the input of ResultValuator.normalize() was negative, it was truncated to zero. This meant that "bad" results always ranked the same. The penalty factor "overallPart" was moved outside of the function and re-weighted to achieve a better normalization.

Some of the weights were also re-adjusted based on what appears to produce better results.  Needs evaluation.
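To illustrate why the truncation collapsed the ranking (placeholder names and weights only; the actual ResultValuator formula is not reproduced here):

```java
// Illustrative only; not the actual ResultValuator code.
class NormalizationSketch {
    // Before: the penalty was applied inside the clamped normalization.  Any result whose
    // penalty exceeded its positive score was truncated to 0, so all "bad" results ranked
    // the same regardless of how bad they were.
    static double before(double positiveScore, double overallPart) {
        return Math.max(0, positiveScore - overallPart);
    }

    // After: clamp only the positive part and apply the (re-weighted) penalty outside,
    // so a larger penalty still yields a worse final value.
    static double after(double positiveScore, double overallPart) {
        return Math.max(0, positiveScore) - 0.5 * overallPart; // 0.5 is an arbitrary example weight
    }
}
```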
This is a test to evaluate how this impacts load times.
This dependency causes the executor service docker image to change when the index service docker image changes.
Settings for enabling reproducible builds for all subprojects were added to improve build consistency. This includes preserving file timestamps and ordering files reproducibly.

This primarily helps Docker, since it uses hashes to determine whether a file or image layer has changed.
The EC_DOMAIN_LINK MariaDB table stores links between domains.  This is problematic, as both updating and querying this table is very slow in relation to how small the data is (~10 GB).  This slowness is largely caused by the database enforcing ACID guarantees we don't particularly need.

This changeset replaces the EC_DOMAIN_LINK table with a file in each index node containing 32 bit integer pairs corresponding to links between two domains.  This file is loaded in memory in each node, and can be queried via the Query Service.

A migration step is needed before this file is created in each node. Until that happens, the actual data is loaded from the EC_DOMAIN_LINK table, but accessed as though it were a file.

The changeset also migrates/renames the links.db file to documents.db to avoid naming confusion between the two.
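A minimal sketch of such a file-backed, in-memory link store (file layout, names, and the linear-scan query are assumptions for illustration, not the actual implementation):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

class DomainLinksSketch {
    // Each link is a (source, dest) pair of 32-bit domain ids, packed into one long.
    private final long[] links;

    DomainLinksSketch(Path file) throws IOException {
        int count = (int) (Files.size(file) / 8); // 8 bytes per pair
        links = new long[count];
        try (var in = new DataInputStream(Files.newInputStream(file))) {
            for (int i = 0; i < count; i++) {
                long source = in.readInt() & 0xFFFF_FFFFL;
                long dest   = in.readInt() & 0xFFFF_FFFFL;
                links[i] = (source << 32) | dest;
            }
        }
    }

    // Linear scan for clarity; the real query service may organize the data differently.
    int[] findDestinations(int sourceDomainId) {
        return Arrays.stream(links)
                .filter(l -> (int) (l >>> 32) == sourceDomainId)
                .mapToInt(l -> (int) l)
                .toArray();
    }
}
```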
The new converter logic assumes that the crawl data is ordered where the domain record comes first, and then a sequence of document records.  This is true for the new parquet format, but not for the old zstd/gson format.

To make the new converter compatible with the old format, a specialized reader is introduced that scans for the domain record before running through the sequence of document records, and presents them in the new order.

This is slower than just reading the file beginning to end, so in order to retain performance when this ordering isn't necessary, a CompatibilityLevel flag is added to CrawledDomainReader, permitting the caller to decide how compatible the data needs to be.

Down the line when all the old data is purged, this should be removed, as it amounts to technical debt.
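A sketch of the reordering idea, under the assumption that the old-format stream can simply be opened twice (type names are placeholders, not the actual CrawledDomainReader):

```java
import java.util.Iterator;
import java.util.function.Consumer;
import java.util.function.Supplier;

class DomainFirstCompatSketch {
    sealed interface Record permits DomainRecord, DocumentRecord {}
    record DomainRecord(String domain) implements Record {}
    record DocumentRecord(String url) implements Record {}

    // First pass: scan for the domain record.  Second pass: emit the document records.
    // Memory stays flat, but the data is read twice, which is why this path is slower
    // than a front-to-back read of the new parquet format.
    static void readDomainFirst(Supplier<Iterator<Record>> openStream, Consumer<Record> sink) {
        Iterator<Record> scan = openStream.get();
        while (scan.hasNext()) {
            if (scan.next() instanceof DomainRecord domain) {
                sink.accept(domain);
                break;
            }
        }
        Iterator<Record> docs = openStream.get();
        while (docs.hasNext()) {
            if (docs.next() instanceof DocumentRecord doc) {
                sink.accept(doc);
            }
        }
    }
}
```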
This facilitates switching between SQL and File-backed implementations on the fly while migrating from one to the other.
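For example (hypothetical names): a common interface with a SQL-backed and a file-backed implementation, selected per node depending on whether the migrated link file exists yet.

```java
import java.nio.file.Files;
import java.nio.file.Path;

interface DomainLinkSource {
    int[] findDestinations(int sourceDomainId);
}

class DomainLinkSourceSelector {
    static DomainLinkSource select(Path linkFile,
                                   DomainLinkSource sqlBacked,
                                   DomainLinkSource fileBacked) {
        // Until the migration step has produced the file, fall back to the EC_DOMAIN_LINK table.
        return Files.exists(linkFile) ? fileBacked : sqlBacked;
    }
}
```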
Rendering is very slow. Let's see if this has a measurable effect on latency.
Will by default show results from the last 2 years.  May need to tune this later.
Actually persist the value of the toggle between searches too...
Swipe right to show filter menu.

Fix CSS bug that caused parts of the menu to not have a background.
…plain text

This was caused by incorrect usage of the renderInto() function, which was always buggy and should never have been used. The method is removed with this change.
@vlofgren vlofgren changed the title Refactor the DomainProcessor to take advantage of the new crawl data format (WIP) Refactor the DomainProcessor to take advantage of the new crawl data format Jan 10, 2024
@vlofgren vlofgren marked this pull request as ready for review January 10, 2024 08:46
@vlofgren vlofgren merged commit fad9575 into master Jan 10, 2024
@vlofgren vlofgren deleted the converter-optimizations branch March 21, 2024 13:42