This repository has been archived by the owner on Jan 28, 2021. It is now read-only.
Goals

Now that we have partitions, we can parallelise index creation across partitions, which takes far less time than creating them sequentially.
Challenges
The indexes are a big matrix of 1s and 0s: each column corresponds to a database row, and each matrix row corresponds to a unique value of the indexed table field.
We also keep a mapping from each value to its row id in the bitmap, so we need to keep track of the row ids.
The problem comes from the columns: the column id is sequentially incremented.
Ideas
We could share a global counter protected by a mutex. That way, there would be no need to pass the colID while still keeping it sequential. This has one downside, though: index creation is not idempotent, and the order in which the rows are stored (and thus returned) will differ every time the same index is created.
We would also need to stop saving batches and fields in the driver structure, as multiple operations might take place at the same time.
UPDATE: this can't be done, or indexes can't be combined.
That way, colID doesn't need to be passed and we only store what's inside the partition. Then we can safely parallelise the creation of all partitions.
Updates
With this, we can safely invalidate the fields and their mappings and recreate them if they changed.
@erizocosmico - regarding the latest proposal, if I understand correctly: if we save colID in fields, how do we sync up the same locations across multiple partitions (fields) when they may have a different colID per field?
Because the main problem relates to column IDs (which are global per index), I suggest two alternative scenarios:
S1
Replace iteration over columns with bucket.Sequence(). So far, in mapping we have one bucket per field (keeping the mapping value → rowID) and one bucket per index (colID → location).
In other words, we'll first save the mapping (colID → location), which will use the bucket sequencer (for performance and thread safety), and then set the bit in the bitmap.
I think this is better than wrapping a loop over colID with a mutex. Moreover, it keeps all the synchronization calls in mapping.
So our existing loop:

```go
for {
	// ...
	rowID, err := idx.mapping.getRowID(field.Name(), values[i])

	// getColID will do similar things as getRowID:
	// create a new ID if one doesn't exist yet.
	colID, err := idx.mapping.getColID(idx.Name(), location)

	b.bitBatches[i].Add(rowID, colID)
}
```
S2
The second scenario is totally different. If we want to populate N fields with data:
- We create N goroutines. Every goroutine will receive a value and a location (maybe over a channel) and save it in mapping and in pilosa, or in a batch.
- The main thread will iterate over columns (or get the next column ID from mapping).
- It will assign the location to a column ID and send tuples (value, location) over the channel to every goroutine.
In other words, the main thread is responsible for mapping locations and generating column IDs, while the background goroutines save the mapping and set bits in their own fields.