In the interest of saving space, we can encode the observation record keys: instead of "[some column name]" we could store "1", etc.
This would also require mapping queries to the coded keys, and mapping the coded keys back to the column names on their way into a dframe (we can pass columns= to the dframe constructor).
This assumes we are not planning on moving to a column store (which seems to be true).
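As a rough illustration of the proposal, here is a minimal sketch of a positional key coding ("0", "1", ...) with mapping in both directions; the helper names are illustrative, not existing code.

```python
def build_key_maps(columns):
    """Map each column name to a short coded key and back again."""
    encode = {name: str(index) for index, name in enumerate(columns)}
    decode = {code: name for name, code in encode.items()}
    return encode, decode

def encode_record(record, encode):
    """Swap column names for coded keys before the record is stored."""
    return {encode[name]: value for name, value in record.items()}

def decode_record(record, decode):
    """Restore readable column names on the way back out, e.g. just
    before the records are handed to the dframe constructor."""
    return {decode[code]: value for code, value in record.items()}

encode, decode = build_key_maps(["rating", "amount"])
stored = encode_record({"rating": "good", "amount": 2}, encode)
print(stored)                         # {'0': 'good', '1': 2}
print(decode_record(stored, decode))  # {'rating': 'good', 'amount': 2}
```

The coded records are what would actually live in the database; only the decode step needs to know the real column names.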
This is a change to the way we store data, so it will require migrating or clearing any existing databases.
In observations we can remove the reserved key dataset_observation_id; it fulfills the same function as dataset_id.
We can tie all our batch* functions more closely to the Observation model; we do not expect anyone else to use these (with the possible exception of batch_read).
Any encoding/decoding should happen at the batch level, to isolate it from the rest of the system (it feels like a completely independent mix-in could be in order).
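A mix-in along those lines might look like the following sketch: the batch* methods do all the encoding/decoding, queries are translated to coded keys on the way in, and the rest of the system never sees the codes. The class and method names here are assumptions for illustration, not the actual Observation model API.

```python
class KeyCodingMixin:
    """Encode record keys before storage, decode them after reads.

    Subclasses supply _encoding() (column name -> coded key) plus the
    raw storage hooks _store() and _find(); all names are illustrative.
    """

    def batch_save(self, records):
        encode = self._encoding()
        self._store([{encode.get(k, k): v for k, v in r.items()}
                     for r in records])

    def batch_read(self, query):
        encode = self._encoding()
        decode = {code: name for name, code in encode.items()}
        coded_query = {encode.get(k, k): v for k, v in query.items()}
        return [{decode.get(k, k): v for k, v in r.items()}
                for r in self._find(coded_query)]


class InMemoryObservations(KeyCodingMixin):
    """Toy in-memory stand-in for the observations collection."""

    def __init__(self, columns):
        self._codes = {name: str(i) for i, name in enumerate(columns)}
        self._records = []

    def _encoding(self):
        return self._codes

    def _store(self, records):
        self._records.extend(records)

    def _find(self, query):
        return [r for r in self._records
                if all(r.get(k) == v for k, v in query.items())]


obs = InMemoryObservations(["rating", "amount"])
obs.batch_save([{"rating": "good", "amount": 2},
                {"rating": "bad", "amount": 3}])
# Stored records use coded keys, e.g. {"0": "good", "1": 2}.
print(obs.batch_read({"rating": "good"}))
```

Since callers only ever touch batch_save/batch_read, the coding scheme could later change (or be dropped) without touching anything outside the mix-in.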