In scraperwiki this is the save(unique_keys=[], data={}) function.
It performs the following distinct processes:
1. create table if not exists
2. add columns to the existing table [done with _ensure_columns]
3. create an index for the unique_keys [method from scraperwiki, not webstore]
4. update_row (or insert the row) on the table
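The four steps above can be sketched against SQLite roughly as follows. This is a hedged illustration, not the scraperwiki internals: the table name, the use of `INSERT OR REPLACE`, and the inline DDL are all assumptions.

```python
import sqlite3

def save(unique_keys, data, table="swdata", conn=None):
    # Illustrative sketch of the four steps; names and SQL are assumptions.
    conn = conn or sqlite3.connect(":memory:")
    cur = conn.cursor()
    # 1. create table if not exists (seeded with one column so the DDL is valid)
    first_col = next(iter(data))
    cur.execute('CREATE TABLE IF NOT EXISTS "%s" ("%s")' % (table, first_col))
    # 2. add columns missing from the existing table (the _ensure_columns step)
    existing = {row[1] for row in cur.execute('PRAGMA table_info("%s")' % table)}
    for col in data:
        if col not in existing:
            cur.execute('ALTER TABLE "%s" ADD COLUMN "%s"' % (table, col))
    # 3. create a unique index over unique_keys so step 4 can detect conflicts
    if unique_keys:
        cur.execute('CREATE UNIQUE INDEX IF NOT EXISTS "%s_uq" ON "%s" (%s)' % (
            table, table, ", ".join('"%s"' % k for k in unique_keys)))
    # 4. update-or-insert: INSERT OR REPLACE resolves against the unique index
    cols = list(data)
    cur.execute('INSERT OR REPLACE INTO "%s" (%s) VALUES (%s)' % (
        table,
        ", ".join('"%s"' % c for c in cols),
        ", ".join("?" for _ in cols)),
        [data[c] for c in cols])
    conn.commit()
    return conn
```

Saving twice with the same unique key value then leaves a single, updated row.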
Unique keys are fetched from the query string via request.args.getlist('unique')
The scraperwiki version takes an extra jargtypes={key: type} argument to define the types of the columns and avoid the derivation of variable types happening on the wrong side of the JSON interface.
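To illustrate why the explicit types matter: JSON only carries strings, numbers, booleans and null, so the receiving side has to guess column types from the decoded values unless jargtypes travels alongside. The type vocabulary below is an assumption, not the actual names scraperwiki/webstore accept.

```python
import json

# A client-side round trip; the jargtypes values are illustrative only.
row = {"id": 1, "fetched": "2011-07-01", "ratio": 0.5}
jargtypes = {"id": "integer", "fetched": "date", "ratio": "real"}
wire = json.dumps({"unique_keys": ["id"], "data": row, "jargtypes": jargtypes})

received = json.loads(wire)
# Types the server side could derive from the JSON values alone:
derived = {k: type(v).__name__ for k, v in received["data"].items()}
# "fetched" arrives as a plain str; only the jargtypes hint says it is a date.
```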
Option 1:
Combine steps 1, 2 and 3 into one table-schema-updating function; step 4 then becomes a normal SQL update function
Option 2:
Get the client to fetch the schema from webstore at the start and then perform all the computation there to generate the schema updates
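Option 2 could be sketched like this, assuming the client can GET the current column list first; the function name and the shape of the schema data are assumptions.

```python
def schema_updates(current_columns, rows):
    # Diff the fetched schema against the rows about to be saved and return
    # the columns the client must ask webstore to add, in first-seen order.
    seen, missing = set(current_columns), []
    for row in rows:
        for col in row:
            if col not in seen:
                seen.add(col)
                missing.append(col)
    return missing
```

The client would issue the resulting schema update once up front, after which every save is a plain SQL update; Option 1 performs the same diff on the server side instead.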
The current upsert function allows bulk uploading of records directly from a CSV file.
The proposal is to keep this function but rewrite it to expose two further functions in the interface:
A schema update function, which is like making the following endpoint writable:
GET /{user-name}/{db-name}/{table-name}/schema
This is not the same as exposing the "CREATE TABLE IF NOT EXISTS" function because that function does not alter tables. It would provide a route towards extra altering functions, such as dropping columns or changing their type -- features which are not given by the underlying database.
This function takes the jargtypes thing as it stands.
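One possible shape for the document that GET .../schema serves, and that a writable counterpart could accept in a PUT body, is sketched below. The field names are assumptions, not the webstore wire format; note that including unique_keys here is exactly the open question raised further down.

```python
import json

# Hypothetical schema document for /{user-name}/{db-name}/{table-name}/schema;
# every field name and type name here is an assumption.
schema = {
    "table": "weather",
    "columns": {"station": "text", "reading": "real", "taken": "datetime"},
    "unique_keys": ["station", "taken"],
}
body = json.dumps(schema, indent=2)  # what a PUT to .../schema might carry
```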
An upsert() function taking a list of data rows (filling in missing elements with nulls).
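The null-filling step could look like the following minimal sketch; pad_rows is a hypothetical helper, not part of webstore.

```python
def pad_rows(rows):
    # Take the union of keys across all rows (in first-seen order) and fill
    # absent cells with None, which becomes SQL NULL on insert.
    columns = []
    for row in rows:
        for col in row:
            if col not in columns:
                columns.append(col)
    return columns, [[row.get(col) for col in columns] for row in rows]
```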
[Discussion to follow of whether unique_keys information is part of the schema, or part of the upsert function.]
Function is here:
https://github.com/okfn/webstore/blob/master/webstore/views.py#L203