Converts a CSV containing page data to WorldBrain web extension compatible JSON data. The JSON data can then be imported into the extension's PouchDB via its user interface.
With yarn:
$ yarn global add worldbrain-data-converter
Or with npm:
$ npm i -g worldbrain-data-converter
To convert a CSV containing relevant page data fields:
$ worldbrain-data-converter -i /path/to/file.csv -o /path/to/output.txt
Or simply redirect input and output via stdin/stdout:
$ worldbrain-data-converter < /path/to/file.csv > /path/to/output.txt
To output batches of 10000 docs/lines to files named `output_aa`, `output_ab`, etc.:
$ worldbrain-data-converter < /path/to/file.csv | split -l 10000 - output_
Full options:
$ worldbrain-data-converter --help
Usage: worldbrain-data-converter
-o, --output=ARG name of output file (default: stdout)
-i, --input=ARG name of input CSV file (default: stdin)
-v, --maxVisits=ARG max number of visit docs to generate per page (default: 10)
-b, --bookmarkChance=ARG percentage chance that a bookmark doc will be created for input row (default: 1)
-s, --imports schedule converted docs for imports for later filling out
-h, --help display this help
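For example, to generate up to 5 visit docs per page and give each row a 20% chance of getting a bookmark doc (the file names here are placeholders):
$ worldbrain-data-converter -i pages.csv -v 5 -b 20 -o pages.json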
The input should be a CSV containing the columns `url`, `title`, and `body`. From each row in the CSV, the following WorldBrain data will be produced:
- 1 page doc
- 0-`maxVisits` visit docs
- 0-1 bookmark docs (chance to generate specified via `bookmarkChance`)
`body` is the page text content.
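For a sense of scale: with the defaults (`maxVisits` of 10, `bookmarkChance` of 1%), a 1000-row CSV yields 1000 page docs, anywhere from 0 to 10000 visit docs, and roughly 10 bookmark docs.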
Later there may be an option to accept other fields, or to leave them out (only `url` really matters).
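As a sketch, a minimal input file might look like this, assuming a header row with the column names above (the values themselves are made up):
$ cat /path/to/file.csv
url,title,body
https://example.com/page,Example Page,Some extracted page text content.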
The output is a newline-delimited JSON file, with each line containing one JSON object representing a document in the web extension's PouchDB. Data will be derived from the input file, and also generated for fields that are unavailable.
The following data is generated:
- `_id` for all doc types
- `content.keywords` for page docs
- `content.description` for page docs
Later there may be a flag to disable generation of these, along with support for them in the input file format.
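For a rough idea of the shape, a single output line for a page doc might look something like the following. This is a hypothetical sketch: only `_id`, `content.keywords`, and `content.description` are confirmed above, and the exact structure of the extension's PouchDB docs may differ.
$ head -n 1 /path/to/output.txt
{"_id":"page/<generated-id>","content":{"title":"Example Page","keywords":["example","page"],"description":"Some extracted page text content."}}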
As the extension currently runs entirely within the browser, things like memory management with file IO are painful. It's recommended to pipe the output of this script to something like `split` to break the output into multiple files. Splitting should happen per line, as the output format is newline-delimited JSON; splitting by bytes is not handled by the extension's import process.
Example:
$ worldbrain-data-converter < /path/to/file.csv | split -l ${DOCS_PER_FILE} - ${OUTPUT_FILE_PREFIX}
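After splitting, each output file should hold at most ${DOCS_PER_FILE} complete JSON lines; a quick sanity check with standard coreutils (nothing here is specific to this tool):
$ wc -l ${OUTPUT_FILE_PREFIX}*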