Reading CSV files #14
I would really like to see something like this: while the CSV and JSON importers are useful, they are not generic enough to import arbitrary data. I was playing around recently and wanted to do a bulk import of roughly 2GB of data. It would be nice to be able to load and process such a file directly from within arango, but the File.read function in the … It would be great to have a more versatile …
I pre-process my raw data to CSV and then use the importer. Works fine. ;) What would the benefit be of moving this into ArangoDB?
Hi, I would like to chime in, but I am not sure what you mean by "What would the benefit be of moving this into ArangoDB?" 😄 Can you elaborate on what you want to do, what you did, and what the last sentence means? 😃
Hi Frank, if I understood correctly, @a2800276 wants to be able to process his raw data and enter it into the db from within arangosh. I was wondering why the devs should invest time in this feature, since one can easily process raw data into CSV/JSON (via any language, say PHP or Python, or Bash) and use the already working importer.
Oh, yes, didn't notice the different users 😄 . Yes of course, I totally agree with @rotatingJazz on that.
To me it seems very elaborate to preprocess data that may or may not be in a form suitable for CSV/JSON, transforming it into a different format, and throwing that against a (functionally restricted) import script which then uses HTTP to import individual records into the database. When instead I could be reading and transforming arbitrarily formatted files from within the DB and have a much more efficient workflow, both from the "programmer efficiency" point of view and in terms of performance.

What I was trying to do concretely: re-implement a toy project to play with graph functionality, which I have working for neo4j, in arango. I'd like to import the Wikipedia inter-page links and play around with that dataset. The dump of that data is 4GB (in the form of mysql INSERT statements). If I can avoid it, I don't want to preprocess 4GB of data into 3GB of some other data that I can then import, when I could import it directly in ~half the time.

More generally: since arango wants to become a general purpose deployment platform with Foxx, it will certainly need some rudimentary file IO implementation. As it's currently implemented, File.read is utterly useless apart from reading tiny toy files.
It might be interesting to have some reference data that one could try to …
We'll eventually have an implementation of Buffer, which will allow us to read binary files and process them in chunks from JavaScript. Until that's available, I think there are two alternatives available, at least for processing CSV and JSON files. Example invocation for CSV files (filename and callback body here are illustrative):

```
var internal = require("internal");

internal.processCsvFile("data.csv", function (row, index) {
  // "row" is an array with the values of the current line
  internal.print(row);
});
```

And for processing JSON files:

```
var internal = require("internal");

internal.processJsonFile("data.json", function (doc, index) {
  // "doc" is the object parsed from the current line
  internal.print(doc);
});
```
Closed because processCsvFile and processJsonFile are doing what I intended. |
The two methods processCsvFile and processJsonFile are very useful. Maybe something for writing will be needed later as well.
Maybe it makes sense to pack them into a module? In Node, a similar module is simply called fs, which is certainly not a bad name!
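As a sketch of one helper such an fs-like module could grow, here is a plain-JavaScript CSV line splitter; the function name and behavior are illustrative only, not an existing ArangoDB API:

```javascript
// Hypothetical helper: split one CSV line into fields, honoring quoted
// fields and doubled quotes ("") inside them. Does not handle embedded
// newlines, which is enough for simple line-oriented data.
function splitCsvLine(line, separator) {
  separator = separator || ",";
  var fields = [];
  var current = "";
  var inQuotes = false;
  for (var i = 0; i < line.length; i++) {
    var ch = line.charAt(i);
    if (inQuotes) {
      if (ch === '"') {
        if (line.charAt(i + 1) === '"') {
          current += '"'; // escaped quote inside a quoted field
          i++;
        } else {
          inQuotes = false;
        }
      } else {
        current += ch;
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === separator) {
      fields.push(current);
      current = "";
    } else {
      current += ch;
    }
  }
  fields.push(current);
  return fields;
}
```

For example, `splitCsvLine('a,"b,c",d')` returns `["a", "b,c", "d"]`.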