
Make FAO parser more modular [should we do it now?] #52

Open
jacob-nooz opened this issue Jul 17, 2019 · 9 comments
@jacob-nooz
Contributor

Expected Behavior / Situation

One parser.js file that can dynamically convert csv files from any directory into a json file in that same directory.

Actual Behavior / Situation

One parser.js file that can convert csv files from its own directory into a json file in that same directory.

Modification Proposal

Let's discuss this, but one way forward would be to write a script for parser.js that takes an argument—the path to the directory containing the target csv file—and then outputs a json version of that file in the given directory.

@jacob-nooz jacob-nooz changed the title Make FOA parser more modular Make FAO parser more modular Jul 17, 2019
@jacob-nooz
Contributor Author

@atherdon @AndrewSerra Here's a continuation from the last FAO issue.

@atherdon
Collaborator

#27
Ok, so what exactly will our steps be in order to do it?

How I see it: we have a few files with CSV data inside. Some of them might be in one place, some in another. I think in this repo, or in the generator, we have a method that can grab all the files inside a directory. I think we use it for JSON files, but it should work for us as well, maybe with some modifications.

Do we need to pass an object/archive with the path and name of each file, or can we do it dynamically?

The next thing is how the files will be stored once parsed and saved. Or should we also pass a path where each output file will be located? What is the best approach?

What about errors? Imagine that the first file is parsed and its headers work well, but the second one crashes. What will we need to do?

How will we use this parser? I mean, we have 3 different datasets. Do we need to create 3+ lines in package.json in order to call it, or how can we make it work outside of this main module?

We should also discuss which methods we need to move out of our parser.js files into our main src/... folder in order to make the code cleaner and easier to work with.

We also need to create tests for those new methods, because we have a lot of people here; small changes are constant, and everything can break after one mistake in one merge.

@AndrewSerra
Contributor

I'm a bit confused in general. Are we pursuing the single parser? If so, I think we can handle all concerns one by one. Eventually, it will work.

@atherdon
Collaborator

Right now we're exploring what needs to be done in order to make it happen, but for now it's just a discussion @AndrewSerra

@jacob-nooz
Contributor Author

I'm also a bit confused. Just to clarify:

#27
Ok, so what exactly will our steps be in order to do it?

How I see it: we have a few files with CSV data inside. Some of them might be in one place, some in another. I think in this repo, or in the generator, we have a method that can grab all the files inside a directory. I think we use it for JSON files, but it should work for us as well, maybe with some modifications.

Do we need to pass an object/archive with the path and name of each file, or can we do it dynamically?

This method will grab all of the csv (or json) files inside of the repository? I didn't see it in this repo, but I'll keep looking in food-static-files-generator.

The next thing is how the files will be stored once parsed and saved. Or should we also pass a path where each output file will be located? What is the best approach?

I think if we can access all the csv files dynamically, then it would be nice to write the json files dynamically as well (i.e. write them in their proper directories without passing any path arguments to the method). Otherwise, I think we should pass path arguments for the source and target directories.

What about errors? Imagine that the first file is parsed and its headers work well, but the second one crashes. What will we need to do?

I'll think on this and get back to you if I have any ideas before we try implementing.

How will we use this parser? I mean, we have 3 different datasets. Do we need to create 3+ lines in package.json in order to call it, or how can we make it work outside of this main module?

One line in the package.json sounds preferable to me, but it would be helpful to have more context about how you see this parser being used as a node module. Could you please explain how you see this code being used by others?
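For illustration, the kind of single package.json entry being discussed might look like this (the script name and path here are made up):

```json
{
  "scripts": {
    "parse": "node src/parser.js path/to/dataset-directory"
  }
}
```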

We should also discuss which methods we need to move out of our parser.js files into our main src/... folder in order to make the code cleaner and easier to work with.

I imagine we'll eventually want to move out getHeaders(), but this seems contingent upon how we proceed with all of the above.
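For reference, a minimal sketch of what an extracted getHeaders() could look like (the real parser.js version may differ):

```javascript
// Illustrative sketch only -- the actual getHeaders() in parser.js may differ.
// Pull the column names out of the first line of a CSV string.
function getHeaders(csvText) {
  return csvText.split('\n')[0].split(',').map(h => h.trim());
}
```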

We also need to create tests for those new methods, because we have a lot of people here; small changes are constant, and everything can break after one mistake in one merge.

Definitely. I've never written tests for filesystem code, but it sounds fun.

@atherdon
Collaborator

This method will grab all of the csv (or json) files inside of the repository? I didn't see it in this repo, but I'll keep looking in food-static-files-generator.

We have a method for JSON files, but I think with small modifications it can handle CSV as well.


I think if we can access all the csv files dynamically, then it would be nice to write the json files dynamically as well (i.e. write them in their proper directories without passing any path arguments to the method). Otherwise, I think we should pass path arguments for the source and target directories.

That would be great, because our main goal is not just to parse, but to create JSON files the way we need them (sometimes that requires some modifications to the structure, for example adding an ID or a created_at datetime field).


I'll think on this and get back to you if I have any ideas before we try implementing.

We need to make a separate method that will parse/check each file and display any errors it hits.

We'll call this method inside the loop that iterates over all the files we have.
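A sketch of that idea (parseFile is an assumed helper, not real code): validate each file in its own try/catch so one broken CSV doesn't stop the whole batch, then report the failures at the end.

```javascript
// Hypothetical per-file error handling -- illustrative, not the real code.
// `parseFile` is an assumed helper that parses one file or throws.
function parseAll(files, parseFile) {
  const results = [];
  const errors = [];
  for (const file of files) {
    try {
      results.push(parseFile(file));
    } catch (err) {
      errors.push({ file, message: err.message });
    }
  }
  // Report failures, but keep the files that parsed successfully.
  errors.forEach(e => console.error(`Failed to parse ${e.file}: ${e.message}`));
  return { results, errors };
}
```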


One line in the package.json sounds preferable to me, but it would be helpful to have more context about how you see this parser being used as a node module. Could you please explain how you see this code being used by others?

I think sooner or later we'll have that "one line for the parser in package.json", but there's a lot of work to do before then.
This module is published on npm, and it can be easily installed into any Node.js project, just like any other package. That's why I like this configuration, actually :)

I see it like this: someone has a dataset. They create a basic Node.js server and install our csv-parser. They call our function once, and it generates a JSON file that can be used as a file DB or later imported into Mongoose.


I imagine we'll eventually want to move out getHeaders(), but this seems contingent upon how we proceed with all of the above.

We'll do it when you think it's ready. Why keep a tool that other datasets could also use inside one FAO file?


Definitely. I've never written tests for filesystem code, but it sounds fun.

Tests should be an important part of a coder's process, especially now that we have a lot of cool tools that help us fight bugs.
We'll focus on this soon.

@atherdon
Collaborator

I think this method can be an example: https://github.com/GroceriStar/food-static-files-generator/blob/master/src/objects.js#L14
There's a readAll method there that grabs all the files inside.

atherdon added a commit that referenced this issue Jul 18, 2019
@atherdon atherdon reopened this Jul 18, 2019
@atherdon atherdon reopened this Oct 16, 2019
@atherdon atherdon changed the title Make FAO parser more modular Make FAO parser more modular [should we do it now?] Oct 16, 2019
@jacob-nooz
Contributor Author

jacob-nooz commented Oct 16, 2019 via email

@atherdon
Collaborator

atherdon commented Oct 16, 2019 via email
