
Improvements and additions to gtools #30

Open · mcaceresb opened this issue Nov 20, 2017 · 6 comments

@mcaceresb (Owner)
A lengthy discussion on improvements and additions to gtools started in issue #28, but it is more appropriate to have a separate thread for it. The main idea currently under discussion is a gtools API, which would consist of various wrappers around the core functionality of gtools.

I am not sure the Stata portion of the API will be as useful as the ftools analogue, due to the way the Stata Plugin Interface works (it is what I have to use to interact with Stata from C). However, it might be useful in ways I have not considered, hence this thread (I am also thinking of creating a C library based on this plugin, which would be useful for people who aim to write C plugins in the future).
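For concreteness, here is a minimal sketch of how a plugin reads data through that interface, following the stplugin.h conventions from Stata's plugin documentation. This is an illustration, not gtools source: the plugin sees every variable passed to it as a double, read one cell at a time.

```c
#include <stdlib.h>
#include "stplugin.h"

STDLL stata_call(int argc, char *argv[])
{
    ST_int    j, nobs = SF_in2() - SF_in1() + 1;
    ST_double z, *data = calloc(nobs, sizeof(ST_double));
    if (data == NULL) return (920);              /* abort on failed alloc */

    for (j = SF_in1(); j <= SF_in2(); j++) {     /* honor the [in] range  */
        if (SF_ifobs(j)) {                       /* honor the [if] filter */
            SF_vdata(1, j, &z);                  /* variable 1, row j     */
            data[j - SF_in1()] = z;
        }
    }

    /* ... hashing, sorting, etc. happen here, on the C-side copy ... */

    free(data);
    return (0);
}
```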

Feel free to post suggestions or comments on what you would like to see in a gtools API here, as well as any other feature requests that you don't think merit their own thread. This issue will remain open past version 1.0, since an API won't make it into that release.

@mcaceresb (Owner)

Here are some ideas I have for an API. All the functions (except gisid, which executes steps 1-4 but diverges afterward) share a common skeleton:

  1. Read the data
  2. Determine hashing strategy (this includes an "is sorted" check)
  3. Hash
  4. Sort hash (keeping track of how it maps to Stata row numbers)
  5. Panel setup
  6. Check for collisions
  7. Sort the groups (with index)
  8. Map sorted group index to sorted hash index
  9. Function-specific stuff

Steps 2, 3, 6, and 7 require a copy of the data to be available to C in memory. Saving the results of steps 3, 4, 5, or 6-8 would require creating variables in Stata in addition to allocating memory in C. Interacting with Stata also carries an inefficiency throughout: doubles must be cast to and from 64-bit integers. To call from C directly, there would have to be a generic way to load the data into memory. Some things I could write (the hash-sort and panel-setup pieces are sketched after this list):

  • Is the data sorted? Executes step 1 and checks whether the data are sorted. Returns yes or no.

  • Is the bijection OK? Executes step 1 and checks whether the key variables can be bijected into a 64-bit integer. Returns yes or no.

  • Hash. Creates 3 variables from Stata and executes steps 1-3. The first two variables need to be doubles and store either the bijection (with the second left empty) or the two 64-bit halves of the spookyhash. The third variable can be long or double and is the index mapping the hash to the Stata observations (in case the user passes [if] [in] or drops missing rows).

  • Hash sort. Either creates 3 variables from Stata and executes steps 1-4, or picks up from the hash step above and executes step 4. It sorts the hash and stores the sorted hash along with the index.

  • Panel setup. Either creates 2 variables from Stata and executes steps 1-5, or creates 1 variable and picks up from the hash sort step above and executes step 5. This step creates the index into the Stata observations, if it does not already exist, and stores in the first J observations the start points of the grouped data.

  • Check for collisions + sort groups + map the sorted group index to the hash index. This can pick up from the panel setup step by creating one extra variable (which will be the sort order of the groups); it would re-read the observations into memory, check for collisions, sort the groups, and store the sort index. It can also execute steps 1-8 directly after creating 3 variables from Stata.

  • Various mathematical functions that I use internally (e.g. functions to compute quantiles).
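As a rough illustration of the hash sort and panel setup steps (4-5 above), plus the 64-bit-payload-in-a-double trick, here is a sketch. This is my own illustration, not the actual gtools internals, and it assumes each hash half is moved bit-for-bit through a double:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round-trip a 64-bit hash half through a double without rounding by
 * reinterpreting the bits (a value cast loses precision above 2^53).
 * Whether arbitrary bit patterns survive storage in an actual Stata
 * double variable is a separate question; this is only the C-side trick. */
static double pack_u64(uint64_t h)
{
    double d;
    memcpy(&d, &h, sizeof d);
    return d;
}

static uint64_t unpack_u64(double d)
{
    uint64_t h;
    memcpy(&h, &d, sizeof h);
    return h;
}

typedef struct {
    uint64_t hash;   /* hash (or bijection) of the key variables */
    size_t   index;  /* Stata row the hash came from             */
} hash_pair;

static int cmp_hash(const void *a, const void *b)
{
    uint64_t ha = ((const hash_pair *)a)->hash;
    uint64_t hb = ((const hash_pair *)b)->hash;
    return (ha > hb) - (ha < hb);
}

/* Step 4: sort the hashes, carrying the Stata row index along.
 * Step 5: scan for group boundaries; returns J, the number of groups,
 * with starts[0..J-1] holding the start point of each group. */
static size_t panel_setup(hash_pair *pairs, size_t n, size_t *starts)
{
    size_t i, J = 0;
    qsort(pairs, n, sizeof *pairs, cmp_hash);
    for (i = 0; i < n; i++)
        if (i == 0 || pairs[i].hash != pairs[i - 1].hash)
            starts[J++] = i;
    return J;
}
```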

@wbuchanan

Have you checked out the ReadStat library? It is the underlying C library used by the haven package in R, and it reads/writes R, SPSS, SAS, and Stata datasets. Perhaps that would be a way to load data into memory? I’m not sure how garbage collection works with the C API, but if the objects can persist beyond a single call, it might make it possible to load multiple datasets simultaneously. I’m not familiar with C at all, or I would offer to help where I can.

@mcaceresb (Owner) commented Feb 8, 2019

I have this on my list of things to check out. I am not sure it will drastically improve gcollapse or greshape (the main issue there is the inability to create/drop observations and variables in memory). However, I am planning to implement gmerge at some point, and I think the way to go is to read the using data via ReadStat, if I can manage it.

EDIT: Actually, it should improve them a lot, now that I think about it. If I can save the characteristics of the dataset in memory, save the results from gcollapse/greshape to disk, then do `use results, clear` and apply the chars/labels/etc., it should be much faster than calling the C API twice.
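For reference, ReadStat parses by invoking user-supplied callbacks per variable and per value. A minimal sketch along the lines of the example in the ReadStat README (handler and accessor names as documented there, which may differ across versions; `using.dta` is a placeholder for the dataset gmerge would read):

```c
#include <stdio.h>
#include "readstat.h"

/* Called once per variable: a place to record names and types. */
static int handle_variable(int index, readstat_variable_t *variable,
                           const char *val_labels, void *ctx)
{
    printf("variable %d: %s\n", index, readstat_variable_get_name(variable));
    return READSTAT_HANDLER_OK;
}

/* Called once per cell: a place to copy values into C memory. */
static int handle_value(int obs_index, readstat_variable_t *variable,
                        readstat_value_t value, void *ctx)
{
    if (readstat_value_type(value) == READSTAT_TYPE_DOUBLE &&
        !readstat_value_is_missing(value, variable))
        printf("obs %d: %g\n", obs_index, readstat_double_value(value));
    return READSTAT_HANDLER_OK;
}

int main(void)
{
    readstat_parser_t *parser = readstat_parser_init();
    readstat_set_variable_handler(parser, &handle_variable);
    readstat_set_value_handler(parser, &handle_value);
    readstat_error_t err = readstat_parse_dta(parser, "using.dta", NULL);
    readstat_parser_free(parser);
    return err != READSTAT_OK;
}
```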

@wbuchanan

If you were using Java I might be able to help a bit more, since that is what I’m more familiar with, but I’ve also been experimenting with doing some of this directly in Mata.

@mcaceresb (Owner)

@wbuchanan Do you know if it is possible to read data directly from disk when using Java?

@wbuchanan

@mcaceresb
I had started working on some Java-based dta parsers a while ago but didn’t get too far, and I haven’t been able to put much work into them since. I do know there is a project at Harvard that I’ve starred which has Java parsers for their data repository project (I think IQSS is the user account).
