Read and write data by cases (RFC). Close #43 #49
Conversation
… the header instead of fully re-write it
Thanks for your pull request, I'll check it as soon as possible. Could you check why the tests did not pass?
I think there is a difference in the generated data.sav file. The file generated by the new procedure is not the same as the existing file from the example. However, if I regenerate the file using the current procedure, I get the same file that I get using the new procedure. So, my theory is one of these two possibilities:
OK, I found the cause of the problem: it is issue #46. The file data.sav was generated before that exclusion was made. In fact, if in the master version you run the write.php example and then try to run the read.php example, you will see that you cannot read what you just generated.
So, please merge #47 and then we can evaluate whether this still fails afterwards. It is difficult for me to see whether there are any other problems while this one is not fixed.
@lestcape Wow, is this the SPSS appending mode?
@IMlcl It's not exactly that feature, but I think you can easily turn it into that if it's what you want. Your appending-mode implementation seems to be something that works with an arbitrary number of cases (rows), while with this implementation you can only append one case at a time. Anyway, if you ask me, your implementation complicates the appending mode a lot, because you are working with more than one sav file at a time; my vision is not to append one file to another, but instead to append data (a case) to the file.
This adds the possibility to read and write data case by case, without adding much more complexity.
The good thing is that you don't need to have the whole dataset loaded in RAM at once; you only need the file metadata and the current case.
In my opinion, the limitation of this implementation is that it is not possible to move the buffer to an arbitrary case, so you are forced to read/write the data in the order it comes from your provider. Solving the problem of moving the buffer to the case you want is complex, would consume much more RAM, and would also slow down read/write performance.
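To make the intended flow concrete, here is a minimal sketch of how case-by-case writing and reading could look from the caller's side. The class and method names (Writer, writeCase, Reader::fromFile, readCase, close) and the constructor shape are illustrative assumptions about this RFC's API, not the final interface.

```php
<?php
// Hypothetical sketch of the case-by-case API proposed in this RFC.
// The method names (writeCase, readCase, close) and the constructor
// arguments are illustrative assumptions only.
use SPSS\Sav\Writer;
use SPSS\Sav\Reader;

// Write: emit the header/metadata once, then append each case as it
// arrives, so only one row has to be in memory at any time.
$writer = new Writer([
    'header'    => ['prodName' => '@(#) SPSS DATA FILE'],
    'variables' => [
        ['name' => 'ID',   'format' => 5],               // numeric
        ['name' => 'NAME', 'format' => 1, 'width' => 8], // string
    ],
]);
foreach ($rowProvider as $row) {   // $rowProvider: any iterable of rows
    $writer->writeCase($row);      // e.g. [1, 'Alice']
}
$writer->close('data.sav');        // finalize the case count in the header

// Read: decode cases one at a time, in file order only, because the
// buffer cannot be moved back to an arbitrary case.
$reader = Reader::fromFile('data.sav');
while (($case = $reader->readCase()) !== null) {
    // process $case without holding the whole dataset in RAM
}
```

This is exactly the constraint described above: cases can only be consumed in order, which is usually fine when the rows come from a database cursor or another streaming source.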
Some code that has not landed yet:
We don't need to rewrite the whole header to add a new case; it is only necessary to overwrite the number of cases in the header record, since this field is always at the same position (see the sketch after these notes).
We don't need to keep both ways of reading and writing files, as this creates confusion.
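As a rough illustration of the first point above, this is a minimal sketch of updating only the case count in an existing .sav header instead of rewriting the whole header. It assumes the standard header layout (4-byte "$FL2" record type, 60-byte product string, then four 4-byte integers), which puts the case-count field at byte offset 80, and little-endian integers; both assumptions should be checked against what this library's Writer actually emits.

```php
<?php
// Sketch: overwrite only the number-of-cases field in an existing .sav
// header, instead of rewriting the whole header.
// Assumed layout: 4 (rec_type "$FL2") + 60 (prod_name) + 4 (layout_code)
// + 4 (nominal_case_size) + 4 (compression) + 4 (weight_index) = 80,
// so the ncases int32 starts at byte offset 80. Verify against the
// Writer's real output before relying on it.
function overwriteCaseCount(string $file, int $newCount): void
{
    $ncasesOffset = 80;                 // assumed fixed position of ncases
    $fp = fopen($file, 'r+b');          // open for in-place update
    if ($fp === false) {
        throw new RuntimeException("Cannot open $file");
    }
    fseek($fp, $ncasesOffset);
    fwrite($fp, pack('V', $newCount));  // 'V' = little-endian uint32
    fclose($fp);
}

// Typical use after appending one more case to the end of the data record:
// overwriteCaseCount('data.sav', $previousCount + 1);
```

Because the field has a fixed size and position, this in-place update avoids re-serializing the variable records and keeps the append path cheap.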