Bugfix: Databook Write #18
Comments
I remember encountering this problem years ago and deciding that our team would just use a different delimiter for run matrix keys, e.g. …
Should be fixed with commit … There's an update to …
Thanks Derek. Now a list of strings, when read from a CSV, is encoded in CAPE as "['one', 'two']", which writes correctly to CSV format. However, a list of values would be encoded as '[1, 2]'; this is not recognized as a string in CSV and is still delimited (split) by the comma. What we'd want is something like "[1, 2]". With that in mind, line 3481 of cfdx/dataBook.py might benefit from this change: … The downside is that this will break if the number of apostrophes isn't 2. So maybe there's a more robust way to code it. What do you think?
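For reference, the standard-library `csv` module already handles the splitting problem described above by quoting any field that contains the delimiter. This is a minimal sketch (not CAPE's actual writer) showing that a stringified list with commas survives a write/read round trip intact:

```python
import csv
import io

# A run-matrix row where one cell is a stringified list containing commas.
row = ["case01", "[1, 2]"]

# csv.writer with QUOTE_MINIMAL quotes any field containing the delimiter,
# so the list is written as a single cell instead of being split.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
text = buf.getvalue().strip()
# text == 'case01,"[1, 2]"'

# Reading it back recovers the original two cells.
cells = next(csv.reader(io.StringIO(text)))
# cells == ["case01", "[1, 2]"]
```

The quoting is symmetric, so no apostrophe-counting is needed on either side.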
This makes me a bit uncomfortable because there's a silent transition from a list to a string. So in your example, writing out …
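One way to make that transition explicit rather than silent (an illustration, not a claim about what CAPE does internally) is to serialize list-valued cells as JSON, which round-trips back to a list, whereas Python's `repr()` output does not:

```python
import json

# Encode a list-valued cell as JSON rather than via repr().
cell = [1, 2]
encoded = json.dumps(cell)      # '[1, 2]'

# Decoding restores the original list, so the type round-trips
# instead of silently becoming a string.
decoded = json.loads(encoded)
assert isinstance(decoded, list)

# repr() of a list of strings uses single quotes, which is not
# valid JSON and cannot be decoded back automatically.
r = repr(["one", "two"])        # "['one', 'two']"
```

A reader could then distinguish "cell is a list" from "cell is a string that looks like a list" at decode time.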
Ok, I pushed a new commit …
Just to be clear, all of the lists in my run matrix are indeed written as strings. I've been using …
Ok, so I now understand the issue to be that single quotes from Python's …
I noticed that when I have custom columns in my run matrix that use strings containing a comma, CAPE cannot write databook files correctly. In write() in cape/cfdx/dataBook.py, the run matrix cells are correctly read and interpreted, but on the write command the comma inside the string is treated as a delimiter. The result is that the strings are split into multiple columns, which breaks anything that needs to process databooks.
I wonder if there's a way to make sure that strings are recognized and not split up. Another approach could be to use pandas to read/write CSV files, but that's probably a longer-term solution. I'm sure it would allow you to eliminate some code.
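The pandas idea could look roughly like this (a sketch with made-up column names, not a proposal for CAPE's actual schema); `to_csv` quotes comma-containing cells and `read_csv` undoes the quoting, so the round trip preserves the string:

```python
import io
import pandas as pd

# A databook with a string column whose cells contain commas.
df = pd.DataFrame({
    "case": ["m0.8a2.0"],            # hypothetical case name
    "tags": ["['one', 'two']"],      # stringified list with commas
})

# to_csv automatically quotes fields containing the delimiter ...
buf = io.StringIO()
df.to_csv(buf, index=False)

# ... and read_csv strips the quoting, so the cell is not split.
buf.seek(0)
df2 = pd.read_csv(buf)
```

This would also replace the hand-rolled parsing/writing code, at the cost of adding pandas as a dependency.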