add the option for json.dumps to return newline delimited json #78710
Comments
Many service providers, such as Google BigQuery, do not accept regular JSON; they require newline-delimited JSON instead: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json#limitations Please allow json.dumps to produce this format directly. |
So this format is just a series of JSON documents, delimited by a newline:

```python
def ndjsondump(objects):
    return '\n'.join(json.dumps(obj) for obj in objects)
```

Conversely, to read it back you split on newlines and load each line. Does that satisfy your needs? |
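That suggestion can be made self-contained as a small round-trip sketch (the names `ndjsondump` and `ndjsonload` are hypothetical, not part of the json module):

```python
import json

def ndjsondump(objects):
    # One compact JSON document per line; the compact separators
    # guarantee no embedded newlines inside a document.
    return '\n'.join(json.dumps(obj, separators=(',', ':')) for obj in objects)

def ndjsonload(text):
    # Split on newlines and parse each non-blank line independently.
    return [json.loads(line) for line in text.splitlines() if line]

records = [{'a': 1}, {'b': [2, 3]}]
dumped = ndjsondump(records)
assert ndjsonload(dumped) == records
```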
Would this need be fulfilled by the *separators* option?

```python
>>> from json import dumps
>>> query = dict(system='primary', action='load')
>>> print(dumps(query, separators=(',\n', ': ')))
{"system": "primary",
"action": "load"}
```
|
Raymond Hettinger's answer is incorrect. The main difference between JSON and newline-delimited JSON is that the latter contains a valid JSON document on each line: you can go to line #47 and what you find there is valid JSON on its own, unlike regular JSON, where one wrong bracket makes the whole file unreadable. You cannot just add \n after one "object"; you also need to change the brackets. Keep in mind that not all JSON documents are simple -- some contain a huge amount of nested objects. You must identify where a document starts and where it ends without being confused by the nested ones. There are many programming solutions to this issue. My point is that this is a new format which is going to be widely accepted now that Google has adopted it for BigQuery. Reversing a string can also be easily implemented, yet Python still provides a function to do it for the user. I think it would be wise to support this within the json library, saving the programmer the trouble of working out how to implement it. |
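The distinction is easy to demonstrate with only the stdlib json module: the *separators* trick places newlines inside a single document, so the individual lines are not parseable on their own, whereas newline-delimited JSON requires one complete document per line (a sketch; the `doc` value is just an example with nesting):

```python
import json

doc = {"system": "primary", "meta": {"retries": 3}}

# The separators trick: one document with embedded newlines.
text = json.dumps(doc, separators=(',\n', ': '))
first_line = text.splitlines()[0]
line_is_valid = True
try:
    json.loads(first_line)
except json.JSONDecodeError:
    line_is_valid = False
# first_line is '{"system": "primary",' -- not a valid document.
assert not line_is_valid

# Newline-delimited JSON: each line is a complete, compact document.
line = json.dumps(doc, separators=(',', ':'))
assert json.loads(line) == doc
```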
It is up to Bob to decide whether this feature request is within the scope of the module. |
I think the best start would be to add a bit of documentation with an example of how you could work with newline-delimited JSON using the existing module as-is. On the encoding side you need to ensure that it's a compact representation without embedded newlines, e.g.:

```python
for obj in objs:
    yield json.dumps(obj, separators=(',', ':')) + '\n'
```

I don't think it would make sense to support this directly from dumps, as it's really multiple documents rather than the single document that every other form of dumps will output. On the read side it would be something like:

```python
for doc in lines:
    yield json.loads(doc)
```

I'm not sure if this is common enough (and separable enough from I/O and error handling constraints) to be worth adding the functionality directly into the json module. I think it would be more appropriate in the short to medium term for each service (e.g. BigQuery) to have its own module with helper functions or a framework that encapsulates the protocol that the particular service speaks. |
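Putting both halves of that suggestion together as self-contained generators (the names `iter_dumps` and `iter_loads` are invented here for illustration):

```python
import json

def iter_dumps(objs):
    # Compact encoding guarantees no embedded newlines per document.
    for obj in objs:
        yield json.dumps(obj, separators=(',', ':')) + '\n'

def iter_loads(lines):
    # json.loads tolerates the trailing newline on each line.
    for doc in lines:
        yield json.loads(doc)

objs = [{'id': 1}, {'id': 2, 'tags': ['a', 'b']}]
text = ''.join(iter_dumps(objs))
assert list(iter_loads(text.splitlines())) == objs
```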
This format is known as JSON Lines: http://jsonlines.org/. Its support in user code is trivial -- just one or two lines of code.

Writing:

```python
for item in items:
    json.dump(item, file)
    file.write('\n')
```

Reading:

```python
items = [json.loads(line) for line in file]
```

or

```python
items = [json.loads(line) for line in jsondata.splitlines()]
```

See also bpo-31553 and bpo-34393. I think all these propositions should be rejected. |
I'm a bit confused here. On one hand you say it's two lines of code; on the other hand you suggest that each service provider will implement its own functions. What's the harm in adding small, unbreakable functionality? Your point about small code could also have been raised against implementing reverse(), yet Python still implemented it and saved the developer the two lines of code. In the end, small change or not, this is a new format. |
I suggested that each module would likely implement its own functions tailored to that project's IO and error handling requirements. The implementation may differ slightly depending on the protocol. This is consistent with how JSON is typically dealt with from a web framework, for example. |
Well, when handling GBs of data it's preferable to generate the file directly in the required format rather than doing conversions. The newline delimiter is a format; protocols don't matter here. Let's step outside the scope of Google or others: it is a great format because it allows you to take a "row" from the middle of the file and look at it. You don't need to read a 1 GB file into a parser in order to see it -- you can just copy one row. We are adopting this format for all our JSON, so it would be nice to get the functionality directly from the library. |
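That "copy one row from the middle" property is straightforward to demonstrate: any single line can be parsed without reading or parsing the rest of the file. A sketch using a temporary file (file name and record layout are invented for the example):

```python
import itertools
import json
import os
import tempfile

# Write many records as newline-delimited JSON.
with tempfile.NamedTemporaryFile('w', suffix='.jsonl', delete=False) as f:
    for i in range(1000):
        f.write(json.dumps({'row': i}) + '\n')
    path = f.name

# Grab row 500 without parsing anything else in the file.
with open(path) as f:
    line = next(itertools.islice(f, 500, 501))

assert json.loads(line) == {'row': 500}
os.remove(path)
```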
This has been sleeping for quite some time, but I happened upon it looking for something else, so I thought I'd put in a wrap-up comment / suggestion. As @serhiy-storchaka pointed out, jsonlines is a different format -- it is not JSON with particular placement of newlines. The key difference from JSON is that it's a sequence of JSON blobs rather than one, with no newlines inside each blob. Supporting it would require new functions; @serhiy-storchaka and @etrepum have already provided some prototype options. So: would the core devs consider a PR with these functions? If not, then a PR for, as @etrepum suggested, an addendum to the docs on how to read/write jsonlines? |
Not sure why I didn't do this yesterday, but of course there is (at least one) package on PyPI for JSON Lines: https://pypi.org/project/jsonlines/ So the solution to this issue becomes either adding support to the stdlib or pointing people to that third-party package.
Personally, I think that if it's added to the stdlib, it should use a simpler API than the jsonlines package, more similar to the existing json API -- which may be an argument for simply pointing people to a third-party package. Maybe a decision could be made by a core dev and this could be closed :-) |
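For concreteness, the "simpler API, more similar to the existing json API" might look like the following sketch. The names `dump_lines` and `load_lines` and their signatures are entirely hypothetical -- nothing like this exists in the stdlib:

```python
import io
import json

def dump_lines(objs, fp):
    # Hypothetical helper: write each object as one compact JSON line.
    for obj in objs:
        fp.write(json.dumps(obj, separators=(',', ':')))
        fp.write('\n')

def load_lines(fp):
    # Hypothetical helper: yield one object per non-blank line.
    for line in fp:
        if line.strip():
            yield json.loads(line)

buf = io.StringIO()
dump_lines([{'a': 1}, {'b': 2}], buf)
buf.seek(0)
assert list(load_lines(buf)) == [{'a': 1}, {'b': 2}]
```

Mirroring the dump/load naming keeps the mental model of the existing json module, which is the thrust of the suggestion above.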