Memory Explosion Using Parser #57

Closed
raeldor opened this issue Sep 10, 2021 · 11 comments
Labels
question Further information is requested

Comments

raeldor commented Sep 10, 2021

Hi,

I am using this module because parsing my already large JSON string (about 900MB) with json.loads() increases memory usage by about 10x (over 9GB). I was expecting to be able to parse the JSON line by line. It works, but I was a little surprised that calling ijson.parse() grabs about 3GB of memory. May I ask why the memory usage is so large? Is there more conversion to dictionaries behind the scenes?

Thanks
Ray

@raeldor raeldor added the question Further information is requested label Sep 10, 2021
rtobar commented Sep 10, 2021

@raeldor unfortunately, without a code example and the document (or part of it) you are trying to load, there's little help we can provide. Please share more details; it could be that the memory explosion is happening somewhere else.

rtobar commented Sep 10, 2021

@raeldor to answer your original question: no, there's no dictionary construction or anything like that going on at the level of ijson.parse, and in principle it shouldn't accumulate any data in memory (if it did, that would be a bug). But again, only seeing some code will help answer your question more precisely.

raeldor commented Sep 10, 2021

Thanks for the reply. Not sure code will help, since the line is literally just...
parser = ijson.parse(cellset_string)

I suspect you would need to have the same data to replicate.

raeldor commented Sep 10, 2021

After reading the FAQ, I suspect this is happening...

> However if a text-mode file object is given then the library will automatically encode the strings into UTF-8 bytes

raeldor commented Sep 10, 2021

Regardless of the memory taken, even just running through the parser like...

parser = ijson.parse(cellset_string)
for prefix, event, value in parser:
    pass

Takes about 10 minutes for my roughly 320MB string of JSON. I feel like I'm doing something wrong here.

rtobar commented Sep 10, 2021

@raeldor thanks for giving more details. Even though the code was simple, it actually helped me figure out what's going on.

When a str instance is given as input, ijson uses an io.StringIO object to wrap it to make it look like a file. I always assumed io.StringIO wouldn't internally copy the input data, but it actually does:

$> python
Python 3.9.5 (default, May 11 2021, 08:20:37) 
[GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tracemalloc
>>> import io
>>> tracemalloc.start()
>>> tracemalloc.get_traced_memory()
(14106, 36242)
>>> x = ' ' * 10**6
>>> len(x)
1000000
>>> tracemalloc.get_traced_memory()
(1014943, 1024853)
>>> i = io.StringIO(x)
>>> tracemalloc.get_traced_memory()
(5015725, 5025635)

We could certainly simplify this in ijson by using a simpler file-like wrapper that doesn't require copying the input string. I'll create an issue as a reminder to do that.
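A minimal sketch of what such a wrapper could look like (the class name is made up, and this is not ijson's actual implementation): a read-only file-like object that slices and encodes the string on demand instead of copying it all upfront.

```python
import io

class StringReader(io.RawIOBase):
    """Hypothetical non-copying reader: serves UTF-8 bytes from a str
    by slicing and encoding chunk by chunk, never duplicating the input.
    Note: for simplicity, size is counted in characters, so for non-ASCII
    text a chunk may encode to more than size bytes."""

    def __init__(self, s):
        self._s = s
        self._pos = 0

    def readable(self):
        return True

    def read(self, size=-1):
        if size is None or size < 0:
            size = len(self._s) - self._pos
        chunk = self._s[self._pos:self._pos + size]
        self._pos += len(chunk)
        return chunk.encode('utf-8')
```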

As you found out in the FAQ, the best input you can give ijson is binary data, not textual data. Also, where is your in-memory string coming from? You must be loading it from a file, the network, or some other external source. In that case it's always better to just give a file object to ijson so it reads the data for you, instead of you reading the whole data and giving it to ijson.

raeldor commented Sep 10, 2021

Thanks for the quick response. It's the response.text from an HTTP call. Is there a way to wrap the string to prevent the conversion and improve the performance? For some reason the performance is also very slow. Or maybe it's because I'm running in debug mode?

rtobar commented Sep 10, 2021

@raeldor It seems you're using the requests library? I'm no expert in it, so take this with a grain of salt.

If you are using the requests library, you can access the response body as bytes via response.content. That should already give slightly better performance because you'll save a roundtrip of encoding/decoding. Internally, ijson will then use an io.BytesIO object, which apparently doesn't suffer from the extra memory usage problem that io.StringIO does.

However, the best would be to find out how to use the requests library to get a file-like object that you can pass to ijson directly. In that case your memory usage should stay really low, as you would never need to load the whole response in memory. Like I said, I'm no expert on requests, but it would seem like creating something around Response.iter_content would work.

> Takes about 10 minutes for my roughly 320MB string of JSON. I feel like I'm doing something wrong here.

Are those 10 minutes spent only in ijson? Please check the performance section of the documentation. In particular make sure you have a fast backend available. 320 MB of JSON shouldn't take that long to parse (but who knows, maybe you have a particularly difficult JSON document to parse...)

raeldor commented Sep 10, 2021

Thank you. I'll investigate the response options further. Writing the string out to a physical file as binary and opening that removed the memory issue, as suspected.
Turning debugging off brought a read through the parser with no action down from 10 minutes to 1 minute. It's using the yajl2_c backend. Not sure how that stacks up performance-wise against what you'd expect.

rtobar commented Sep 10, 2021

Yes, 1 minute for reading and parsing sounds much better (still a bit high, but probably because of the extra I/O to disk). Note that you should be able to skip the writing to file, though; just pass response.content to ijson, and it will internally use io.BytesIO, which, unlike its sibling io.StringIO, doesn't require more memory.

In any case, please close this issue if you're happy with the responses. I'll additionally deal with replacing our internal usage of io.StringIO to avoid unexpected memory increases.

raeldor commented Sep 11, 2021

Using response.content worked great without impacting memory. Really appreciate your fast assistance, thank you.
