
json_normalize gives KeyError in 0.23 #21158

Closed
mivade opened this Issue May 21, 2018 · 7 comments

mivade commented May 21, 2018

Code Sample, a copy-pastable example if possible

import json
from pandas import show_versions
from pandas.io.json import json_normalize

print(show_versions())

with open('test.json', 'r') as infile:
    d = json.load(infile)

normed = json_normalize(d)

The test.json file is rather lengthy, with a structure similar to:

{
  "subject": {
    "pairs": {
      "A1-A2": {
        "atlases": {
          "avg.corrected": {
            "region": null,
            "x": 49.151580810546875,
            "y": -33.148521423339844,
            "z": 27.572303771972656
          }
        }
      }
    }
  }
}

This minimal version is enough to show the error below.

Problem description

This problem is new in pandas 0.23. I get the following traceback:

Traceback (most recent call last):
  File "test.py", line 10, in <module>
    normed = json_normalize(d)
  File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 203, in json_normalize
    data = nested_to_record(data, sep=sep)
  File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
    new_d.update(nested_to_record(v, newkey, sep, level + 1))
  File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
    new_d.update(nested_to_record(v, newkey, sep, level + 1))
  File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
    new_d.update(nested_to_record(v, newkey, sep, level + 1))
  File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 84, in nested_to_record
    new_d.pop(k)
KeyError: 'region'

Note that running the same code on pandas 0.22 does not result in any errors. I suspect this could be related to #20399.

Expected Output

Expected output is a flattened DataFrame without any errors.

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.23.0
pytest: 3.5.1
pip: 9.0.1
setuptools: 38.4.0
Cython: None
numpy: 1.14.2
scipy: None
pyarrow: None
xarray: 0.10.3
IPython: 6.3.1
sphinx: 1.7.2
patsy: None
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.2.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

mivade added a commit to mivade/cmlreaders that referenced this issue May 21, 2018

WillAyd commented May 21, 2018

I couldn't reproduce your error with the information provided (I was getting other errors) - can you please update the example so it can be fully copied and pasted to reproduce?

mivade commented May 21, 2018

I'm not sure what errors you are getting. Here's a version with the JSON contents directly in the Python file as a dict:

from pandas import show_versions
from pandas.io.json import json_normalize

print(show_versions())

d = {
  "subject": {
    "pairs": {
      "A1-A2": {
        "atlases": {
          "avg.corrected": {
            "region": None,
            "x": 49.151580810546875,
            "y": -33.148521423339844,
            "z": 27.572303771972656
          }
        }
      }
    }
  }
}

normed = json_normalize(d)
print(normed)

This results in the same error.

ssikdar1 commented May 22, 2018

Running his code, I get the same error as well.

I think the problem is here:

https://github.com/pandas-dev/pandas/blob/master/pandas/io/json/normalize.py#L79

If I add two print statements:

if not isinstance(v, dict):
    print("cond1: %s" % (level != 0))
    print("cond2: %s" % (v is None))

    if level != 0:  # so we skip copying for top level, common case
        print(new_d)
        v = new_d.pop(k)
        new_d[newkey] = v
    if v is None:  # pop the key if the value is None
        print(new_d)
        new_d.pop(k)
    continue

I get the following printout:

cond1: True
cond2: True
{'region': None, 'x': 49.151580810546875, 'y': -33.148521423339844, 'z': 27.572303771972656}
{'x': 49.151580810546875, 'y': -33.148521423339844, 'z': 27.572303771972656, 'subject.pairs.A1-A2.atlases.avg.corrected.region': None}

So the key is popped from new_d twice, which raises a KeyError the second time?
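The double pop can be reproduced in isolation with a minimal sketch of the two branches (variable names chosen to mirror the snippet above; this is not the actual pandas source):

```python
# Minimal sketch of the two branches in nested_to_record.
# After the rename branch runs, the original key k is gone,
# so the None-dropping branch fails.
new_d = {'region': None, 'x': 49.15}
k, newkey, level, v = 'region', 'subject.pairs.region', 3, None

if level != 0:            # first branch: rename the key
    v = new_d.pop(k)      # 'region' is removed from new_d here...
    new_d[newkey] = v

if v is None:             # second branch: drop None values
    try:
        new_d.pop(k)      # ...so popping 'region' again fails
    except KeyError as exc:
        print('KeyError:', exc)  # prints: KeyError: 'region'
```

This matches the traceback in the original report: the KeyError comes from the second, unconditional `new_d.pop(k)`.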

I added a continue in the code:

if level != 0:  # so we skip copying for top level, common case
    v = new_d.pop(k)
    new_d[newkey] = v
    continue

and the code looks to run fine.

lauraathena commented May 23, 2018

I ran into this same error: when level != 0 and v is None, the code tries to pop k twice.

@ssikdar1 Do you know what the purpose of popping k is in the first place? If the intention was to exclude keys whose value is None, then your update undermines that intention: it will only remove keys with None values at the first level of the dictionary. That inconsistency may be confusing to users.

I would either make sure to exclude all keys whose value is None, e.g.:

if level != 0:  # so we skip copying for top level, common case
    v = new_d.pop(k)
    if v is None:  # don't keep the item if the value is None
        continue
    new_d[newkey] = v
if v is None:  # pop the key if the value is None
    new_d.pop(k)

or perhaps not bother popping k when v is None in the first place. I don't know what the intention behind removing keys with None values was, so this is my guess.
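For comparison, a self-contained flattener that drops None values consistently at every level might look like this (a simplified sketch, not the pandas implementation; `flatten` is a hypothetical name):

```python
def flatten(d, parent_key='', sep='.'):
    """Recursively flatten a nested dict, dropping keys whose value
    is None at ANY level (simplified sketch, not the pandas code)."""
    out = {}
    for k, v in d.items():
        newkey = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            out.update(flatten(v, newkey, sep))
        elif v is not None:  # drop None values consistently
            out[newkey] = v
    return out

d = {'subject': {'pairs': {'A1-A2': {'atlases': {'avg.corrected': {
    'region': None, 'x': 49.151580810546875}}}}}}
print(flatten(d))
# {'subject.pairs.A1-A2.atlases.avg.corrected.x': 49.151580810546875}
```

With a consistent policy like this, the 'region' key from the original report would be dropped at every depth rather than only at the top level.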

jreback added this to the 0.23.1 milestone May 23, 2018

ssikdar1 commented May 24, 2018

@lauraathena I see your point. Based on the test cases the previous commit added, it looks like they were only considering None values at the first level of the dictionary.

def test_missing_field(self, author_missing_data):
    # GH20030: Checks for robustness of json_normalize - should
    # unnest records where only the first record has a None value

    # GH20030: Checks that None values are dropped in nested_to_record
    # to prevent additional columns of nans when passed to DataFrame
    data = [
        {'info': None,
         'author_name':
             {'first': 'Smith', 'last_name': 'Appleseed'}},
        {'info':
             {'created_at': '11/08/1993', 'last_updated': '26/05/2012'},
         'author_name':
             {'first': 'Jane', 'last_name': 'Doe'}}
    ]

# GH20030
[{k: {'alpha': 'foo', 'beta': 'bar'}}, {k: None}],

But I'm also not sure what they intended the behavior to be when level != 0.

Pdubbs commented May 25, 2018

@lauraathena @ssikdar1
I believe their intention was to drop keys whose value is None everywhere, as it does for level = 0:

my_json = [{'a': 1, 'b': 2, 'c': None, 'd': None},
           {'a': 1, 'b': None, 'c': 3, 'd': None}]
json_normalize(my_json)

   a    b    c
0  1  2.0  NaN
1  1  NaN  3.0

but I would strongly like an option to preserve the 0.22.0 behavior (for both level = 0 and level != 0):

my_json = [{'a': 1, 'b': 2, 'c': None, 'd': None},
           {'a': 1, 'b': None, 'c': 3, 'd': None}]
json_normalize(my_json)

   a    b    c     d
0  1  2.0  NaN  None
1  1  NaN  3.0  None

My use case involves processing batches of JSON records from an API and then storing them alongside others. Handling exceptions for when a column in a given batch happens to be None everywhere and is suddenly missing would be a large headache. I imagine this is relatively common.
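Until such an option exists, one possible workaround is to flatten each record yourself, keeping None values (0.22-style), and build the DataFrame from the already-flat dicts. A sketch (`flatten_keep_none` is a hypothetical helper, not a pandas API):

```python
def flatten_keep_none(d, parent_key='', sep='.'):
    """Flatten a nested dict but KEEP keys whose value is None,
    mimicking the pre-0.23 behavior (sketch, not a pandas API)."""
    out = {}
    for k, v in d.items():
        newkey = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            out.update(flatten_keep_none(v, newkey, sep))
        else:
            out[newkey] = v
    return out

my_json = [{'a': 1, 'b': 2, 'c': None, 'd': None},
           {'a': 1, 'b': None, 'c': 3, 'd': None}]
flat = [flatten_keep_none(rec) for rec in my_json]
print(flat[0])  # {'a': 1, 'b': 2, 'c': None, 'd': None}
# pd.DataFrame(flat) would then keep column 'd' even though it is all-None.
```

This sidesteps json_normalize entirely, so the column set depends only on the keys present in the records, not on which values happen to be None in a given batch.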
