
Faulty parsing of JSON template/file lookups (aka JSON double-quotes to single-quotes conversion problem) #15603

Closed
sgnn7 opened this issue Apr 26, 2016 · 11 comments · Fixed by ansible/ansible-modules-extras#2126
Labels: bug

@sgnn7 commented Apr 26, 2016

ISSUE TYPE
  • Bug Report
ANSIBLE VERSION
2.1.0
CONFIGURATION

Default from Git master

OS / ENVIRONMENT

Debian Jessie x86_64

SUMMARY

JSON content returned by lookup('template', foobar) is re-parsed into a dict (as far as I can tell), and somewhere before it reaches the module it gets serialized back to a string, which converts the valid-JSON double quotes to single quotes. Any service or module that expects valid JSON will then fail on that data.

The problem is that Jinja2 templating is done recursively, and I could not find any combination of convert_* arguments, Jinja2 filters, or #jinja2 overrides that would deliver the correct JSON from the template to the module. The classic workaround of putting a space before the start of the string isn't allowed when contacting AWS services. I believe the same issue exists with lookup('file') and any other lookup that returns JSON.
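A minimal sketch (not Ansible internals) of the failure mode described above: once the templated JSON has been parsed into a Python dict, serializing it back with str() produces Python-repr output with single quotes, which is no longer valid JSON.

```python
import json

# Illustrative policy dict, as if re-parsed from the template lookup.
policy = {"Version": "2012-10-17",
          "Statement": [{"Effect": "Allow", "Principal": "*"}]}

# Somewhere between the lookup and the module the dict is stringified;
# str() on a dict uses single quotes, not JSON double quotes.
serialized = str(policy)
print(serialized)  # {'Version': '2012-10-17', ...} -- single quotes

try:
    json.loads(serialized)
except ValueError as err:  # json.JSONDecodeError subclasses ValueError
    print("not valid JSON:", err)
```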


Related fixes and affected code:

- Bug-fix PR that addressed the symptom in the `iam_policy` module's `policy_json` arg: https://github.com/ansible/ansible-modules-core/pull/3019
- Another (as-yet-unreported) bug caused by the same logic breaks the `policy` argument of the `s3_bucket` module when you use a lookup: https://github.com/ansible/ansible-modules-extras/blob/devel/cloud/amazon/s3_bucket.py#L211

I'd probably fix this myself but I just don't have the time to do it :(

STEPS TO REPRODUCE

`bucket-public.json.j2` template:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::{{ item }}"
    }
  ]
}

Playbook:

---

- hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: Setup S3 Bucket
      s3_bucket:
        name: "{{ item }}"
        region: "us-west-2"
        policy: "{{ lookup('template','./bucket-public.json.j2', convert_data=False) }}"
        with_items:
          - some_bucket_name
EXPECTED RESULTS

JSON posted to AWS is valid and bucket is created

ACTUAL RESULTS
PLAYBOOK: deploy.yml ***********************************************************
1 plays in deploy.yml

PLAY [all] *********************************************************************

TASK [Setup S3 Bucket] *********************************************************
task path: /home/username/checkout/git-repo-name/deploy.yml:12
File lookup using /home/username/checkout/git-repo-name/bucket-public.json.j2 as file
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1461682432.3-159666311309506 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1461682432.3-159666311309506 `" )'
<localhost> PUT /tmp/tmph4kIsC TO /home/username/.ansible/tmp/ansible-tmp-1461682432.3-159666311309506/s3_bucket
<localhost> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1461682432.3-159666311309506/s3_bucket; rm -rf "/home/username/.ansible/tmp/ansible-tmp-1461682432.3-159666311309506/" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "ec2_url": null, "force": false, "name": "some_bucket_name", "policy": "{'Version': '2012-10-17', 'Statement': [{'Action': 's3:GetObject', 'Principal': '*', 'Resource': 'arn:aws:s3:::some_bucket_name', 'Effect': 'Allow', 'Sid': 'AddPerm'}]}", "profile": null, "region": "us-west-2", "requester_pays": false, "s3_url": null, "security_token": null, "state": "present", "tags": {}, "validate_certs": true, "versioning": false}, "module_name": "s3_bucket"}, "msg": "Policies must be valid JSON and the first byte must be '{'"}


NO MORE HOSTS LEFT *************************************************************
    to retry, use: --limit @deploy.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

@sgnn7 (Author) commented Apr 26, 2016

Also has issues in the s3_bucket module, where the resulting arg is a string (not a dict) with single quotes:

- hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: Setup S3 Bucket
      s3_bucket:
        name: "{{ item }}"
        region: "us-west-2"
        policy:
          Version: '2012-10-17'
          Statement:
            - Sid: AddPerm
              Effect: Allow
              Principal: '*'
              Action: s3:GetObject
              Resource: "{{ item }}"
        with_items:
          - some_bucket_name
@abadger abadger added the bug_report label Apr 28, 2016
@abadger (Member) commented Apr 28, 2016

There was also a similar bug with the uri module (which takes a json parameter) earlier. Looking at the fix for that bug, I see we added code to the module so that if it received a string it would pass it through, but if it received a Python dict or list it would turn that into a JSON string. That would probably work here, or for any other JSON-string parameter, as well. There are some conceptual problems that could be confusing -- for instance, using to_json in your original playbook is probably doing the wrong thing, since running a JSON string through to_json double-escapes it -- but it would at least provide a workaround. We could ease the burden on module writers by giving them a new arg_spec type to use: if the type is jsonstring, we would do json.dumps(param) whenever the param is not already a string.

I'd like to get more input from other core team members as to whether this is the right way to go before merging a fix like that.
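The double-escaping abadger warns about can be made concrete with plain json.dumps (which is essentially what to_json wraps): applying it to an already-serialized JSON string yields a JSON-encoded *string*, not the original document. This is an illustrative sketch, not Ansible code.

```python
import json

doc = {"Sid": "AddPerm"}

once = json.dumps(doc)    # '{"Sid": "AddPerm"}' -- a valid JSON document
twice = json.dumps(once)  # '"{\\"Sid\\": \\"AddPerm\\"}"' -- an escaped JSON string

# Decoding the double-encoded value gives back the string, not the dict:
assert json.loads(once) == doc
assert json.loads(twice) == once
```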

@bcoca (Member) commented Apr 28, 2016

@sgnn7 (Author) commented Apr 28, 2016

@bcoca The issue you linked is showing a symptom of this problem so it's the other way around imo.

@bcoca (Member) commented Apr 28, 2016

@sgnn7 The problem seems to be a bad assumption, or bad documentation, about what modules accept; if you look at my fix, I just changed the module to deal with whatever is passed in.

@sgnn7 (Author) commented Apr 28, 2016

@bcoca That seems like a good fix for s3_bucket, but this is the third module I've encountered with this issue so far (and I doubt it will be the last). It feels like the real problem is Ansible's treatment of args: given a dict or a JSON string in the playbook, the module ends up with a single-quoted string representation. I could be wrong here, but it would be nice for Ansible to at least have a flag specifying that an arg shouldn't be turned into a string representation of a dict, or to pass the module something better than a single-quoted string.

@bcoca (Member) commented Apr 28, 2016

@sgnn7 The issue is how module authors define the expected input; there is not much we can do on the Ansible side but blindly guess.

We do have those flags (and filters), but they still would not help with half of those modules, since they do not handle the input consistently.

@bcoca (Member) commented Apr 28, 2016

The long-term fix is to catch this in review, and probably to supply a type='json' that can accept either kind of input and guarantee a JSON string. Note that this is NOT what this module expected: it took a data structure as input and internally converted it to JSON.
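The proposed type='json' behavior could look roughly like the sketch below. The helper name is illustrative, not Ansible's actual implementation: whatever shape the playbook supplies, the module always receives a JSON string, serialized exactly once.

```python
import json

def coerce_to_json_string(value):
    """Hypothetical arg_spec coercion for a type='json' parameter."""
    if isinstance(value, str):
        return value              # assume an already-serialized JSON string
    return json.dumps(value)      # dict/list: serialize exactly once

# Both playbook input styles end up identical on the module side:
assert coerce_to_json_string('{"Effect": "Allow"}') == '{"Effect": "Allow"}'
assert coerce_to_json_string({"Effect": "Allow"}) == '{"Effect": "Allow"}'
```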

@hidekiy commented Jun 6, 2016

@sgnn7
Try this workaround:

policy: "{{ lookup('template','./bucket-public.json.j2', convert_data=False) |string}}"

I also commented to #14292

@hidekiy commented Jun 15, 2016

Sorry, I tried the s3_bucket module and the workaround doesn't work with a correct string policy parameter.
@bcoca's PR ansible/ansible-modules-extras#2126 looks like the correct fix for this.

@abondis (Contributor) commented Jul 7, 2016

I've had a similar issue and using policy: "{{ lookup('template','./bucket-public.json.j2') |to_json }}" fixed it for me, hope that helps.
