This repository has been archived by the owner on Dec 9, 2020. It is now read-only.

The error was: ImportError: No module named ipaddress #941

Closed
ghost opened this issue Feb 19, 2018 · 15 comments
Labels
aws lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ghost

ghost commented Feb 19, 2018

Local Workstation RedHat 7.4
Ansible version 2.4

./ose-on-aws.py --keypair= --public-hosted-zone= --deployment-type=origin --ami=ami-6d1c2007 --github-client-secret= --github-organization= --github-client-id=

TASK [non-atomic-docker-storage-setup : Gather facts] ********************************************************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named ipaddress
fatal: [ose-app-node02]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/tmp/ansible_tJ6HB4/ansible_module_openshift_facts.py", line 19, in \n import ipaddress\nImportError: No module named ipaddress\n", "modul
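The import that fails here is Python's `ipaddress` module: it is part of the standard library on Python 3, but on Python 2 (the default interpreter on RHEL 7) it only exists via the `python-ipaddress` backport RPM or `pip install ipaddress`. A quick sanity check you could run on a failing node (a sketch, not part of the installer):

```python
# Probe for the ipaddress module that openshift_facts imports.
try:
    import ipaddress  # stdlib on Python 3; separate backport package on Python 2
    print("ipaddress available:", ipaddress.ip_address(u"10.0.0.1"))
except ImportError:
    # On RHEL 7 / Python 2: yum install python-ipaddress (or pip install ipaddress)
    print("ipaddress missing")
```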

@cooktheryan
Contributor

@james-knott which branch of openshift-ansible are you using?

@dav1x
Contributor

dav1x commented Feb 19, 2018

@james-knott try running the deploy-host with the provider of aws:

ansible-playbook playbooks/deploy-host.yaml -e provider=aws

This should properly prepare the required packages.

@cooktheryan
Contributor

@dav1x I believe python-ipaddress needs to be patched into all providers as a prereq.

@ghost
Author

ghost commented Feb 19, 2018

@dav1x Thanks for the help. I already added a role for EPEL and pip to install the missing requirements.

@ghost ghost closed this as completed Feb 19, 2018
@tdudgeon

I think I hit this problem running the openshift-ansible installer from the release-3.9 branch. This issue is marked as closed, but it's not clear how to fix the problem.

@dav1x
Contributor

dav1x commented Mar 20, 2018

@tdudgeon Are you still having an issue?

@dav1x dav1x reopened this Mar 20, 2018
@tdudgeon

@dav1x
Can't tell at present. I can't get Origin 3.9 to install because I'm not clear what params to try. I've tried this (and a few variations) but can't find a working combination:

openshift_deployment_type=origin
openshift_release=v3.9
openshift_image_tag=v3.9.0
openshift_pkg_version=-3.9.0

Not sure whether the fact generation comes before this. If so, then maybe it's fixed.

@dav1x
Contributor

dav1x commented Mar 20, 2018

Hey @tdudgeon

Try this combination:

openshift_release="3.9"
deployment_type=openshift-enterprise

The release and image version should be set via the release.
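As a minimal illustration, these variables would go in the `[OSEv3:vars]` section of the Ansible inventory (section name taken from openshift-ansible's standard inventory layout; no other values shown here):

```ini
[OSEv3:vars]
# Let openshift-ansible derive the image tag and package version from the release.
openshift_release="3.9"
deployment_type=openshift-enterprise
```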

@mikecali

mikecali commented Apr 6, 2018

Also having this issue on OCP 3.7.
ansible 2.4.2.0
RHEL 7.4
openshift-ansible-playbooks-3.7.42

[non-atomic-docker-storage-setup : Gather facts] *************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named ipaddress
fatal: [ose-master03.makeaangayon.com]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/tmp/ansible_JeaHJg/ansible_module_openshift_facts.py", line 19, in \n import ipaddress\nImportError: No module named ipaddress\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}

=====================================

stack_name: openshift-infra
ami: ami-0b1e356e
region: us-east-2
master_instance_type: m4.xlarge
node_instance_type: m4.xlarge
app_instance_type: t2.large
app_node_count: 3
keypair: aws_ocp_contrib
create_key: yes
key_path: /root/.ssh/aws_ocp_contrib.pub
create_vpc: yes
vpc_id: None
private_subnet_id1: None
private_subnet_id2: None
private_subnet_id3: None
public_subnet_id1: None
public_subnet_id2: None
public_subnet_id3: None
byo_bastion: no
bastion_sg: /dev/null
console port: 443
deployment_type: openshift-enterprise
openshift_sdn: redhat/openshift-ovs-multitenant
public_hosted_zone: makeaangayon.com
app_dns_prefix: apps
apps_dns: apps.makeaangayon.com
containerized: False
s3_bucket_name: openshift-infra-ocp-registry-makeaangayon
s3_username: openshift-infra-s3-openshift-user

@cooktheryan
Contributor

@mikecali you will need to install python-ipaddress as one of the first steps in the deployment, either in its own pre_task play or in cloud-init.
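One way to sketch the pre-task approach (host group name and use of the yum module are assumptions here, not taken from the repo):

```yaml
# Hypothetical pre-flight play: install the ipaddress backport before facts gathering.
- hosts: nodes
  become: yes
  gather_facts: no
  tasks:
  - name: Install python-ipaddress (Python 2 backport of the stdlib module)
    yum:
      name: python-ipaddress
      state: present
```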

@mikecali

mikecali commented Apr 7, 2018

@cooktheryan, that fixed the problem. Thanks!
So to be clear for future reference, python-ipaddress needs to be installed on all nodes. What I did was create a simple role (adding-python-ipaddress) and add it to playbooks/openshift-install.yaml.

- hosts: nodes
  gather_facts: yes
  become: yes
  vars_files:
  - vars/main.yaml
  roles:
  - adding-python-ipaddress
  - non-atomic-docker-storage-setup
  - openshift-versions
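The role itself would only need a single task; a hypothetical reconstruction of roles/adding-python-ipaddress/tasks/main.yaml (the commenter did not post the role body):

```yaml
# Installs the Python 2 backport that openshift_facts needs on RHEL 7 hosts.
- name: Ensure python-ipaddress is present
  yum:
    name: python-ipaddress
    state: present
```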

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2020
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 19, 2020
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closing this issue.

