This repository has been archived by the owner on Mar 16, 2021. It is now read-only.

Python is not installed correctly: E: Package 'python' has no installation candidate #32

Open
adborden opened this issue Sep 20, 2019 · 2 comments

Comments

@adborden
Contributor

Our provisioner fails to install python, with apt complaining `E: Package 'python' has no installation candidate`. This happens right after an `apt-get update`, which makes it very strange. If you SSH onto the host and run `sudo apt-get update && sudo apt-get install -y python` manually, it installs correctly.

module.web.aws_instance.web: Still creating... (10s elapsed)
module.web.aws_instance.web: Provisioning with 'remote-exec'...
module.web.aws_instance.web (remote-exec): Connecting to remote host via SSH...
module.web.aws_instance.web (remote-exec):   Host: 10.0.1.128
module.web.aws_instance.web (remote-exec):   User: ubuntu
module.web.aws_instance.web (remote-exec):   Password: false
module.web.aws_instance.web (remote-exec):   Private key: false
module.web.aws_instance.web (remote-exec):   SSH Agent: true
module.web.aws_instance.web (remote-exec):   Checking Host Key: false
module.web.aws_instance.web (remote-exec): Using configured bastion host...
module.web.aws_instance.web (remote-exec):   Host: jump.bionic.datagov.us
module.web.aws_instance.web (remote-exec):   User: ubuntu
module.web.aws_instance.web (remote-exec):   Password: false
module.web.aws_instance.web (remote-exec):   Private key: false
module.web.aws_instance.web (remote-exec):   SSH Agent: true
module.web.aws_instance.web (remote-exec):   Checking Host Key: false
module.web.aws_instance.web: Still creating... (20s elapsed)
module.web.aws_instance.web: Still creating... (30s elapsed)
module.web.aws_instance.web (remote-exec): Connected!
module.web.aws_instance.web (remote-exec): Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
module.web.aws_instance.web (remote-exec): Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
module.web.aws_instance.web (remote-exec): Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
module.web.aws_instance.web (remote-exec): Get:4 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [607 kB]
module.web.aws_instance.web (remote-exec): Get:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
module.web.aws_instance.web (remote-exec): Get:6 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [8570 kB]
module.web.aws_instance.web (remote-exec): Get:7 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [202 kB]
module.web.aws_instance.web (remote-exec): Get:8 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [4904 B]
module.web.aws_instance.web (remote-exec): Get:9 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [2396 B]
module.web.aws_instance.web: Still creating... (40s elapsed)
module.web.aws_instance.web (remote-exec): Get:10 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en [4941 kB]
module.web.aws_instance.web (remote-exec): Get:11 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [151 kB]
module.web.aws_instance.web (remote-exec): Get:12 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en [108 kB]
module.web.aws_instance.web (remote-exec): Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [739 kB]
module.web.aws_instance.web (remote-exec): Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1006 kB]
module.web.aws_instance.web (remote-exec): Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [309 kB]
module.web.aws_instance.web (remote-exec): Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [7528 B]
module.web.aws_instance.web (remote-exec): Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [3868 B]
module.web.aws_instance.web (remote-exec): Get:18 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [2512 B]
module.web.aws_instance.web (remote-exec): Get:19 http://archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [1644 B]
module.web.aws_instance.web (remote-exec): Get:20 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [4020 B]
module.web.aws_instance.web (remote-exec): Get:21 http://archive.ubuntu.com/ubuntu bionic-backports/universe Translation-en [1856 B]
module.web.aws_instance.web (remote-exec): Fetched 16.9 MB in 8s (2064 kB/s)
module.web.aws_instance.web (remote-exec): Reading package lists... Done
module.web.aws_instance.web (remote-exec): Reading package lists... Done
module.web.aws_instance.web (remote-exec): Building dependency tree
module.web.aws_instance.web (remote-exec): Reading state information... Done
module.web.aws_instance.web (remote-exec): Package python is not available, but is referred to by another package.
module.web.aws_instance.web (remote-exec): This may mean that the package is missing, has been obsoleted, or
module.web.aws_instance.web (remote-exec): is only available from another source

module.web.aws_instance.web (remote-exec): E: Package 'python' has no installation candidate
module.web.aws_instance.web: Still creating... (50s elapsed)
Releasing state lock. This may take a few moments...

Error: Error applying plan:

1 error occurred:
        * module.web.aws_instance.web: error executing "/tmp/terraform_877938795.sh": Process exited with status 100
@adborden
Contributor Author

Specifying python2.7 seems to work more consistently 🤷‍♂️

adborden added a commit to GSA/data.gov that referenced this issue Apr 29, 2020
We made a partial fix for #32 by using python3 instead of python, so on bionic
hosts where python is not installed by default, we need to explicitly state
python3 for the Ansible.

GSA/datagov-infrastructure-modules#32
adborden added a commit to GSA/data.gov that referenced this issue May 1, 2020
@adborden
Contributor Author

adborden commented May 6, 2020

Running `apt-get update` twice seems to work around the issue.
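A minimal sketch of how that workaround could be folded into the provisioner script — the `retry` helper, its attempt count, and the sleep interval are ours, not what the module currently does:

```shell
#!/bin/sh
# retry CMD...: run CMD until it succeeds, up to 3 attempts.
# Generalizes "run apt-get update twice": the first update right after
# boot can race cloud-init's own apt activity, so one retry is usually
# enough for the package lists to settle.
retry() {
  _n=0
  until "$@"; do
    _n=$((_n + 1))
    if [ "$_n" -ge 3 ]; then
      return 1
    fi
    sleep 1
  done
}

# Usage in the provisioner script (not run here):
#   retry sudo apt-get update
#   retry sudo apt-get install -y python
```

Wrapping both the update and the install means a transient "no installation candidate" only costs a short sleep instead of failing the whole `terraform apply`.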

adborden added a commit that referenced this issue May 6, 2020
mogul added a commit to GSA/datagov-infrastructure-live that referenced this issue Mar 9, 2021
* Update config.yml

* Use ec2 instances instead of ASGs

Auto-scaling groups are unavailable in BSP, so by using ec2 instances, we can
manage our test environment similar to how we manage BSP.

* Example of ansible dynamic inventory

* [jumpbox] filter ami by environment

* Note about terraform fmt

* [jumpbox] add group tag for ansible inventory

* Use vpc-style security groups

Fixes issue of security groups disappearing, causing instances to be re-created on
every run.

* [jumpbox] update hostname

* Don't check for required variables in tests

The variables would be test variables anyway. This creates less work for
ourselves.

* Move jumpbox to its own module

* Add postgresdb module

A standard way to create postgresql databases

* Add simple provision script for jumpbox

* web modules for lb + ec2 hosts

* Solr module

* Catalog module

* catalog database

* [vpc] output availability zones as azs

* [catalog] add pycsw database

* [inventory] add inventory web instances

* Update README

* Make names unique for databases

* Make lb target groups unique

* [vpc] create an internal DNS zone for VPC

* [vpc] add public and private dns zones

* [solr] add internal dns names

* [jumpbox] add public and private dns names

* [stateless] create stateless ec2 instances

* [web] add public and private dns

* [catalog] add public and private dns names

* [inventory] add public and private dns

* Add solr security groups

* Use single RDS for catalog and inventory

* [jumpbox] fix provisioning

- Specify connection for provisioner.
- Use sudo for provision script which is executed as ubuntu.
- Update hostname for public and private dns. Environment is redundant.

* [app] remove the app module

* [catalog] allow http/https egress on harvester

* [jumpbox] unique name for jumpbox policy

* [catalog] add outputs

* [inventory] add outputs

* [web] missing target group attachments

* [jumpbox] add build-essential

Needed for building ansible dependencies.

* [jumpbox] provision with git

* [jumpbox] add zlib for building python in pyenv

* Add crm

* dashboard

* Outputs for database

* wordpress

* More outputs

* Use variable instead of hardcoded region

* Add provisioners

* terraform fmt

* More missing dependencies for python build

https://github.com/pyenv/pyenv/wiki#suggested-build-environment

* [jumpbox] simpler public dns name

* Update variables.tf

* Use HTTPS for nginx apps

* [web] typo in resource name

* [catalog] add bastion_host for harvester

* Fix CI

Pin terraform to 0.11

* Fix module name

We were pulling the module from GitHub master, which would have been the 2.x
series and is not compatible with terraform v0.11.

* Bump instance type for catalog

With all the services starting, they run out of memory during a catalog deploy
and hang.

* Add Jenkins role

* Add IAM instance profile to Jenkins

Allow Jenkins to query for EC2 inventory in order to run ansible playbooks in
the environment.

* Refactor vpc into a module

* Refactor jumpbox to proper module

* Remove provider config

Avoids "region required" error.

* [stateful] fix attachment when instance_count > 1

* [jumpbox] remove extra script

This was moved to modules/jumpbox

* Add ckan-cloud module

* Add missing validations

* Ignore AMI updates on jumpbox

Avoid destroy/recreate for new AMI images.

* Restrict egress

* [ckan-cloud] EKSFullAccess is a custom policy

* [ckan-cloud] fix nat gateway limit issue

We're hitting the 5 nat gateways per availability zone limit. Allow configuring
AZs, and single nat gateway creation in the vpc module to work around this.

* [ckan-cloud] update EKS custom policy

The EKS policy is a custom policy. Let's create it for each environment and
hopefully we can tailor the policy to restrict access to only that environment.

* [solr] refactor to terraform module

* [jenkins] refactor to terraform role

* [wordpress] move to terraform module

* [crm] refactor into terraform module

* [dashboard] refactor into terraform module

* [dashboard] update health check

* [catalog] refactor into terraform module

* [catalog] update health check url

* [inventory] refactor into terraform module

* [inventory] update health check url

* Default to Ubuntu Bionic 18.04

* make fmt

Add a fmt task to run `terraform fmt` on all modules.

* [stateful] fix ebs attachment destroy

Fixes GSA/datagov-infrastructure-modules#17

* [db] allow egress for database_port

* [solr] allow egress to solr

* Move egress rules to default security group

* Update main.tf

* Typo

Fixes type error with catalog security groups (list vs string).

* Pass ami_filter_name through to modules

* [inventory] add web_instance_type variable

Use t3.small, because CKAN dependency compilation requires quite a few
resources.

* [jenkins] security groups for Jenkins SSH

* [solr] add egress rule

With the default egress restrictions, allow solr consumers to egress to solr.

* [solr] remove tomcat port

We don't have any more older versions of solr relying on tomcat.

* Remove CRM resources

Removing all the actual resources that terraform would create. We don't want to
delete all the files, because we still need to run terraform to remove the
actual resources. Once that's done, we can remove the actual modules/files.

* Remove crm modules

* inventory-2-8 working modifications

* [inventory] set default for ansible_group

Avoid breaking change by specifying a default.

* Updates for terraform 0.12

* Add clean make target

Remove terraform files for a clean `terraform init`.

* Auto-update terraform to v0.12

Use `terraform 0.12upgrade` to automatically update the terraform files.

```
(
set -e
for module in $(find . -maxdepth 2 -type d -path './modules/*'); do
  pushd "$module"
  terraform init
  terraform 0.12upgrade -yes
  popd
done
)
```

* Update alb module for terraform v0.12

* [catalog] update for web module

* [vpc] bump vpc module for terraform v0.12

* Remove terragrunt modules

* [solr] update output to match others

* Fix TF-UPGRADE-TODOs

* [jenkins] fix dns record type

stateful.instance_public_ip is an array.

* Ignore AMI changes

When a new AMI is available, we don't want terraform to replace the instance.
This matches how the BSP environments work where we update-in-place.

* [db] security_group_ids variable

Allow security groups to be passed in via variable. This avoids a bootstrap
dependency where the default VPC security group does not exist until after the
VPC is created. Until then, any `data` sources looking for the security group
will fail, so you can't provision from scratch.

Instead, by using a variable, Terraform will be able to calculate that all the
databases depend on the VPC module and its default security group being created
first.

* [jumpbox] add security_groups variable

* [web] add loadbalancer_security_groups

* terraform fmt

* [web] add egress rules to lb

Allow load balancer outbound traffic to public subnets.

* Add ansible_group to all modules

Specify stack version (v1/v2) with each component.

* [db] add variable for db allocated storage

Bump default to 20GB

* Add redis to catalog/inventory

* [catalog] add db name and web name

Allows us to provision multiple versions of catalog without conflicting names
e.g. catalog-2-8

* [redis] add security group for redis

Move redis to a module and add security groups.

* add s3 bucket for inventory

* [catalog] ensure unique harvester names

* add sandbox in the s3 bucket name

* Work around python install

GSA/datagov-infrastructure-modules#32

* no default bucket name; to be provided by root module.

* [redis] add subnets and security groups

Required for VPC access.

* [redis] refactor security groups

Hitting the 5 security group limit on harvester. Need to refactor security
groups to reduce the number of required "access" security groups for harvester.
If this works out, we should do a similar refactor for database and solr.

* [inventory] typo specifying security groups

* name role more specific; run terraform fmt

* fix policy syntax error; avoid profile name conflict

* Refactor Ansible/SSH security groups

Instead of having a special SG that must be applied to all instances, modify the
"default" vpc-wide security group to allow for this access.

* [web] instances should be on public subnet

* Revert "[web] instances should be on public subnet"

This reverts commit 1ee2ea1.

Rather than move web instances to the public subnet, we'll allow the LB to talk
to the private subnet. web instances with LBs don't need to be on the public
subnet.

* [web] allow ALBs to talk to private subnet

* [redis] fix enable_redis

Only create redis when enable/enable_redis is true.

* [redis] add auth_token

Not sure why aws_elasticache_cluster does not support auth_token, so switching
to aws_elasticache_replication_group in non-cluster mode, which does.

* [redis] add auth_token as output

* Note on Ansible groups

* Use a single tag for Ansible group

As long as the tag is unique, it can easily be mapped to multiple groups within
the Ansible inventory.

* [inventory] pass instance profile to web module

Fix inventory starting up and able to use the IAM role for s3 access.

* Egress port for Redis

Since we can't use our <service>_access security group trick (like we do for
solr and db due to 5 sg limit per ec2 instance), we have to explicitly add the
egress rule to any security group we pass to the redis allow_security_groups
variable.

* [redis] variable for transit_encryption_enabled

Allow encryption in transit to be disabled for testing.

* Add web instance to web security group

* add lb to ci

* [stateful] fix fstab on pre-existing EBS volume

* add aws_lb_target_group_attachment to jenkins

* add fgdc2iso

* no need for port 80 for fgdc2iso

* Revert "no need for port 80 for fgdc2iso"

This reverts commit 3be1c98.

* update docs for inventory-next

* Update catalog storage size

* bump ci

* Revert "update docs for inventory-next"

* Update variables.tf

* Update variables.tf

* Update variables.tf

* [web] redirect HTTP -> HTTPS

* Specify provider requirements

Instead of declaring a provider, specify the provider requirements. Works around
issue with aws provider v3.x and the alb resource[1].

[1]: GSA/data.gov#2032

* Add security group to inventory web

* Adding comma

* [jenkins] add name variable

Allows for uniquely identifying separate Jenkins instances within a single
environment, so you can run multiple, individual Jenkins instances side by
side.

* make fmt

* change lb backend to https & 443

* Revert "change lb backend to https & 443"

* change lb backend to https & 443

* [jenkins] rename security group identifier

This avoids the DependencyViolation error, where the SG needs to be recreated,
but it is still attached to an EC2 instance. Rename the SG identifier to trigger
terraform to remove the SG.

* fixup merge conflict

* terraform fmt

* Update source references for relative modules

* Update CI workflow for module tasks

* [db] avoid downgrades

Ignore the db version, since AWS will automatically upgrade minor versions
during maintenance windows.

* Include jumpbox in make fmt

Co-authored-by: Bret Mogilefsky <bret.mogilefsky@gsa.gov>
Co-authored-by: James Brown <james.c.brown@gsa.gov>
Co-authored-by: jbrown-xentity <jbrown@xentity.com>
Co-authored-by: Fuhu Xia <fxia@reisystems.com>
Co-authored-by: Tom Wood <tom.wood@civicactions.com>
Co-authored-by: Preston Sharpe <psharpe@xentity.com>
Co-authored-by: Chris MacDermaid <64213093+chris-macdermaid@users.noreply.github.com>
mogul added a commit to GSA/datagov-infrastructure-live that referenced this issue Mar 9, 2021
* Use ec2 instances instead of ASGs

Auto-scaling groups are unavailable in BSP, so by using ec2 instances, we can
manage our test environment similar to how we manage BSP.

* Example of ansible dynamic inventory

* [jumpbox] filter ami by environment

* Note about terraform fmt

* [jumpbox] add group tag for ansible inventory

* Use vpc-style security groups

Fixes issue of security groups disappearing, causing instances to be re-created on
every run.

* [jumpbox] update hostname

* Don't check for required variables in tests

The variables would be test variables anyway. This creates less work for
ourselves.

* Move jumpbox to its own module

* Add postgresdb module

A standard way to create postgresql databases

* Add simple provision script for jumpbox

* web modules for lb + ec2 hosts

* Solr module

* Catalog module

* catalog database

* [vpc] output availability zones as azs

* [catalog] add pycsw database

* [inventory] add inventory web instances

* Update README

* Make names unique for databases

* Make lb target groups unique

* [vpc] create an internal DNS zone for VPC

* [vpc] add public and private dns zones

* [solr] add internal dns names

* [jumpbox] add public and private dns names

* [stateless] create stateless ec2 instances

* [web] add public and private dns

* [catalog] add public and private dns names

* [inventory] add public and private dns

* Add solr security groups

* Use single RDS for catalog and inventory

* [jumpbox] fix provisioning

- Specify connection for provisioner.
- Use sudo for provision script which is executed as ubuntu.
- Update hostname for public and private dns. Environment is redundant.

* [app] remove the app module

* [catalog] allow http/https egress on harvester

* [jumpbox] unique name for jumpbox policy

* [catalog] add outputs

* [inventory] add outputs

* [web] missing target group attachments

* [jumpbox] add build-essential

Needed for building ansible dependencies.

* [jumpbox] provision with git

* [jumpbox] add zlib for building python in pyenv

* Add crm

* dashboard

* Outputs for database

* wordpress

* More outputs

* Use variable instead of hardcoded region

* Add provisioners

* terraform fmt

* More missing dependencies for python build

https://github.com/pyenv/pyenv/wiki#suggested-build-environment

* [jumpbox] simpler public dns name

* Update variables.tf

* Use HTTPS for nginx apps

* [web] typo in resource name

* [catalog] add bastion_host for harvester

* Fix CI

Pin terraform to 0.11

* Fix module name

We were pulling the module from GitHub master, which would have been the 2.x
series and is not compatible with Terraform v0.11.

* Bump instance type for catalog

With all the services starting, the instances run out of memory during a
catalog deploy and hang.

* Add Jenkins role

* Add IAM instance profile to Jenkins

Allow Jenkins to query for EC2 inventory in order to run ansible playbooks in
the environment.

* Refactor vpc into a module

* Refactor jumpbox to proper module

* Remove provider config

Avoids "region required" error.

* [stateful] fix attachment when instance_count > 1

* [jumpbox] remove extra script

This was moved to modules/jumpbox

* Add ckan-cloud module

* Add missing validations

* Ignore AMI updates on jumpbox

Avoid destroy/recreate for new AMI images.

* Restrict egress

* [ckan-cloud] EKSFullAccess is a custom policy

* [ckan-cloud] fix nat gateway limit issue

We're hitting the 5 nat gateways per availability zone limit. Allow configuring
AZs, and single nat gateway creation in the vpc module to work around this.
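The workaround described above might look roughly like this in the vpc module wiring (a sketch only; variable names are illustrative, though `azs` and `single_nat_gateway` are real inputs on the terraform-aws-modules/vpc module):

```
# Expose the AZ list and a single-NAT flag so environments near the
# 5-NAT-gateways-per-AZ quota can opt in to one shared gateway.
variable "azs" {
  description = "Availability zones to use"
  type        = list(string)
}

variable "single_nat_gateway" {
  description = "Create one shared NAT gateway instead of one per AZ"
  type        = bool
  default     = false
}

module "vpc" {
  source             = "terraform-aws-modules/vpc/aws"
  azs                = var.azs
  single_nat_gateway = var.single_nat_gateway
  # ...other arguments elided...
}
```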

* [ckan-cloud] update EKS custom policy

The EKS policy is a custom policy. Let's create it for each environment and
hopefully we can tailor the policy to restrict access to only that environment.

* [solr] refactor to terraform module

* [jenkins] refactor to terraform role

* [wordpress] move to terraform module

* [crm] refactor into terraform module

* [dashboard] refactor into terraform module

* [dashboard] update health check

* [catalog] refactor into terraform module

* [catalog] update health check url

* [inventory] refactor into terraform module

* [inventory] update health check url

* Default to Ubuntu Bionic 18.04

* make fmt

Add a fmt task to run `terraform fmt` on all modules.

* [stateful] fix ebs attachment destroy

Fixes GSA/datagov-infrastructure-modules#17

* [db] allow egress for database_port

* [solr] allow egress to solr

* Move egress rules to default security group

* Update main.tf

* Typo

Fixes type error with catalog security groups (list vs string).

* Pass ami_filter_name through to modules

* [inventory] add web_instance_type variable

Use t3.small, because CKAN dependency compilation requires quite a few
resources.

* [jenkins] security groups for Jenkins SSH

* [solr] add egress rule

With the default egress restrictions, allow solr consumers to egress to solr.

* [solr] remove tomcat port

We no longer have older versions of Solr that rely on Tomcat.

* Remove CRM resources

Remove all the resources that Terraform would create. We don't want to delete
the files yet, because we still need to run Terraform to destroy the resources.
Once that's done, we can remove the modules/files themselves.

* Remove crm modules

* inventory-2-8 working modifications

* [inventory] set default for ansible_group

Avoid breaking change by specifying a default.

* Updates for terraform 0.12

* Add clean make target

Remove terraform files for a clean `terraform init`.

* Auto-update terraform to v0.12

Use `terraform 0.12upgrade` to automatically update the terraform files.

```
(
set -e
for module in $(find . -maxdepth 2 -type d -path './modules/*'); do
  pushd "$module"
  terraform init
  terraform 0.12upgrade -yes
  popd
done
)
```

* Update alb module for terraform v0.12

* [catalog] update for web module

* [vpc] bump vpc module for terraform v0.12

* Remove terragrunt modules

* [solr] update output to match others

* Fix TF-UPGRADE-TODOs

* [jenkins] fix dns record type

stateful.instance_public_ip is an array.

* Ignore AMI changes

When a new AMI is available, we don't want terraform to replace the instance.
This matches how the BSP environments work where we update-in-place.
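The ignore rule described above boils down to a `lifecycle` block; a minimal sketch, assuming the AMI comes from a data source that always resolves to the newest image (resource names are illustrative):

```
# Ignoring changes to `ami` keeps Terraform from replacing the instance
# when a newer AMI is published; the instance is then updated in place,
# matching the BSP workflow.
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.small"

  lifecycle {
    ignore_changes = [ami]
  }
}
```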

* [db] security_group_ids variable

Allow security groups to be passed in via variable. This avoids a bootstrap
dependency where the default VPC security group does not exist until after the
VPC is created. Until then, any `data` sources looking up the security group
will fail, so you can't provision from scratch.

Instead, by using a variable, Terraform will be able to calculate that all the
databases depend on the VPC module and its default security group being created
first.
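A sketch of the pattern (names are illustrative): the db module takes the security group ids as an input instead of looking them up with a data source, so Terraform can order creation correctly on a fresh account.

```
variable "security_group_ids" {
  description = "Security groups to attach to the database"
  type        = list(string)
}

resource "aws_db_instance" "db" {
  # ...other arguments elided...
  vpc_security_group_ids = var.security_group_ids
}

# The root module wires the VPC output straight through:
# module "db" {
#   source             = "./modules/db"
#   security_group_ids = [module.vpc.default_security_group_id]
# }
```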

* [jumpbox] add security_groups variable

* [web] add loadbalancer_security_groups

* terraform fmt

* [web] add egress rules to lb

Allow load balancer outbound traffic to public subnets.

* Add ansible_group to all modules

Specify stack version (v1/v2) with each component.

* [db] add variable for db allocated storage

Bump default to 20GB
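Roughly, the new input would look like this (the variable name is illustrative; the actual one in the module may differ):

```
# Allocated storage is now configurable per database, with a 20 GB default.
variable "db_allocated_storage" {
  description = "Allocated storage for the RDS instance, in GB"
  type        = number
  default     = 20
}
```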

* Add redis to catalog/inventory

* [catalog] add db name and web name

Allows us to provision multiple versions of catalog without conflicting names
e.g. catalog-2-8

* [redis] add security group for redis

Move redis to a module and add security groups.

* add s3 bucket for inventory

* [catalog] ensure unique harvester names

* add sandbox in the s3 bucket name

* Work around python install

GSA/datagov-infrastructure-modules#32
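The actual workaround isn't shown in this log; as a hypothetical sketch only, a common approach for issue #32 is to retry the install from the `remote-exec` provisioner, since a first apt run right after boot can race cloud-init:

```
# Hypothetical sketch -- not the confirmed fix from the referenced commit.
resource "null_resource" "provision" {
  provisioner "remote-exec" {
    inline = [
      # Retry until apt succeeds, in case cloud-init still holds the apt lock.
      "until sudo apt-get update && sudo apt-get install -y python; do sleep 5; done",
    ]
  }
}
```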

* no default bucket name; to be provided by root module.

* [redis] add subnets and security groups

Required for VPC access.

* [redis] refactor security groups

Hitting the 5 security group limit on harvester. Need to refactor security
groups to reduce the number of required "access" security groups for harvester.
If this works out, we should do a similar refactor for database and solr.

* [inventory] typo specifying security groups

* name role more specific; run terraform fmt

* fix policy syntax error; avoid profile name conflict

* Refactor Ansible/SSH security groups

Instead of having a special SG that must be applied to all instances, modify the
"default" vpc-wide security group to allow for this access.

* [web] instances should be on public subnet

* Revert "[web] instances should be on public subnet"

This reverts commit 1ee2ea1.

Rather than move web instances to the public subnet, we'll allow the LB to talk
to the private subnet. web instances with LBs don't need to be on the public
subnet.

* [web] allow ALBs to talk to private subnet

* [redis] fix enable_redis

Only create redis when enable/enable_redis is true.

* [redis] add auth_token

Not sure why aws_elasticache_cluster does not support auth_token; switching to
aws_elasticache_replication_group in non-cluster mode, which does.
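A sketch of the switch (argument names are from the AWS provider of that era; values are illustrative). Note that `auth_token` requires in-transit encryption:

```
resource "aws_elasticache_replication_group" "redis" {
  replication_group_id          = "catalog-redis"
  replication_group_description = "Redis for catalog"
  engine                        = "redis"
  node_type                     = "cache.t3.micro"
  number_cache_clusters         = 1 # non-cluster mode
  transit_encryption_enabled    = true
  auth_token                    = var.redis_auth_token
}
```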

* [redis] add auth_token as output

* Note on Ansible groups

* Use a single tag for Ansible group

As long as the tag is unique, it can easily be mapped to multiple groups within
the Ansible inventory.
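On the Terraform side this is just one tag per instance (tag value illustrative); the Ansible dynamic inventory can then fan the unique value out into multiple groups:

```
resource "aws_instance" "web" {
  # ...other arguments elided...
  tags = {
    Name          = "catalog-web1"
    ansible_group = "catalog_v2"
  }
}
```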

* [inventory] pass instance profile to web module

Fixes inventory startup so it can use the IAM role for S3 access.

* Egress port for Redis

Since we can't use our <service>_access security group trick (like we do for
solr and db due to 5 sg limit per ec2 instance), we have to explicitly add the
egress rule to any security group we pass to the redis allow_security_groups
variable.
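The explicit egress rule looks roughly like this (port and names are illustrative), attached to each group passed in via `allow_security_groups`:

```
# Allow a Redis consumer's security group to egress to the Redis group.
resource "aws_security_group_rule" "redis_egress" {
  type                     = "egress"
  from_port                = 6379
  to_port                  = 6379
  protocol                 = "tcp"
  security_group_id        = var.consumer_security_group_id
  source_security_group_id = aws_security_group.redis.id
}
```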

* [redis] variable for transit_encryption_enabled

Allow encryption in transit to be disabled for testing.

* Add web instance to web security group

* add lb to ci

* [stateful] fix fstab on pre-existing EBS volume

* add aws_lb_target_group_attachment to jenkins

* add fgdc2iso

* no need for port 80 for fgdc2iso

* Revert "no need for port 80 for fgdc2iso"

This reverts commit 3be1c98.

* update docs for inventory-next

* Update catalog storage size

* bump ci

* Revert "update docs for inventory-next"

* Update variables.tf

* Update variables.tf

* Update variables.tf

* [web] redirect HTTP -> HTTPS

* Specify provider requirements

Instead of declaring a provider, specify the provider requirements. Works around
issue with aws provider v3.x and the alb resource[1].

[1]: GSA/data.gov#2032
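A sketch of the change: pin the provider requirement in the module instead of declaring a `provider` block, which lets the root module own the provider configuration while keeping the module off the incompatible 3.x series (the exact constraint is illustrative):

```
terraform {
  required_providers {
    aws = "< 3.0"
  }
}
```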

* Add security group to inventory web

* Adding comma

* [jenkins] add name variable

Allows uniquely identifying separate Jenkins instances, so you can create
multiple, individual Jenkins instances within a single environment.

* make fmt

* change lb backend to https & 443

* Revert "change lb backend to https & 443"

* change lb backend to https & 443

* [jenkins] rename security group identifier

This avoids the DependencyViolation error, where the SG needs to be recreated,
but it is still attached to an EC2 instance. Rename the SG identifier to trigger
terraform to remove the SG.

* fixup merge conflict

* terraform fmt

* Update source references for relative modules

* Update CI workflow for module tasks

* [db] avoid downgrades

Ignore the db version, since AWS will automatically upgrade minor versions
during maintenance windows.

* Include jumpbox in make fmt

* Use third-party actions for terraform workflow

Co-authored-by: Bret Mogilefsky <bret.mogilefsky@gsa.gov>
Co-authored-by: James Brown <james.c.brown@gsa.gov>
Co-authored-by: jbrown-xentity <jbrown@xentity.com>
Co-authored-by: Fuhu Xia <fxia@reisystems.com>
Co-authored-by: Tom Wood <tom.wood@civicactions.com>
Co-authored-by: Preston Sharpe <psharpe@xentity.com>
Co-authored-by: Chris MacDermaid <64213093+chris-macdermaid@users.noreply.github.com>