This repository has been archived by the owner on Mar 16, 2021. It is now read-only.
Python is not installed correctly: E: Package 'python' has no installation candidate #32
Comments
adborden added two commits to GSA/data.gov that referenced this issue on Apr 29, 2020, and a third on May 1, 2020, each carrying the same note:

"We made a partial fix for #32 by using python3 instead of python, so on bionic hosts where python is not installed by default, we need to explicitly state python3 for Ansible. GSA/datagov-infrastructure-modules#32"
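The partial fix described in those commits amounts to pointing Ansible at the Python 3 interpreter on bionic hosts. A minimal sketch of how that can be expressed in a group_vars file (the group name `web` is hypothetical, not taken from this thread):

```yaml
# group_vars/web.yml (hypothetical group name, for illustration only)
# Ubuntu bionic hosts may ship without /usr/bin/python, so tell
# Ansible to use the python3 interpreter explicitly.
ansible_python_interpreter: /usr/bin/python3
```

The same variable can also be set per host in the inventory or passed with `-e` on the command line; any of these avoids Ansible probing for a missing `/usr/bin/python`.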
adborden added a commit that referenced this issue on May 6, 2020.
mogul added a commit to GSA/datagov-infrastructure-live that referenced this issue on Mar 9, 2021:
* Update config.yml
* Use ec2 instances instead of ASGs. Auto-scaling groups are unavailable in BSP, so by using ec2 instances, we can manage our test environment similar to how we manage BSP.
* Example of ansible dynamic inventory
* [jumpbox] filter ami by environment
* Note about terraform fmt
* [jumpbox] add group tag for ansible inventory
* Use vpc-style security groups. Fixes issue of security groups disappearing, causing instances to be re-created on every run.
* [jumpbox] update hostname
* Don't check for required variables in tests. The variables would be test variables anyway; this creates less work for ourselves.
* Move jumpbox to its own module
* Add postgresdb module: a standard way to create postgresql databases
* Add simple provision script for jumpbox
* web modules for lb + ec2 hosts
* Solr module
* Catalog module
* catalog database
* [vpc] output availability zones as azs
* [catalog] add pycsw database
* [inventory] add inventory web instances
* Update README
* Make names unique for databases
* Make lb target groups unique
* [vpc] create an internal DNS zone for VPC
* [vpc] add public and private dns zones
* [solr] add internal dns names
* [jumpbox] add public and private dns names
* [stateless] create stateless ec2 instances
* [web] add public and private dns
* [catalog] add public and private dns names
* [inventory] add public and private dns
* Add solr security groups
* Use single RDS for catalog and inventory
* [jumpbox] fix provisioning: specify connection for provisioner; use sudo for provision script, which is executed as ubuntu; update hostname for public and private dns (environment is redundant).
* [app] remove the app module
* [catalog] allow http/https egress on harvester
* [jumpbox] unique name for jumpbox policy
* [catalog] add outputs
* [inventory] add outputs
* [web] missing target group attachments
* [jumpbox] add build-essential, needed for building ansible dependencies
* [jumpbox] provision with git
* [jumpbox] add zlib for building python in pyenv
* Add crm
* dashboard
* Outputs for database
* wordpress
* More outputs
* Use variable instead of hardcoded region
* Add provisioners
* terraform fmt
* More missing dependencies for python build: https://github.com/pyenv/pyenv/wiki#suggested-build-environment
* [jumpbox] simpler public dns name
* Update variables.tf
* Use HTTPS for nginx apps
* [web] typo in resource name
* [catalog] add bastion_host for harvester
* Fix CI: pin terraform to 0.11
* Fix module name. We were pulling the module from github master, which would have been the 2.x series, which is not compatible with terraform v0.11.
* Bump instance type for catalog. With all the services starting, they run out of memory during a catalog deploy and hang.
* Add Jenkins role
* Add IAM instance profile to Jenkins, allowing Jenkins to query for EC2 inventory in order to run ansible playbooks in the environment.
* Refactor vpc into a module
* Refactor jumpbox to proper module
* Remove provider config. Avoids "region required" error.
* [stateful] fix attachment when instance_count > 1
* [jumpbox] remove extra script. This was moved to modules/jumpbox.
* Add ckan-cloud module
* Add missing validations
* Ignore AMI updates on jumpbox. Avoid destroy/recreate for new AMI images.
* Restrict egress
* [ckan-cloud] EKSFullAccess is a custom policy
* [ckan-cloud] fix nat gateway limit issue. We're hitting the 5 nat gateways per availability zone limit. Allow configuring AZs, and single nat gateway creation in the vpc module, to work around this.
* [ckan-cloud] update EKS custom policy. The EKS policy is a custom policy; let's create it for each environment and hopefully we can tailor the policy to restrict access to only that environment.
* [solr] refactor to terraform module
* [jenkins] refactor to terraform role
* [wordpress] move to terraform module
* [crm] refactor into terraform module
* [dashboard] refactor into terraform module
* [dashboard] update health check
* [catalog] refactor into terraform module
* [catalog] update health check url
* [inventory] refactor into terraform module
* [inventory] update health check url
* Default to Ubuntu Bionic 18.04
* make fmt: add a fmt task to run `terraform fmt` on all modules
* [stateful] fix ebs attachment destroy. Fixes GSA/datagov-infrastructure-modules#17.
* [db] allow egress for database_port
* [solr] allow egress to solr
* Move egress rules to default security group
* Update main.tf
* Typo: fixes type error with catalog security groups (list vs string)
* Pass ami_filter_name through to modules
* [inventory] add web_instance_type variable. Use t3.small, because CKAN dependency compilation requires quite a few resources.
* [jenkins] security groups for Jenkins SSH
* [solr] add egress rule. With the default egress restrictions, allow solr consumers to egress to solr.
* [solr] remove tomcat port. We don't have any more older versions of solr relying on tomcat.
* Remove CRM resources. Removing all the actual resources that terraform would create. We don't want to delete all the files, because we still need to run terraform to remove the actual resources; once that's done, we can remove the actual modules/files.
* Remove crm modules
* inventory-2-8 working modifications
* [inventory] set default for ansible_group. Avoid a breaking change by specifying a default.
* Updates for terraform 0.12
* Add clean make target. Remove terraform files for a clean `terraform init`.
* Auto-update terraform to v0.12. Use `terraform 0.12upgrade` to automatically update the terraform files:

  ```
  (
    set -e
    for module in $(find . -maxdepth 2 -type d -path './modules/*'); do
      pushd $module
      terraform init
      terraform 0.12upgrade -yes
      popd
    done
  )
  ```

* Update alb module for terraform v0.12
* [catalog] update for web module
* [vpc] bump vpc module for terraform v0.12
* Remove terragrunt modules
* [solr] update output to match others
* Fix TF-UPGRADE-TODOs
* [jenkins] fix dns record type. stateful.instance_public_ip is an array.
* Ignore AMI changes. When a new AMI is available, we don't want terraform to replace the instance. This matches how the BSP environments work, where we update in place.
* [db] security_group_ids variable. Allow security groups to be passed in via variable. This avoids a bootstrap dependency where the default VPC security group does not exist until after the VPC is created; until then, any `data` sources looking for the security group will fail, so if you're provisioning from nothing, you won't be able to. Instead, by using a variable, Terraform can calculate that all the databases depend on the VPC module and its default security group being created first.
* [jumpbox] add security_groups variable
* [web] add loadbalancer_security_groups
* terraform fmt
* [web] add egress rules to lb. Allow load balancer outbound traffic to public subnets.
* Add ansible_group to all modules. Specify stack version (v1/v2) with each component.
* [db] add variable for db allocated storage. Bump default to 20GB.
* Add redis to catalog/inventory
* [catalog] add db name and web name. Allows us to provision multiple versions of catalog without conflicting names, e.g. catalog-2-8.
* [redis] add security group for redis. Move redis to a module and add security groups.
* add s3 bucket for inventory
* [catalog] ensure unique harvester names
* add sandbox in the s3 bucket name
* Work around python install. GSA/datagov-infrastructure-modules#32
* no default bucket name; to be provided by root module
* [redis] add subnets and security groups. Required for VPC access.
* [redis] refactor security groups. Hitting the 5 security group limit on harvester; need to refactor security groups to reduce the number of required "access" security groups for harvester. If this works out, we should do a similar refactor for database and solr.
* [inventory] typo specifying security groups
* name role more specific; run terraform fmt
* fix policy syntax error; avoid profile name conflict
* Refactor Ansible/SSH security groups. Instead of having a special SG that must be applied to all instances, modify the "default" vpc-wide security group to allow for this access.
* [web] instances should be on public subnet
* Revert "[web] instances should be on public subnet". This reverts commit 1ee2ea1. Rather than move web instances to the public subnet, we'll allow the LB to talk to the private subnet; web instances with LBs don't need to be on the public subnet.
* [web] allow ALBs to talk to private subnet
* [redis] fix enable_redis. Only create redis when enable/enable_redis is true.
* [redis] add auth_token. Not sure why aws_elasticache_cluster does not support auth_token, so switching to aws_replication_group in non-cluster mode, which does.
* [redis] add auth_token as output
* Note on Ansible groups
* Use a single tag for Ansible group. As long as the tag is unique, it can easily be mapped to multiple groups within the Ansible inventory.
* [inventory] pass instance profile to web module. Fixes inventory starting up and being able to use the IAM role for s3 access.
* Egress port for Redis. Since we can't use our <service>_access security group trick (like we do for solr and db, due to the 5 sg limit per ec2 instance), we have to explicitly add the egress rule to any security group we pass to the redis allow_security_groups variable.
* [redis] variable for transit_encryption_enabled. Allow encryption in transit to be disabled for testing.
* Add web instance to web security group
* add lb to ci
* [stateful] fix fstab on pre-existing EBS volume
* add aws_lb_target_group_attachment to jenkins
* add fgdc2iso
* no need for port 80 for fgdc2iso
* Revert "no need for port 80 for fgdc2iso". This reverts commit 3be1c98.
* update docs for inventory-next
* Update catalog storage size
* bump ci
* Revert "update docs for inventory-next"
* Update variables.tf
* Update variables.tf
* Update variables.tf
* [web] redirect HTTP -> HTTPS
* Specify provider requirements. Instead of declaring a provider, specify the provider requirements. Works around an issue with aws provider v3.x and the alb resource[1]. [1]: GSA/data.gov#2032
* Add security group to inventory web
* Adding comma
* [jenkins] add name variable. Allows for uniquely identifying separate jenkins instances within a single environment, so you can create multiple, individual jenkins instances in an environment.
* make fmt
* change lb backend to https & 443
* Revert "change lb backend to https & 443"
* change lb backend to https & 443
* [jenkins] rename security group identifier. This avoids the DependencyViolation error, where the SG needs to be recreated but is still attached to an EC2 instance; renaming the SG identifier triggers terraform to remove the SG.
* fixup merge conflict
* terraform fmt
* Update source references for relative modules
* Update CI workflow for module tasks
* [db] avoid downgrades. Ignore the db version, since AWS will automatically upgrade minor versions during maintenance windows.
* Include jumpbox in make fmt

Co-authored-by: Bret Mogilefsky <bret.mogilefsky@gsa.gov>, James Brown <james.c.brown@gsa.gov>, jbrown-xentity <jbrown@xentity.com>, Fuhu Xia <fxia@reisystems.com>, Tom Wood <tom.wood@civicactions.com>, Preston Sharpe <psharpe@xentity.com>, Chris MacDermaid <64213093+chris-macdermaid@users.noreply.github.com>
mogul added a second commit to GSA/datagov-infrastructure-live that referenced this issue on Mar 9, 2021, with a near-identical change list (dropping "Update config.yml" and adding "Use third-party actions for terraform workflow").
Our provisioner fails to install python, with apt complaining "E: Package 'python' has no installation candidate". This happens right after an `apt-get update`, so it is very strange. If you ssh onto the host and then run `sudo apt-get update && sudo apt-get install -y python`, it installs correctly.
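Since most Ansible modules need a Python interpreter on the target before they can run, a related workaround (a common pattern, not necessarily the fix this thread settled on) is to bootstrap the interpreter with Ansible's `raw` module, which only needs SSH. A hedged sketch; the playbook name is hypothetical:

```yaml
# bootstrap.yml (hypothetical playbook, not from this repo)
- hosts: all
  gather_facts: false        # fact gathering itself requires Python on the host
  become: true
  tasks:
    - name: Ensure python3 is present before any other module runs
      raw: apt-get update && apt-get install -y python3
```

Running this play first leaves the host able to execute regular modules with `ansible_python_interpreter` pointed at `/usr/bin/python3`.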