
Add environment variables to guest with simple vagrantfile command #7015

Closed · poshest opened this issue Feb 11, 2016 · 6 comments

poshest commented Feb 11, 2016

I want to put something simple in the vagrantfile, like

config.vm.env.MY_FIRST_ENV_VAR = "first_env_var_value"
config.vm.env.MY_SECOND_ENV_VAR = "second_env_var_value"
...

which would create environment variables on the guest machine, based on their current values in the Vagrantfile, upon vagrant up rather than during provisioning.

There are provisioning hacks like this, but I can't get them working on my Windows 7 (host) / precise32 (guest) setup, maybe because of this? And I use PuTTY to SSH in, not vagrant ssh, so I don't think config.ssh.forward_env is an option in my case either.

This seems to be a very common use case for anyone building a 12-factor app. Is there a chance you could provide this functionality?

Or if there's a better pattern to achieve this, please explain! :)

sethvargo (Contributor) commented

Hi @poshest

There are a number of ways you can set environment variables on the guest. First, you can use a shell provisioner with an env hash. You can also use the shell or file provisioners to write a file into /etc/profile.d with the envvars.
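
For illustration, a minimal sketch of both approaches (the variable name and value are made up):

# Approach 1: pass variables to the provisioning script via the env hash.
# They are visible to this script only, not to later SSH sessions.
config.vm.provision "shell",
  env: { "MY_FIRST_ENV_VAR" => "first_env_var_value" },
  inline: "echo \"during provisioning: $MY_FIRST_ENV_VAR\""

# Approach 2: write the variables into /etc/profile.d so that every
# new login shell picks them up.
config.vm.provision "shell",
  inline: "echo 'export MY_FIRST_ENV_VAR=\"first_env_var_value\"' > /etc/profile.d/myvars.sh"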

poshest (Author) commented Feb 12, 2016

Hey @sethvargo, thanks for your help!

The "env hash" documententation says "List of key-value pairs to pass in as environment variables to the script..." I understand that this will not add environment variables to every new session upon vagrant up, but rather simply pass env vars "to the script" for use only while it's provisioning. Please correct me if I'm wrong.

Also, the second method you mentioned was similar to the hack I already linked in my question.

In the meantime, I've abandoned using env variables as config altogether, for these reasons:

  • sudo can't access the user's environment variables without special settings (see the sketch after this list).
  • you have to double-escape the kind of JSON strings I wanted to put into the env variables, which makes the config horrible to read.
  • this method only works during provisioning, requiring an extra step of vagrant provision every time I want to change the variables. What I really want is: 1) change the Vagrantfile, 2) vagrant reload, 3) vagrant ssh (or the PuTTY equivalent, as I use that).
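
To illustrate the sudo point, a quick sketch (MY_VAR is a made-up name). sudo resets the environment by default, so a variable exported in the user's shell disappears under sudo unless you opt in:

$ export MY_VAR=hello
$ sudo sh -c 'echo "$MY_VAR"'    # prints an empty line: sudo resets the environment

$ sudo -E sh -c 'echo "$MY_VAR"' # -E asks sudo to preserve the environment (if policy allows)
hello

# Or whitelist the variable permanently in /etc/sudoers (edit via visudo):
# Defaults env_keep += "MY_VAR"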

If Vagrant added the functionality I'm calling for above, and in addition remedied the first two points, I think it would be welcomed by many developers.

starikovs commented Jul 24, 2016

Hey @poshest, I've just tried to do the same, and based on this SO question I came up with this solution:

config.vm.provision "shell", inline: "> /etc/profile.d/myvars.sh", run: "always"

config.vm.provision "shell", inline: "echo \"export http_proxy=http://proxy.somedomain.com:3128\" >> /etc/profile.d/myvars.sh", run: "always"

config.vm.provision "shell", inline: "echo \"export https_proxy=https://proxy.somedomain.com:3128\" >> /etc/profile.d/myvars.sh", run: "always"

The /etc/profile.d/*.sh scripts are run at the startup of the bash shell: the first line empties myvars.sh, the second sets the first variable, and the third sets the second variable.

Because of run: "always", you can add/remove variables in the Vagrantfile at any time :)
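
A note on the workflow this enables: provisioners marked run: "always" re-run on every vagrant up/reload, so the edit cycle (assuming the snippet above) is just:

# edit the Vagrantfile, then:
$ vagrant reload    # the "always" provisioners re-run, rewriting myvars.sh
$ vagrant ssh
vagrant@guest:~$ echo $http_proxy
http://proxy.somedomain.com:3128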

kitforbes commented

I took @starikovs's solution and (in my mind) simplified it a little. This example does the same but with fewer steps. It also shows how to read environment variables from the host and pass them to the guest.

$set_environment_variables = <<SCRIPT
tee "/etc/profile.d/myvars.sh" > "/dev/null" <<EOF
# Ansible environment variables.
export ANSIBLE_STDOUT_CALLBACK=debug

# AWS environment variables.
export AWS_DEFAULT_REGION=#{ENV['AWS_DEFAULT_REGION']}
export AWS_ACCESS_KEY_ID=#{ENV['AWS_ACCESS_KEY_ID']}
export AWS_SECRET_ACCESS_KEY=#{ENV['AWS_SECRET_ACCESS_KEY']}
EOF
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider "hyperv" do |machine|
    machine.vmname = "centos7"
  end

  config.vm.provision "shell", inline: $set_environment_variables, run: "always"

  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.verbose = true
    ansible.install_mode = "pip"
    ansible.version = "2.2.1.0"
  end
end
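
As a usage example, assuming the AWS variables are exported on the host before bringing the machine up (the values below are placeholders):

# On the host:
$ export AWS_DEFAULT_REGION=us-east-1
$ export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
$ export AWS_SECRET_ACCESS_KEY=secretexample
$ vagrant up

# Then in a login shell on the guest:
$ vagrant ssh
[vagrant@centos7 ~]$ echo $AWS_DEFAULT_REGION
us-east-1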

vgorloff commented

Vagrant.configure("2") do |config|
   config.vm.box = "ubuntu/bionic64"
   config.vm.provision :shell, inline: "echo 'source /vagrant/Scripts/bootstrap.sh' > /etc/profile.d/sa-environment.sh", :run => 'always'

   config.vm.provider "virtualbox" do |vb|
      vb.memory = "2560" # Customize the amount of memory on the VM
   end
end

where the file Scripts/bootstrap.sh (on the host) is:

## Sources
export SA_SOURCES_ROOT=/vagrant/Sources

export SA_SOURCES_ANDK=$SA_SOURCES_ROOT/android-ndk-r18b
export SA_SOURCES_ICU=$SA_SOURCES_ROOT/icu
export SA_SOURCES_SWIFT=$SA_SOURCES_ROOT/swift

## Build
export SA_BUILD_ROOT=/vagrant/Build

export SA_BUILD_ROOT_ANDK=$SA_BUILD_ROOT/android-ndk
export SA_BUILD_ROOT_ICU=$SA_BUILD_ROOT/icu
export SA_BUILD_ROOT_SWIFT=$SA_BUILD_ROOT/swift

## Path
export PATH=$PATH:$SA_SOURCES_ANDK

As a result:

vagrant up
vagrant ssh

$ more /etc/profile.d/sa-environment.sh
# Outputs: source /vagrant/Scripts/bootstrap.sh

$ env | sort
# Outputs:
SA_BUILD_ROOT_ANDK=/vagrant/Build/android-ndk
SA_BUILD_ROOT_ICU=/vagrant/Build/icu
SA_BUILD_ROOT_SWIFT=/vagrant/Build/swift
SA_BUILD_ROOT=/vagrant/Build
SA_SOURCES_ANDK=/vagrant/Sources/android-ndk-r18b
SA_SOURCES_ICU=/vagrant/Sources/icu
SA_SOURCES_ROOT=/vagrant/Sources
SA_SOURCES_SWIFT=/vagrant/Sources/swift


ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Mar 28, 2020