(cloud): the role 'pkgrepo' was not found in magma/orc8r/cloud/roles #14

Closed
chrisburkegg opened this issue Feb 27, 2019 · 2 comments

chrisburkegg commented Feb 27, 2019

Environment (let me know if anything else is needed):

OS X 10.14.2
$ vagrant version
Installed Version: 2.2.3
Latest Version: 2.2.3
$ ansible --version
ansible 2.7.8

Same as #13

To reproduce:

  1. clone the repository
  2. follow the setup instructions; then, in terminal window 2:
  3. cd orc8r/cloud
  4. vagrant up datastore (await completion)
  5. vagrant up cloud

Output:

[... truncated vagrant output ...]
==> cloud: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.7.8).

Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode

    cloud: Running ansible-playbook...
 [WARNING]: Found both group and host with same name: magma-cloud-dev

ERROR! the role 'pkgrepo' was not found in /Users/christopherburke/magma/orc8r/cloud/deploy/roles:/Users/christopherburke/roles:/Users/christopherburke/magma/orc8r/cloud/deploy

The error appears to have been in '/Users/christopherburke/magma/orc8r/cloud/deploy/cloud.dev.yml': line 42, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

    - { role: apt_package_cache, when: ansible_user == 'vagrant'}
    - { role: pkgrepo, vars: { distribution: "xenial" } }
      ^ here

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Note:

$ ls deploy/roles
apt_package_cache dev               golang            modules           pkg_repo          proxy
aws_setup         disk_metrics      kafka             ocs               postgres          zookeeper
controller        dynamodb          kafka_prod        osquery           prometheus
xjtian (Contributor) commented Feb 27, 2019

Hi @krslynx, we set a roles path in ansible.cfg that should allow Ansible to pick up common roles like pkg_repo: https://github.com/facebookincubator/magma/blob/e309948fc53d273fd94289222efd7b856ca8a8a9/orc8r/cloud/ansible.cfg#L7

Unfortunately, I'm unable to repro on a fresh clone of the repository with the same Ansible version.

Is it possible that you have environment variables overriding this? Specifically, maybe an ANSIBLE_ROLES_PATH: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-roles-path
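
For example, something along these lines should surface and clear an override before re-running the provisioner (assuming a POSIX shell; vagrant provision re-runs Ansible against the already-created VM):

$ env | grep -i ANSIBLE        # look for ANSIBLE_ROLES_PATH or other overrides
$ unset ANSIBLE_ROLES_PATH     # clear it for the current shell session
$ vagrant provision cloud      # re-run the Ansible provisioner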

chrisburkegg (Author) commented

Good find; unsetting the environment variable fixed the issue for me. Thank you!

facebook-github-bot pushed a commit that referenced this issue Jul 15, 2020
Summary:
Pull Request resolved: facebookarchive/prometheus-configmanager#14

Rename the go.mod since the code is hosted at github.com/facebookincubator/prometheus-configmanager

Reviewed By: julianchr

Differential Revision: D22555374

fbshipit-source-id: 0d93630f15fa89161185387ccf6f2f53bf4b037f
facebook-github-bot pushed a commit that referenced this issue Jul 23, 2020
Summary:
Add CI tests from Symphony + add missing dependencies for fbcnms-ui

Pull Request resolved: magma/fbc-js-core#14

Test Plan: Tested on my fork

Reviewed By: dlvhdr

Differential Revision: D22691449

Pulled By: a8m

fbshipit-source-id: 6243932af4ce1a0d8ac8a768a8f3bc53683086c6
amarpad pushed a commit that referenced this issue Nov 28, 2020
* Add T3489 tests

Introduce a new test to validate T3489 expiry.

Credit to ulaskozat for the diff

Testing done:
Verified that an ASAN use-after-free occurs on timer expiry

==7031==ERROR: AddressSanitizer: heap-use-after-free on address 0x603000093460 at pc 0x555807545462 bp 0x7f87093fd2b0 sp 0x7f87093fd2a8
WRITE of size 8 at 0x603000093460 thread T16
    #0 0x555807545461 in nas_stop_T3489 /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_data_context.c:101
    #1 0x5558075c47c5 in esm_proc_esm_information_response /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:119
    #2 0x55580759339b in esm_recv_information_response /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_recv.c:575
    #3 0x555807551fba in _esm_sap_recv /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_sap.c:679
    #4 0x555807550f33 in esm_sap_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_sap.c:283
    #5 0x5558075195a0 in lowerlayer_data_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/LowerLayer.c:276
    #6 0x55580757848f in _emm_as_data_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_as.c:688
    #7 0x555807574ec4 in emm_as_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_as.c:180
    #8 0x55580753147f in emm_sap_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_sap.c:105
    #9 0x5558074d74fc in nas_proc_ul_transfer_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/nas_proc.c:326
    #10 0x5558071bd634 in handle_message /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:97
    #11 0x7f871bb277bd in zloop_start (/usr/lib/x86_64-linux-gnu/libczmq.so.4+0x287bd)
    #12 0x5558071bf169 in mme_app_thread /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:447
    #13 0x7f871e11f4a3 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x74a3)
    #14 0x7f871a494d0e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xe8d0e)

0x603000093460 is located 0 bytes inside of 32-byte region [0x603000093460,0x603000093480)
freed by thread T16 here:
    #0 0x7f871e602a10 in free (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc1a10)
    #1 0x5558070dc054 in free_wrapper /home/vagrant/magma/lte/gateway/c/oai/common/dynamic_memory_check.c:47
    #2 0x555807545496 in nas_stop_T3489 /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_data_context.c:103
    #3 0x5558075c517a in _esm_information /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:269
    #4 0x5558075c4e15 in _esm_information_t3489_handler /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:199
    #5 0x5558074e2e8a in mme_app_nas_timer_handle_signal_expiry /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/util/nas_timer.c:100
    #6 0x5558071be2d2 in handle_message /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:235
    #7 0x7f871bb277bd in zloop_start (/usr/lib/x86_64-linux-gnu/libczmq.so.4+0x287bd)

Signed-off-by: Amar Padmanabhan <amarpadmanabhan@fb.com>

* Invalidate the T3489 timer id while processing the ESM information retransmit

The _esm_information function stops the existing T3489 timer referenced by
the esm_ctxt data structure before rescheduling a new T3489 timer when it
requests the ESM information from a UE.
Stopping the timer has the side effect of freeing the UE-related
retransmission data associated with it. This causes issues during
T3489 timer expiry handling, as the cancelled timer and the rescheduled
one reuse the same retransmission data structure.

Fix this by unsetting the T3489 timer in the timer expiry handler, as the
esm_ctxt is no longer associated with any valid timer. Further, since the
timer is a one-shot timer, it will be cleaned up after the timer callback
has been processed.

Signed-off-by: Amar Padmanabhan <amarpadmanabhan@fb.com>
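
To make the described pattern concrete, here is a minimal, compilable sketch; the type names, fields, and handler below are hypothetical stand-ins, not the actual OAI definitions from esm_information.c / nas_timer.c:

#include <stdio.h>

#define NAS_TIMER_INACTIVE_ID (-1)

/* Hypothetical, simplified stand-ins for the OAI esm_ctxt/timer types. */
typedef struct {
  long id;      /* active timer id, or NAS_TIMER_INACTIVE_ID */
  void *args;   /* retransmission data reused across reschedules */
} nas_timer_t;

typedef struct {
  nas_timer_t T3489;
} esm_ctx_t;

/* Expiry handler after the fix: only invalidate the stored timer id.
 * Calling the generic stop routine here would free T3489.args, which the
 * retransmission path still reuses when it re-arms the timer (the
 * use-after-free shown in the ASAN trace above). The one-shot timer
 * itself is released by the framework after this callback returns. */
static void esm_information_t3489_expiry(esm_ctx_t *ctx) {
  ctx->T3489.id = NAS_TIMER_INACTIVE_ID;
  /* ... retransmit the ESM INFORMATION REQUEST and re-arm T3489 ... */
}

int main(void) {
  esm_ctx_t ctx = { .T3489 = { .id = 42, .args = NULL } };
  esm_information_t3489_expiry(&ctx);
  printf("T3489 timer id after expiry: %ld\n", ctx.T3489.id);
  return 0;
}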
themarwhal pushed a commit that referenced this issue Nov 30, 2020