
(gateway): the role 'gateway_dev' was not found in magma/lte/gateway/deploy #13

Closed
chrisburkegg opened this issue Feb 27, 2019 · 2 comments


chrisburkegg commented Feb 27, 2019

Environment (let me know if anything else is needed):

OS X 10.14.2
$ vagrant version
Installed Version: 2.2.3
Latest Version: 2.2.3
$ ansible --version
ansible 2.7.8

To reproduce:

  1. clone the repository
  2. follow instructions; in terminal window 1:
  3. cd lte/gateway
  4. vagrant up

Output:

[... truncated vagrant output ...]
==> magma: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.7.8).

Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode

    magma: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/Users/christopherburke/magma/lte/gateway/.vagrant/machines/magma/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --extra-vars=ansible_user\=\'vagrant\' --limit="magma" --inventory-file=deploy/hosts -v --timeout=30 deploy/magma_dev.yml
Using /Users/christopherburke/magma/lte/gateway/ansible.cfg as config file
/Users/christopherburke/magma/lte/gateway/deploy/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/Users/christopherburke/magma/lte/gateway/deploy/hosts did not meet script requirements, check plugin documentation if this is unexpected
ERROR! the role 'gateway_dev' was not found in /Users/christopherburke/magma/lte/gateway/deploy/roles:/Users/christopherburke/roles:/Users/christopherburke/magma/lte/gateway/deploy

The error appears to have been in '/Users/christopherburke/magma/lte/gateway/deploy/magma_dev.yml': line 12, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  roles:
    - role: gateway_dev
      ^ here

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Note:

$ ls deploy/roles
dev_common magma      magma_oai  magma_test trfserver
@chrisburkegg changed the title from "(gateway): the role 'gateway_dev' was not found inmagma/lte/gateway/deploy" to "(gateway): the role 'gateway_dev' was not found in magma/lte/gateway/deploy" on Feb 27, 2019
@rpraveen (Contributor) commented

Do you have ANSIBLE_ROLES_PATH environment variable set by any chance?

magma/lte/gateway/ansible.cfg adds magma/orc8r/tools/ansible/roles as a roles directory, but it looks like that's being overridden.
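
If it is set, a quick check along these lines should confirm and clear it (a sketch, assuming a POSIX shell; environment variables take precedence over ansible.cfg settings, and unset only affects the current shell session):

$ echo $ANSIBLE_ROLES_PATH   # any value here overrides roles_path from ansible.cfg
$ unset ANSIBLE_ROLES_PATH   # clear it for this shell session
$ vagrant provision          # re-run the Ansible provisioner on the existing VM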

@chrisburkegg (Author) commented

Good find, unsetting the environment variable fixed the issue for me. Thank you!

facebook-github-bot pushed a commit that referenced this issue Jul 23, 2020
Summary: Pull Request resolved: magma/fbc-js-core#13

Test Plan: Tested on my fork

Reviewed By: dlvhdr

Differential Revision: D22690869

Pulled By: a8m

fbshipit-source-id: 95b9baec24e72697e18ffd56f93b1662bae3f749
amarpad pushed a commit that referenced this issue Nov 28, 2020
* Add T3489 tests

Introduce a new test to validate T3489 expiry.

Credit to ulaskozat for the diff

Testing done:
Verified that an ASAN use-after-free occurs on timer expiry

==7031==ERROR: AddressSanitizer: heap-use-after-free on address 0x603000093460 at pc 0x555807545462 bp 0x7f87093fd2b0 sp 0x7f87093fd2a8
WRITE of size 8 at 0x603000093460 thread T16
    #0 0x555807545461 in nas_stop_T3489 /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_data_context.c:101
    #1 0x5558075c47c5 in esm_proc_esm_information_response /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:119
    #2 0x55580759339b in esm_recv_information_response /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_recv.c:575
    #3 0x555807551fba in _esm_sap_recv /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_sap.c:679
    #4 0x555807550f33 in esm_sap_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/sap/esm_sap.c:283
    #5 0x5558075195a0 in lowerlayer_data_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/LowerLayer.c:276
    #6 0x55580757848f in _emm_as_data_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_as.c:688
    #7 0x555807574ec4 in emm_as_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_as.c:180
    #8 0x55580753147f in emm_sap_send /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/sap/emm_sap.c:105
    #9 0x5558074d74fc in nas_proc_ul_transfer_ind /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/nas_proc.c:326
    #10 0x5558071bd634 in handle_message /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:97
    #11 0x7f871bb277bd in zloop_start (/usr/lib/x86_64-linux-gnu/libczmq.so.4+0x287bd)
    #12 0x5558071bf169 in mme_app_thread /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:447
    #13 0x7f871e11f4a3 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x74a3)
    #14 0x7f871a494d0e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xe8d0e)

0x603000093460 is located 0 bytes inside of 32-byte region [0x603000093460,0x603000093480)
freed by thread T16 here:
    #0 0x7f871e602a10 in free (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc1a10)
    #1 0x5558070dc054 in free_wrapper /home/vagrant/magma/lte/gateway/c/oai/common/dynamic_memory_check.c:47
    #2 0x555807545496 in nas_stop_T3489 /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_data_context.c:103
    #3 0x5558075c517a in _esm_information /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:269
    #4 0x5558075c4e15 in _esm_information_t3489_handler /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/esm/esm_information.c:199
    #5 0x5558074e2e8a in mme_app_nas_timer_handle_signal_expiry /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/util/nas_timer.c:100
    #6 0x5558071be2d2 in handle_message /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:235
    #7 0x7f871bb277bd in zloop_start (/usr/lib/x86_64-linux-gnu/libczmq.so.4+0x287bd)

Signed-off-by: Amar Padmanabhan <amarpadmanabhan@fb.com>

* Invalidate the T3489 timer id while processing ESM information retransmit

The _esm_information function stops the existing T3489 timer referenced by
the esm_ctxt data structure before rescheduling a new T3489 timer when it
requests the ESM info from a UE.
Stopping the timer has the side effect of freeing the UE-related
retransmission data associated with it. This causes problems during
T3489 timer expiry handling, because the cancelled timer and the rescheduled
one reuse the same retransmission data structure.

Fix this by unsetting the T3489 timer id when handling the timer expiry,
as the esm_ctxt is no longer associated with any valid timer. Furthermore,
since the timer is a one-shot timer, it is cleaned up after the timer
callback has been processed.

Signed-off-by: Amar Padmanabhan <amarpadmanabhan@fb.com>
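
To make the fix concrete, here is a minimal C sketch of the approach the commit message describes. It is not the actual magma diff; all names in it (esm_ctx_t, nas_timer_t, NAS_TIMER_INACTIVE_ID, and the handler signature) are assumptions modeled on the commit message and the stack trace above:

#define NAS_TIMER_INACTIVE_ID (-1)  /* assumed sentinel meaning "no active timer" */

typedef struct {
  long id;     /* handle of the currently armed timer, if any */
  void *args;  /* retransmission data owned by the timer machinery */
} nas_timer_t;

typedef struct {
  nas_timer_t T3489;
} esm_ctx_t;

/* Expiry handler for the one-shot T3489 timer (hypothetical signature). */
static void esm_information_t3489_expiry(esm_ctx_t *esm_ctx)
{
  /* The one-shot timer is torn down after this callback returns, so the id
   * stored in the context no longer refers to a live timer. Clearing it here
   * prevents the retransmit path's nas_stop_T3489() from freeing the
   * retransmission data that the rescheduled timer still uses (the
   * heap-use-after-free ASAN reported above). */
  esm_ctx->T3489.id = NAS_TIMER_INACTIVE_ID;

  /* ... re-send the ESM information request and re-arm T3489 ... */
}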
themarwhal pushed a commit that referenced this issue Nov 30, 2020