
Fixed: ip6_fib_dump #1

Closed
wants to merge 1 commit into from

Conversation


@vdv18 vdv18 commented Mar 16, 2018

The fields is_unreach and is_prohibit are not set correctly in the ip6_fib_dump function.

@chrisy
Contributor

chrisy commented Mar 19, 2018

Thanks for your contribution! Note that we do not accept PRs on GitHub. Instead, please use https://gerrit.fd.io and see https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code for details.

@vdv18
Author

vdv18 commented Mar 21, 2018

Ok. Thank you.

@vdv18 vdv18 closed this Mar 21, 2018
@vdv18 vdv18 deleted the patch-1 branch March 21, 2018 09:23
artem-belov pushed a commit to artem-belov/vpp that referenced this pull request Apr 12, 2019
Adding an initial protobuf v3 template
artem-belov pushed a commit to artem-belov/vpp that referenced this pull request Apr 12, 2019
* Addressing TODOs FDio#1

Signed-off-by: Serguei Bezverkhi <sbezverk@cisco.com>

* Switch to use tools package

Signed-off-by: Serguei Bezverkhi <sbezverk@cisco.com>
fdio-github pushed a commit that referenced this pull request Jun 24, 2020
Prevent an overflow when the input network prefix is too small, and a crash on
packet #1 due to the vector not being allocated/initialized.

Type: fix
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I3494cc62ce889df48cc59cc9340b5dd70338c3a8
fdio-github pushed a commit that referenced this pull request Aug 19, 2020
Prevent an overflow when the input network prefix is too small, and a crash on
packet #1 due to the vector not being allocated/initialized.

Type: fix
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I3494cc62ce889df48cc59cc9340b5dd70338c3a8
(cherry picked from commit f3d7bd9)
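The two commits above are the same fix (the second is a cherry-pick). As a minimal illustrative sketch, not the actual patch, the usual VPP idiom for avoiding a crash on first use of a lazily sized vector is to validate the needed index before dereferencing it; the vector and function names below are hypothetical:

    #include <vppinfra/vec.h>

    static u32 *per_prefix_counters;   /* hypothetical lazily allocated vector */

    static inline void
    count_prefix (u32 prefix_index)
    {
      /* Grow the vector so prefix_index is a valid, zero-initialized slot.
         Without this, the very first packet would index an unallocated
         vector and crash. */
      vec_validate_init_empty (per_prefix_counters, prefix_index, 0);
      per_prefix_counters[prefix_index] += 1;
    }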
fdio-github pushed a commit that referenced this pull request Feb 23, 2021
Coverity complains about deref_ptr before a null check.

    	deref_ptr: Directly dereferencing pointer reg.
1214              vl_reg = vl_api_client_index_to_registration (reg->client_index);
1215              ALWAYS_ASSERT (vl_reg != NULL);
1216

CID 216104 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking reg suggests that it may be null, but it
 has already been dereferenced on all paths leading to the check.
1217              if (reg && vl_api_can_send_msg (vl_reg))

I believe the check should be on vl_reg instead of reg, because vl_reg may be NULL
after the call to vl_api_client_index_to_registration.

Type: fix

Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: Ic4eb2284e65c48396f20d5024a4241c80c70c886
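A sketch of the corrected pattern from the Coverity report above (a fragment for illustration, not a verbatim copy of the VPP source):

    vl_api_registration_t *vl_reg =
      vl_api_client_index_to_registration (reg->client_index);
    /* reg was already dereferenced to obtain the client index, so the
       pointer that can legitimately be NULL here is vl_reg, and that is
       the one to check before sending. */
    if (vl_reg && vl_api_can_send_msg (vl_reg))
      {
        /* ... build and send the reply ... */
      }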
fdio-github pushed a commit that referenced this pull request Feb 26, 2021
Fixes Coverity issue CID 218445 (#1 of 1): Logically dead code
(DEADCODE) dead_error_line: Execution cannot reach this statement:
return 4294967295U;.

Type: fix

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ibf8ee0458320d20c3adca2efa2a4bfad7c190dbe
fdio-github pushed a commit that referenced this pull request Mar 19, 2021
This issue happens if:
- the API client connects via Unix socket
- the client issues the *_dump API call and immediately disconnects

What happens after that is that the API handler keeps sending the *_details
messages; however, at some point the write fails and the socket is
deleted.

A subsequent use of the registration pointer results in interpreting
the socket as a shared-memory one. This results in a crash, because
the data in this structure then does not make sense, as shown below:

|
|Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
|__GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:67
|67      ../nptl/pthread_mutex_lock.c: No such file or directory.
|(gdb) bt
|#0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:67
|#1  0x00007ffff500f957 in svm_queue_lock (q=0x0) at /home/ubuntu/vpp/src/svm/queue.c:101
|#2  svm_queue_add (q=0x0, elem=0x7fffa76c2de0 "\210\365\006\060\001", nowait=0) at /home/ubuntu/vpp/src/svm/queue.c:274
|#3  0x00007ffff6e131e3 in vl_api_send_msg (rp=<optimized out>, elem=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/api.h:43
|#4  send_sw_interface_details (am=<optimized out>, rp=<optimized out>, swif=0x7fffb957a0bc, interface_name=<optimized out>, context=<optimized out>)
|    at /home/ubuntu/vpp/src/vnet/interface_api.c:353
|#5  0x00007ffff6e0edeb in vl_api_sw_interface_dump_t_handler (mp=<optimized out>) at /home/ubuntu/vpp/src/vnet/interface_api.c:412
|#6  0x00007ffff7daeb48 in msg_handler_internal (am=<optimized out>, the_msg=0x7fffb839a5e0, trace_it=<optimized out>, do_it=1, free_it=0)
|    at /home/ubuntu/vpp/src/vlibapi/api_shared.c:501
|#7  vl_msg_api_socket_handler (the_msg=0x7fffb839a5e0) at /home/ubuntu/vpp/src/vlibapi/api_shared.c:790
|#8  0x00007ffff7d7c608 in vl_socket_process_api_msg (rp=<optimized out>, input_v=0x7fffa76c2de0 "\210\365\006\060\001") at /home/ubuntu/vpp/src/vlibmemory/socket_api.c:212
|#9  0x00007ffff7d89ff1 in vl_api_clnt_process (vm=<optimized out>, node=<optimized out>, f=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/vlib_api.c:405
|#10 0x00007ffff53bf9a7 in vlib_process_bootstrap (_a=<optimized out>) at /home/ubuntu/vpp/src/vlib/main.c:1490
|#11 0x00007ffff4da0b2c in clib_calljmp () from /home/ayourtch/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.21.06
|#12 0x00007fffa99a4d90 in ?? ()
|#13 0x00007ffff53b6cb2 in vlib_process_startup (vm=0x7ffff56a9880 <vlib_global_main>, p=0x7fffb5d41380, f=0x0) at /home/ubuntu/vpp/src/vlib/main.c:1515
|#14 dispatch_process (vm=0x7ffff56a9880 <vlib_global_main>, p=0x7fffb5d41380, f=0x0, last_time_stamp=<optimized out>) at /home/ubuntu/vpp/src/vlib/main.c:1571
|#15 0x0000000000000000 in ?? ()
|(gdb) frame 3
|#3  0x00007ffff6e131e3 in vl_api_send_msg (rp=<optimized out>, elem=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/api.h:43
|43            vl_msg_api_send_shmem (rp->vl_input_queue, (u8 *) & elem);
|(gdb) l
|38          {
|39            vl_socket_api_send (rp, elem);
|40          }
|41        else
|42          {
|43            vl_msg_api_send_shmem (rp->vl_input_queue, (u8 *) & elem);
|44          }
|45      }
|46
|47      always_inline int
|(gdb)
|

The approach in this change is to avoid the closing operations "here and
now", and instead mark the registration as a zombie and place
a forced RPC towards a callback that does the actual cleanup work.

The forced RPC is handled via the API processing loop with barrier sync,
so we are guaranteed not to have any API processing in flight.

Type: fix
Change-Id: I1972d42da620bdb4fd773c83262863c2781d9005
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
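A rough sketch of the "mark as zombie, clean up via RPC" pattern described above; the type and helper names are hypothetical and do not match the actual VPP symbols:

    typedef struct
    {
      int is_zombie;   /* set when a write to the client socket fails */
      /* ... rest of the registration state ... */
    } registration_t;

    static void
    registration_cleanup_rpc (void *arg)
    {
      registration_t *reg = arg;
      /* Runs from the main API processing loop under barrier sync, so no
         other API handler can be using 'reg' while it is freed here. */
      free_registration (reg);   /* hypothetical helper */
    }

    static void
    on_socket_write_error (registration_t *reg)
    {
      /* Do not free the registration "here and now": the dump handler that
         triggered the failed write may still hold a pointer to it.  Mark
         it and defer the real cleanup to an RPC run by the main loop. */
      reg->is_zombie = 1;
      schedule_forced_rpc (registration_cleanup_rpc, reg);   /* hypothetical */
    }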
fdio-github pushed a commit that referenced this pull request Jun 24, 2021
==283032==AddressSanitizer CHECK failed: compiler-rt/lib/asan/asan_mapping.h:366
          "((AddrIsInMem(p))) != (0)" (0x0, 0x0)
    #0 0x49c128 in __asan::AsanCheckFailed
    #1 0x4ae8dc in __sanitizer::CheckFailed
    #2 0x495dec in __asan::ShadowSegmentEndpoint::ShadowSegmentEndpoint
    #3 0x495e48 in __asan_unpoison_memory_region
    #4 0xfffff4e851f8 in svm_map_region /home/vpp/src/svm/svm.c:611:7
    #5 0xfffff4e86d9c in svm_region_init_internal /home/vpp/src/svm/svm.c:797:8
    #6 0xfffff4e87ce4 in svm_region_init_args /home/vpp/src/svm/svm.c:880:3
    #7 0xfffff7f30d30 in vlibmemory_init /home/vpp/src/vlibmemory/memory_api.c:974:3
    #8 0xfffff4fd5368 in vlib_main /home/vpp/src/vlib/main.c:1986:16

svm_global_region_base_va 0x200000000000 is not in the aarch64 mapping range,
leading to the check failure above, so VPP cannot start.

aarch64 asan mapping
|| `[0x201000000000, 0xffffffffffff]` || HighMem    ||
|| `[0x041200000000, 0x200fffffffff]` || HighShadow ||
|| `[0x001200000000, 0x0411ffffffff]` || ShadowGap  ||
|| `[0x001000000000, 0x0011ffffffff]` || LowShadow  ||
|| `[0x000000000000, 0x000fffffffff]` || LowMem     ||

x86 asan mapping
|| `[0x10007fff8000, 0x7fffffffffff]` || HighMem    ||
|| `[0x02008fff7000, 0x10007fff7fff]` || HighShadow ||
|| `[0x00008fff7000, 0x02008fff6fff]` || ShadowGap  ||
|| `[0x00007fff8000, 0x00008fff6fff]` || LowShadow  ||
|| `[0x000000000000, 0x00007fff7fff]` || LowMem     ||

Type: fix

Signed-off-by: Tianyu Li <tianyu.li@arm.com>
Change-Id: I55ddbdcd361d66d4cfaf6459b2fa20fd8b64af37
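A minimal sketch of the idea, assuming a per-architecture default is how the base VA is kept inside the ASan application ranges shown above; the macro name and chosen value are illustrative only, not the actual patch:

    /* Under ASan, the SVM base virtual address must fall inside a range the
       sanitizer treats as application memory (LowMem/HighMem).  On aarch64
       the x86 default 0x200000000000 lands in HighShadow, so a different
       default is needed there. */
    #if defined(__aarch64__)
    #define SVM_BASEVA_UNDER_ASAN 0x201000000000ULL  /* start of aarch64 HighMem */
    #else
    #define SVM_BASEVA_UNDER_ASAN 0x200000000000ULL
    #endif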
fjccc referenced this pull request Feb 6, 2024
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I3d44ff851da00573343e15712284af3b9c3912e3
fdio-github pushed a commit that referenced this pull request Feb 19, 2024
For the OpenSSL crypto engine based cipher encrypt/decrypt and HMAC IPsec
use cases, the OpenSSL API calls that do ctx init and key expansion are
moved to the initialization stage.

In the current implementation, the ctx is initialized with "key" and "iv" in
a single call to EVP_EncryptInit_ex (ctx, 0, 0, key->data, op->iv)
in the data plane, while the ctx can be initialized with 'key' and 'iv' separately,
which means there could be two API calls:
 1. EVP_EncryptInit_ex (ctx, 0, 0, key->data, 0)
 2. EVP_EncryptInit_ex (ctx, 0, 0, 0, op->iv)

As the 'key' for a given IPsec SA is fixed and known, call #1 can
be placed in the IPsec SA initialization stage, while call #2 should be kept
in the data plane for each packet, as the "iv" is random for each packet.

Type: feature
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Change-Id: Ided4462c1d4a38addc3078b03d618209e040a07a
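To illustrate the split described above, here is a minimal sketch of the encrypt side using the standard OpenSSL EVP API; the surrounding SA/packet plumbing is hypothetical and this is not the VPP crypto engine code itself:

    #include <openssl/evp.h>

    /* At IPsec SA creation time: the key is fixed and known, so do the
       ctx init and key expansion once (call #1, IV not yet known). */
    EVP_CIPHER_CTX *
    sa_ctx_init (const unsigned char *key)
    {
      EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new ();
      EVP_EncryptInit_ex (ctx, EVP_aes_128_cbc (), NULL, key, NULL);
      return ctx;
    }

    /* Per packet in the data plane: only load the random IV (call #2),
       then encrypt.  'out' must have room for in_len plus one block. */
    int
    encrypt_packet (EVP_CIPHER_CTX *ctx, const unsigned char *iv,
                    const unsigned char *in, int in_len, unsigned char *out)
    {
      int len = 0, total = 0;
      if (!EVP_EncryptInit_ex (ctx, NULL, NULL, NULL, iv))
        return -1;
      if (!EVP_EncryptUpdate (ctx, out, &len, in, in_len))
        return -1;
      total = len;
      if (!EVP_EncryptFinal_ex (ctx, out + total, &len))
        return -1;
      return total + len;
    }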