
Problem: stream_engine is vulnerable to downgrade attacks #1190

Closed
hintjens opened this issue Sep 20, 2014 · 16 comments
@hintjens
Member

When accepting a connection as client or server, the engine takes the mechanism from the peer and implements that, without checking that it matches the mechanism set on the socket.

Solution: the peer's mechanism must match the options.mechanism, or else the connection must be closed.

Note that this affects ZeroMQ v4.0.4 stable. Fixed in libzmq master, backporting for v4.0.5.

@hintjens changed the title from "Problem: stream_engine is vulnerably to downgrade attacks" to "Problem: stream_engine is vulnerable to downgrade attacks" on Sep 20, 2014
@k0da

k0da commented Jan 27, 2015

This change makes test_security_curve get stuck on PowerPC in version 4.0.5.

d73b2408808375af2a6a2ac8e211db429aac71f2 is the first bad commit
commit d73b2408808375af2a6a2ac8e211db429aac71f2
Author: Pieter Hintjens <ph@imatix.com>
Date:   Fri Sep 19 19:24:45 2014 +0200

    Merged patch for #1190

:040000 040000 3820e8c9bf16423abce419e42469f7b26fcf670e 59972c00867e37635e32bc51eda425c84d26987d M  src
:040000 040000 d6f896c25cd08cf59d067015888102b3ab5ee3f0 be0ed20cd2b6a8aa653925a49a6aedd04a1f116c M  tests
bisect run success.

@hintjens
Member Author

Can we figure out where it's stuck?

@k0da

k0da commented Jan 27, 2015

This is what I see so far after attaching to the stuck process:

#0 0x00003fff7d522568 in lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00003fff7d51b774 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2 0x00003fff7d58d600 in lock (this=0x1000d1e5a58) at mutex.hpp:98
#3 add (increment_=1, this=0x1000d1e5a50) at atomic_counter.hpp:111
#4 zmq::own_t::inc_seqnum (this=0x1000d1e5820) at own.cpp:58
#5 0x00003fff7d58b174 in zmq::object_t::send_own (this=0x1000d1e5820, destination_=<optimized out>, object_=<optimized out>) at object.cpp:198
#6 0x00003fff7d58d744 in zmq::own_t::launch_child (this=0x1000d1e5820, object_=0x1000d1e6c20) at own.cpp:79
#7 0x00003fff7d59c450 in zmq::socket_base_t::add_endpoint (this=<optimized out>, addr_=0x100036d8 "tcp://localhost:9998", endpoint_=<optimized out>, pipe_=<optimized out>) at socket_base.cpp:623
#8 0x00003fff7d59eab0 in zmq::socket_base_t::connect (this=0x1000d1e5820, addr_=0x100036d8 "tcp://localhost:9998") at socket_base.cpp:616
#9 0x00003fff7d5b769c in zmq_connect (s_=0x1000d1e5820, addr_=0x100036d8 "tcp://localhost:9998") at zmq.cpp:320
#10 0x00000000100011bc in main () at test_security_curve.cpp:130

@rodgert
Contributor

rodgert commented Jan 27, 2015

(gdb) thread apply all bt

That gives a backtrace from all threads; who else is sitting in __lll_lock_wait()?


@k0da

k0da commented Jan 27, 2015

Thread 4 (Thread 0x3fff7d04f180 (LWP 151843)):
#0 0x00003fff7d4562ec in epoll_wait () from /lib64/libc.so.6
#1 0x00003fff7d57d990 in zmq::epoll_t::loop (this=0x1000d1e2710) at epoll.cpp:145
#2 0x00003fff7d5ad0d0 in thread_routine (arg_=0x1000d1e27b0) at thread.cpp:81
#3 0x00003fff7d518cd4 in start_thread () from /lib64/libpthread.so.0
#4 0x00003fff7d455b00 in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x3fff7c84f180 (LWP 151844)):
#0 0x00003fff7d4562ec in epoll_wait () from /lib64/libc.so.6
#1 0x00003fff7d57d990 in zmq::epoll_t::loop (this=0x1000d1e2c10) at epoll.cpp:145
#2 0x00003fff7d5ad0d0 in thread_routine (arg_=0x1000d1e2cb0) at thread.cpp:81
#3 0x00003fff7d518cd4 in start_thread () from /lib64/libpthread.so.0
#4 0x00003fff7d455b00 in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x3fff7c04f180 (LWP 151845)):
#0 0x00003fff7d449c78 in poll () from /lib64/libc.so.6
#1 0x00003fff7d59adac in poll (timeout=<optimized out>, nfds=1, fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2 zmq::signaler_t::wait (this=0x1000d1e3240, timeout_=<optimized out>) at signaler.cpp:222
#3 0x00003fff7d582df0 in zmq::mailbox_t::recv (this=0x1000d1e3190, cmd_=0x3fff7c04e2f8, timeout_=<optimized out>) at mailbox.cpp:72
#4 0x00003fff7d59c8a4 in zmq::socket_base_t::process_commands (this=0x1000d1e2e60, timeout_=<optimized out>, throttle_=false) at socket_base.cpp:884
#5 0x00003fff7d59d20c in zmq::socket_base_t::recv (this=0x1000d1e2e60, msg_=0x3fff7c04e428, flags_=<optimized out>) at socket_base.cpp:818
#6 0x00003fff7d5b8160 in s_recvmsg (s_=<optimized out>, msg_=<optimized out>, flags_=<optimized out>) at zmq.cpp:460
#7 0x00003fff7d5b8234 in zmq_recv (s_=0x1000d1e2e60, buf_=0x3fff7c04e4a8, len_=255, flags_=<optimized out>) at zmq.cpp:484
#8 0x0000000010002780 in s_recv (socket=<optimized out>) at testutil.hpp:159
#9 0x0000000010002a2c in zap_handler (handler=0x1000d1e2e60) at test_security_curve.cpp:36
#10 0x00003fff7d5ad0d0 in thread_routine (arg_=0x1000d1e3610) at thread.cpp:81
#11 0x00003fff7d518cd4 in start_thread () from /lib64/libpthread.so.0
#12 0x00003fff7d455b00 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x3fff7d63dab0 (LWP 151829)):
#0 0x00003fff7d522568 in lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00003fff7d51b774 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2 0x00003fff7d58d600 in lock (this=0x1000d1e5a58) at mutex.hpp:98
#3 add (increment_=1, this=0x1000d1e5a50) at atomic_counter.hpp:111
#4 zmq::own_t::inc_seqnum (this=0x1000d1e5820) at own.cpp:58
#5 0x00003fff7d58b174 in zmq::object_t::send_own (this=0x1000d1e5820, destination_=<optimized out>, object_=<optimized out>) at object.cpp:198
#6 0x00003fff7d58d744 in zmq::own_t::launch_child (this=0x1000d1e5820, object_=0x1000d1e6c20) at own.cpp:79
#7 0x00003fff7d59c450 in zmq::socket_base_t::add_endpoint (this=<optimized out>, addr_=0x100036d8 "tcp://localhost:9998", endpoint_=<optimized out>, pipe_=<optimized out>) at socket_base.cpp:623
#8 0x00003fff7d59eab0 in zmq::socket_base_t::connect (this=0x1000d1e5820, addr_=0x100036d8 "tcp://localhost:9998") at socket_base.cpp:616
#9 0x00003fff7d5b769c in zmq_connect (s_=0x1000d1e5820, addr_=0x100036d8 "tcp://localhost:9998") at zmq.cpp:320
#10 0x00000000100011bc in main () at test_security_curve.cpp:130

@rodgert
Contributor

rodgert commented Jan 27, 2015

Grim... This is PowerPC? Which OS?


@k0da

k0da commented Jan 27, 2015

Yeah, this is PowerPC. The OS is Linux (openSUSE). It happens on ppc32, ppc64 (big endian), and ppc64le (little endian).

@rodgert
Contributor

rodgert commented Jan 27, 2015

There is no architecture-specific implementation of atomic_counter_t for PPC, so it falls back to using mutex_t to guard updates to the counter. It is not obvious to me how, in the absence of some issue with pthread_mutex itself, this could result in being blocked indefinitely while trying to acquire the mutex.


@rodgert
Contributor

rodgert commented Jan 28, 2015

I don't have any way to test this for PowerPC on Linux, but it should be possible to use the GCC intrinsic __atomic_add_fetch() instead of a mutex here, assuming GCC 4.7 or later (prior to that, GCC 4.1 - 4.6 defines __sync_add_and_fetch).


@k0da

k0da commented Jan 28, 2015

We have community systems, I could provide you with an access to one.

@rodgert
Contributor

rodgert commented Jan 28, 2015

Let me get a version using GCC intrinsics together locally, and then I can
test on one of your community systems.


@rodgert
Contributor

rodgert commented Jan 28, 2015

I pushed rodgert@e6c45f5 which detects the presence of __atomic_Xxx intrinsics (should work for GCC and Clang) and uses those when available.

@rodgert
Contributor

rodgert commented Jan 28, 2015

I have also confirmed that this change is equivalent on x86 to the inline assembly currently in libzmq's atomics -

http://goo.gl/Po62DU

We also have a private instance of http://gcc.godbolt.org/ which now has a GCC cross-compiler targeting Power, and I confirmed the generated assembly looks reasonably correct (to the extent I am able to grok the consequences of Power's weak memory-ordering model, anyway).

Let me know when you'd like me to test this on your community hardware.

@k0da

k0da commented Jan 28, 2015

I checked your patch; it doesn't fix the problem.
Michel fixed it by grabbing a few commits from HEAD:

Avoid curve test hang on ppc64 architecture
At least there is no problem anymore with these commits:
Merge pull request #101 from hintjens/master
Problem: issue #1273, protocol downgrade attack
Merge pull request #100 from hintjens/master
Problem: zmq_ctx_term has insane behavior by default

https://build.opensuse.org/package/view_file/devel:libraries:c_c++/zeromq/zeromq_4.0.5_avoid_curve_test_hang_on_ppc64.patch?expand=1

@rodgert
Contributor

rodgert commented Jan 28, 2015

I think it's still worth pursuing; it moves the job of implementing atomic operations for various CPUs to the compiler (these intrinsics are the basis for C11 and C++11 atomics in GCC and Clang), and allows proper support for atomic CAS, exchange, increment, and decrement on Power & PowerPC architectures, where currently libzmq downgrades to a mutex.


@k0da

k0da commented Jan 28, 2015

I agree, I'll keep debugging.
