Conversation

@RunsFor RunsFor commented May 22, 2019

There is a need to build librdkafka directly into the tntkafka.so library. That makes the module easier to distribute, since it eliminates the dynamic dependency on a specific librdkafka.so.

To build and link librdkafka into the tntkafka library statically, just add the STATIC_BUILD=ON option.

Tarantool supports static builds as well, so the two can be used together.
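The option might be wired up along these lines. This is a hedged sketch: only the STATIC_BUILD option name comes from the PR; the target and library names are assumptions.

```cmake
# Hypothetical sketch of the STATIC_BUILD option; target names are assumptions.
option(STATIC_BUILD "Link librdkafka into tntkafka statically" OFF)

if(STATIC_BUILD)
    # Build the bundled librdkafka and link it in statically.
    add_subdirectory(librdkafka)
    target_link_libraries(tntkafka PRIVATE rdkafka_static)
else()
    # Fall back to a system-provided shared librdkafka.
    find_library(RDKAFKA_LIBRARY rdkafka REQUIRED)
    target_link_libraries(tntkafka PRIVATE ${RDKAFKA_LIBRARY})
endif()
```

A build would then be invoked as `cmake . -DSTATIC_BUILD=ON && make`.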

@RunsFor RunsFor force-pushed the build-rdkafka-inplace branch from 39f9f5b to bdcc1a6 on May 22, 2019 07:59
@RunsFor RunsFor force-pushed the build-rdkafka-inplace branch from 0248f97 to cda420b on May 23, 2019 10:41
@RunsFor RunsFor changed the title from "Builtin rdkafka library with cmake option BUNDLE_RDKAFKA" to "Builtin rdkafka library with cmake option STATIC_BUILD" on May 24, 2019
@RepentantGopher RepentantGopher merged commit f407532 into master May 24, 2019
```diff
 endif()

-if(BUNDLE_RDKAFKA)
+if(STATIC_BUILD)
```

I want a single option that will be the same for all rocks and mean roughly "link all your stuff statically, and don't use anything external". Hence STATIC_BUILD


Nothing blocks us there: we can still add the separate per-dependency options plus the helper one. That is the right way, because packaging for distros usually requires linking against system libraries, yet some specific system libraries cause problems, e.g. they are simply missing in certain distros.

Hipster-style packaging should not break existing conventions. Otherwise it goes the systemd way: half of the world will hate us.

@knazarov knazarov May 27, 2019


I'm not saying we should remove BUNDLED_*; I'm just saying there should be a universal option that links all dependencies statically. It could be an alias for BUNDLED_* when there is only one dependency. But when there is more than one (which is true in the case of kafka), it should statically link everything.
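A minimal sketch of what such a universal switch might look like. The option and dependency names below are assumptions for illustration, not the project's actual options.

```cmake
# Hypothetical: STATIC_BUILD flips every per-dependency BUNDLED_* switch on.
option(STATIC_BUILD "Statically link all external dependencies" OFF)
option(BUNDLED_RDKAFKA "Use the bundled librdkafka" OFF)
option(BUNDLED_ZLIB "Use the bundled zlib" OFF)

if(STATIC_BUILD)
    set(BUNDLED_RDKAFKA ON)
    set(BUNDLED_ZLIB ON)
endif()
```

With a single dependency the helper is effectively an alias; with several it forces all of them to be bundled at once.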


As for systemd, there is only a small but vocal community of haters, who don't even have strong arguments beyond the fact that systemd is monolithic.


I'm not against it. Thanks for the clarification.

olegrok added a commit that referenced this pull request Oct 30, 2021
Before this patch it was assumed that no events could arrive between destroying
the event queues and closing the producer/consumer. However, that is not the
case, and the tests caught it:

```
Process 46864 stopped
* thread #4, name = 'coio', stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
    frame #0: 0x00007fff203fed86 libsystem_pthread.dylib`pthread_mutex_lock + 4
libsystem_pthread.dylib`pthread_mutex_lock:
->  0x7fff203fed86 <+4>:  cmpq   $0x4d55545a, (%rdi)       ; imm = 0x4D55545A
    0x7fff203fed8d <+11>: jne    0x7fff203fee02            ; <+128>
    0x7fff203fed8f <+13>: movl   0xc(%rdi), %eax
    0x7fff203fed92 <+16>: movl   %eax, %ecx
Target 0: (tarantool) stopped.
(lldb) bt
* thread #4, name = 'coio', stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
  * frame #0: 0x00007fff203fed86 libsystem_pthread.dylib`pthread_mutex_lock + 4
    frame #1: 0x0000000102fdbba5 tntkafka.dylib`queue_push(queue=0x6d726554203a5d70, value=0x0000000102c1e0c0) at queue.c:92:5 [opt]
    frame #2: 0x0000000102fd7f9a tntkafka.dylib`log_callback(rd_kafka=<unavailable>, level=7, fac="DESTROY", buf=<unavailable>) at callbacks.c:56:17 [opt]
    frame #3: 0x000000011e5456f3 librdkafka.1.dylib`rd_kafka_log0 + 563
    frame #4: 0x000000011e5469a0 librdkafka.1.dylib`rd_kafka_destroy_app + 832
    frame #5: 0x0000000102fdb3be tntkafka.dylib`wait_producer_destroy(args=<unavailable>) at producer.c:414:5 [opt]
    frame #6: 0x000000010015d81e tarantool`coio_on_call + 23
    frame #7: 0x00000001003019a7 tarantool`etp_proc + 395
    frame #8: 0x00007fff204038fc libsystem_pthread.dylib`_pthread_start + 224
    frame #9: 0x00007fff203ff443 libsystem_pthread.dylib`thread_start + 15
```

To fix this issue, let's reorder the code a bit and close the
producer/consumer before destroying the event queues. This patch
appears to solve the issue. I'd also like to highlight that not all
destructors should be placed after wait_producer_destroy/wait_consumer_destroy:
it seems the topics they own should be destroyed before that, otherwise
it leads to an assertion failure.

Some small code changes were also made to bring the code in line with
the Tarantool code style and to make consumer_destroy and producer_destroy
more consistent in the sense of their return values.

Closes #44
filonenko-mikhail pushed a commit that referenced this pull request Nov 9, 2021