
Container restarting exit code: 132 #174

Open · ganey opened this issue Sep 10, 2024 · 22 comments · May be fixed by #180

ganey commented Sep 10, 2024

Hi,

We're running this as a Kubernetes DaemonSet (we've also tried sidecar/extraContainers) and the containers constantly restart:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: amqproxy
  labels:
    name: amqproxy
    instance: amqproxy
spec:
  selector:
    matchLabels:
      name: amqproxy
  template:
    metadata:
      labels:
        name: amqproxy
    spec:
      containers:
        - name: amqproxy
          image: cloudamqp/amqproxy:v2.0.2
          env:
            - name: AMQP_URL
              value: {{ .Values.amqproxy.upstreamUrl }}
          ports:
            - name: amqproxy
              containerPort: 5673
              protocol: TCP
              hostPort: 5673

The logs aren't great, even with debug turned on:

2024-09-10T14:42:23.082393Z  INFO amq_proxy.server Proxy upstream: someurl.rmq.cloudamqp.com:5671 TLS
2024-09-10T14:42:23.082475Z  INFO amq_proxy.server Proxy listening on 0.0.0.0:5673
2024-09-10T14:42:23.240688Z  INFO amq_proxy.channel_pool[remote_address: "10.42.23.249:57192"] Adding upstream connection

The container then dies every time with exit code 132:

terminated
Reason: Error - exit code: 132
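
Exit code 132 is 128 + 4, i.e. the process was killed by signal 4 (SIGILL, illegal instruction). A quick way to confirm that from Kubernetes (a sketch, assuming the pods carry the name=amqproxy label from the manifest above; the pod name is a placeholder):

kubectl get pods -l name=amqproxy
kubectl describe pod <pod-name> | grep -A4 'Last State'
kill -l 4   # prints ILL, the signal behind exit code 128+4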

Any suggestions?

ganey (Author) commented Sep 10, 2024

Got a bit further on this: I can get the container to stay stable by using a non-TLS port/scheme, amqp://somehost:5672.

This wasn't clear from any errors in v2; downgrading to v1 gives an SSL handshake error, which is what made me try a non-TLS connection.
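
For reference, the two configurations differ only in the URL scheme (a minimal sketch; credentials and hosts are placeholders, assuming the binary honours the AMQP_URL env var the same way the container image above does):

# stays up
AMQP_URL="amqp://user:pass@somehost:5672" /usr/bin/amqproxy --listen=0.0.0.0 --debug
# dies with exit code 132 once the first client connects
AMQP_URL="amqps://user:pass@somehost:5671" /usr/bin/amqproxy --listen=0.0.0.0 --debug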

spuun (Member) commented Sep 19, 2024

It looks like it stops when the first client connects; is that correct? Does it stay alive if you don't start any clients?

ganey (Author) commented Sep 19, 2024

Yes, correct.

I'm connecting with AMQPLib for PHP, in case that makes any difference.

spuun (Member) commented Sep 20, 2024

OK. Can you see in the upstream's logs whether amqproxy connects?

ganey (Author) commented Sep 25, 2024

Hey @spuun, sorry for the delay getting back to you. Yes, I can see the connection in the CloudAMQP logs, and I can connect and publish fine now 🤷

If the issue comes back, I'll check the upstream logs and re-open this issue.

ganey closed this as completed Sep 25, 2024

ganey (Author) commented Sep 30, 2024

So, testing this again in production, I'm seeing the issue again.

The upstream logs just say the client closed the connection.

The only difference between working and broken is changing the AMQP_URL env var from amqp://user:pass@x.rmq.cloudamqp.com/user to amqps://user:pass@x.rmq.cloudamqp.com/user:

2024-09-30 14:33:06.300421+00:00 [info] <0.20379.2027> accepting AMQP connection <0.20379.2027> (x.x.42:24556 -> 10.57.50.94:5671)
2024-09-30 14:33:06.314933+00:00 [info] <0.20379.2027> connection <0.20379.2027> (x.x.7.42:24556 -> 10.57.50.94:5671) has a client-provided name: AMQProxy 2.0.2
2024-09-30 14:33:06.330875+00:00 [info] <0.20379.2027> connection <0.20379.2027> (x.x.7.42:24556 -> 10.57.50.94:5671 - AMQProxy 2.0.2): user 'shtsaims' authenticated and granted access to vhost 'shtsaims'
2024-09-30 14:33:06.516108+00:00 [warning] <0.20379.2027> closing AMQP connection <0.20379.2027> (x.x.7.42:24556 -> 10.57.50.94:5671 - AMQProxy 2.0.2, vhost: 'shtsaims', user: 'shtsaims'):
2024-09-30 14:33:06.516108+00:00 [warning] <0.20379.2027> client unexpectedly closed TCP connection

v2 debug logs:

2024-09-30T14:53:37.373170Z  INFO amq_proxy.server Proxy upstream: x.rmq.cloudamqp.com:5671 TLS
2024-09-30T14:53:37.373272Z  INFO amq_proxy.server Proxy listening on 0.0.0.0:5673
2024-09-30T14:53:43.906655Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:40558) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:53:43.908596Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:40574) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:53:53.906286Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:49344) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:53:53.906539Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:49352) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:53:55.673728Z DEBUG amq_proxy.client[remote_address: "127.0.0.1:58580"] Connected
2024-09-30T14:53:55.757596Z  INFO amq_proxy.channel_pool[remote_address: "127.0.0.1:58580"] Adding upstream connection

Here are the upstream logs on v1.0.0:

2024-09-30 14:41:59.457589+00:00 [info] <0.28832.7616> accepting AMQP connection <0.28832.7616> (x.x.7.41:63689 -> 10.57.86.239:5671)
2024-09-30 14:41:59.470306+00:00 [info] <0.28832.7616> connection <0.28832.7616> (x.x.7.41:63689 -> 10.57.86.239:5671) has a client-provided name: AMQProxy 1.0.0
2024-09-30 14:41:59.470306+00:00 [info] <0.28832.7616> 
2024-09-30 14:41:59.482496+00:00 [info] <0.28832.7616> connection <0.28832.7616> (x.x.7.41:63689 -> 10.57.86.239:5671 - AMQProxy 1.0.0
2024-09-30 14:41:59.482496+00:00 [info] <0.28832.7616> ): user 'shtsaims' authenticated and granted access to vhost 'shtsaims'
2024-09-30 14:41:59.687900+00:00 [warning] <0.28832.7616> closing AMQP connection <0.28832.7616> (x.x.7.41:63689 -> 10.57.86.239:5671 - AMQProxy 1.0.0
2024-09-30 14:41:59.687900+00:00 [warning] <0.28832.7616> , vhost: 'shtsaims', user: 'shtsaims'):
2024-09-30 14:41:59.687900+00:00 [warning] <0.28832.7616> client unexpectedly closed TCP connection

v1 debug logs:

2024-09-30 14:49:20 UTC: Proxy upstream: x.rmq.cloudamqp.com:5671 TLS
2024-09-30 14:49:20 UTC: Proxy listening on 0.0.0.0:5673
2024-09-30 14:49:31 UTC: Client connected: x.x.7.153:39604
2024-09-30 14:49:31 UTC: Client disconnected: x.x.7.153:39604: #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30 14:49:31 UTC: Client disconnected: x.x.7.153:39604
2024-09-30 14:49:40 UTC: Client connected: 127.0.0.1:51076
2024-09-30 14:49:51 UTC: Proxy upstream: x.rmq.cloudamqp.com:5671 TLS
2024-09-30 14:49:51 UTC: Proxy listening on 0.0.0.0:5673
2024-09-30T14:54:53.906675Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:43096) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:55:03.906615Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:37630) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:55:03.907420Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:37632) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:55:13.906579Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:58616) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:55:13.906747Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:58632) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-09-30T14:55:23.906661Z DEBUG amq_proxy.server Client connection failure (x.x.221.187:43128) #<AMQProxy::Client::NegotiationError:Client negotiation failed>

ganey reopened this Sep 30, 2024

carlhoerberg (Member) commented Sep 30, 2024

Hmm, interesting, so something with the TLS. Somehow the connection is established, but then after some packets are exchanged it crashes? In other cases exit code 132 might indicate a missing CPU feature, like AVX; maybe something with the AES/hardware acceleration that OpenSSL uses?
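
One way to test the CPU-feature theory (a sketch, to be run on an affected host) is to list which acceleration-related flags the CPU actually exposes:

grep -o -E 'aes|avx[0-9a-z_]*|sha_ni|pclmulqdq' /proc/cpuinfo | sort -u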

mrmason commented Oct 1, 2024

My theory is that this has happened since the cert on *.rmq.cloudamqp.com changed on 2024-08-07 - https://crt.sh/?q=rmq.cloudamqp.com

More information from Cloudflare here: https://blog.cloudflare.com/upcoming-lets-encrypt-certificate-chain-change-and-impact-for-cloudflare-customers/

It's possibly an error like you say, but I have been comparing OpenSSL installs on a machine it works on and one it doesn't, and I can't see anything obviously different.

Here is the /proc/cpuinfo from a machine this doesn't work on. This is running Ubuntu 22.04 LTS, so it's not out of date in terms of OS, and the hardware is fairly new.

processor       : 31
vendor_id       : AuthenticAMD
cpu family      : 25
model           : 17
model name      : AMD EPYC 9354 32-Core Processor
stepping        : 1
microcode       : 0xa101144
cpu MHz         : 3250.000
cache size      : 1024 KB
physical id     : 31
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 31
initial apicid  : 31
fpu             : yes
fpu_exception   : yes
cpuid level     : 16
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt nrip_save avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm arch_capabilities
bugs            : sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass srso
bogomips        : 6500.00
TLB size        : 3584 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 52 bits physical, 57 bits virtual
power management:

vs one it does work on:

processor	: 31
vendor_id	: GenuineIntel
cpu family	: 6
model		: 183
model name	: 13th Gen Intel(R) Core(TM) i9-13950HX
stepping	: 1
microcode	: 0x129
cpu MHz		: 800.000
cache size	: 36864 KB
physical id	: 0
siblings	: 32
core id		: 47
cpu cores	: 24
apicid		: 94
initial apicid	: 94
fpu		: yes
fpu_exception	: yes
cpuid level	: 32
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
vmx flags	: vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs		: spectre_v1 spectre_v2 spec_store_bypass swapgs eibrs_pbrsb rfds bhi
bogomips	: 4838.40
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

dentarg (Member) commented Oct 1, 2024

I was about to post a comment linking to this issue: kubernetes/kubernetes#94284, specifically thinking of the last comment: kubernetes/kubernetes#94284 (comment)

Interesting about the TLS cert... are you able to try amqproxy connecting against some other host over AMQPS that has a different cert?

ganey (Author) commented Oct 1, 2024

@dentarg We're getting the same issue with a custom cert/domain issued by AlphaSSL with these certs https://help.configuressl.com/alphassl-wildcard-intermediate-root-ca-cross-signed-ca-certificates-r6/

mrmason commented Oct 1, 2024

We have changed the SSL cert on CloudAMQP to our own custom cert issued by AlphaSSL and we're still seeing the same problem.

I can confirm the SSL is valid throughout the stack by running openssl s_client -connect x.com:5671, which returns:

New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)

It's hitting this in both instances:

raise NegotiationError.new "Client negotiation failed", ex

Client Logs

2024-10-01T09:45:42.951030Z DEBUG amq_proxy.server Client connection failure (x.70:35584) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-10-01T09:45:52.950888Z DEBUG amq_proxy.server Client connection failure (x.70:40780) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-10-01T09:45:52.951162Z DEBUG amq_proxy.server Client connection failure (x.70:40786) #<AMQProxy::Client::NegotiationError:Client negotiation failed>
2024-10-01T09:45:57.327680Z DEBUG amq_proxy.client[remote_address: "127.0.0.1:34336"] Connected
2024-10-01T09:45:57.418046Z  INFO amq_proxy.channel_pool[remote_address: "127.0.0.1:34336"] Adding upstream connection
2024-10-01T09:46:27.457603Z  INFO amq_proxy.server Proxy upstream: x.com:5671 TLS
2024-10-01T09:46:27.457785Z  INFO amq_proxy.server Proxy listening on 0.0.0.0:5673
2024-10-01T09:46:32.951686Z DEBUG amq_proxy.server Client connection failure (x.70:32816) #<AMQProxy::Client::NegotiationError:Client negotiation failed>

Server Logs

2024-10-01 09:45:57.386138+00:00 [info] <0.18437.4297> accepting AMQP connection <0.18437.4297> (x.70:13189 -> 10.57.20.103:5671)
2024-10-01 09:45:57.399581+00:00 [info] <0.18437.4297> connection <0.18437.4297> (x70:13189 -> 10.57.20.103:5671) has a client-provided name: AMQProxy 2.0.2
2024-10-01 09:45:57.414074+00:00 [info] <0.18437.4297> connection <0.18437.4297> (x.70:13189 -> 10.57.20.103:5671 - AMQProxy 2.0.2): user 'shtsaims' authenticated and granted access to vhost 'shtsaims'
2024-10-01 09:45:57.563005+00:00 [warning] <0.18437.4297> closing AMQP connection <0.18437.4297> (x.70:13189 -> 10.57.20.103:5671 - AMQProxy 2.0.2, vhost: 'shtsaims', user: 'shtsaims'):
2024-10-01 09:45:57.563005+00:00 [warning] <0.18437.4297> client unexpectedly closed TCP connection
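
To also rule out a protocol-version or certificate-chain difference on the upstream, the openssl check above could be extended along these lines (a sketch; x.com stands in for the real host):

openssl s_client -connect x.com:5671 -tls1_2 </dev/null   # force TLS 1.2
openssl s_client -connect x.com:5671 -tls1_3 </dev/null   # force TLS 1.3
openssl s_client -connect x.com:5671 -showcerts </dev/null | openssl x509 -noout -issuer -dates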

j-m-harris (Contributor) commented:

We modified amqproxy to log the exception stack trace:

2024-10-01T12:52:26.527391Z ERROR amq_proxy.server Client connection failure (redacted:36656)
Client negotiation failed (AMQProxy::Client::NegotiationError)
  from tmp/src/amqproxy/client.cr:264:7 in '->'
  from usr/share/crystal/src/fiber.cr:143:11 in 'run'
  from ???
Caused by: End of file reached (IO::EOFError)
  from usr/share/crystal/src/gc/boehm.cr:174:7 in '->'
  from usr/share/crystal/src/fiber.cr:143:11 in 'run'
  from ???

j-m-harris (Contributor) commented:

The stack trace is more useful once amqproxy is built with shards build --verbose:

Client negotiation failed (AMQProxy::Client::NegotiationError)
  from tmp/src/amqproxy/client.cr:57 in 'negotiate'
  from tmp/src/amqproxy/client.cr:20:31 in 'initialize'
  from tmp/src/amqproxy/client.cr:18:5 in 'new'
  from tmp/src/amqproxy/server.cr:63:11 in 'handle_connection'
  from tmp/src/amqproxy/server.cr:36:11 in '->'
  from usr/share/crystal/src/fiber.cr:143:11 in 'run'
  from usr/share/crystal/src/fiber.cr:95:34 in '->'
  from ???
Caused by: End of file reached (IO::EOFError)
  from usr/share/crystal/src/io.cr:525:27 in 'read_fully'
  from tmp/src/amqproxy/client.cr:217:7 in 'negotiate'
  from tmp/src/amqproxy/client.cr:20:31 in 'initialize'
  from tmp/src/amqproxy/client.cr:18:5 in 'new'
  from tmp/src/amqproxy/server.cr:63:11 in 'handle_connection'
  from tmp/src/amqproxy/server.cr:36:11 in '->'
  from usr/share/crystal/src/fiber.cr:143:11 in 'run'
  from usr/share/crystal/src/fiber.cr:95:34 in '->'
  from ???

j-m-harris (Contributor) commented:

If the amqproxy upstream is using amqps://, is it a requirement that the client connecting to amqproxy is also amqps?

carlhoerberg (Member) commented:

> If the amqproxy upstream is using amqps://, is it a requirement that the client connecting to amqproxy is also amqps?

No. (Actually it's not even possible to connect with amqps to amqproxy; our thinking has been that the proxy is installed in a trusted network, and that TLS then doesn't add any meaningful benefit.)
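
So the intended topology is roughly this (a sketch; hosts, credentials, and vhost are placeholders):

# client --amqp:// (plaintext)--> amqproxy:5673 --amqps:// (TLS)--> upstream:5671
AMQP_URL="amqps://user:pass@upstream.example.com/vhost" amqproxy --listen=0.0.0.0
# clients then connect with: amqp://user:pass@<proxy-host>:5673/vhost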

j-m-harris (Contributor) commented:

We've realised that at least part of the problem here (the socket EOFError) is coming from health-check requests that open the socket but don't send any data.
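
A plain TCP probe is enough to reproduce that part (a sketch): it opens the socket and closes it again without ever sending the AMQP protocol header, so the proxy's negotiation hits end-of-file.

nc -z 127.0.0.1 5673   # open and immediately close a TCP connection to the proxy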

j-m-harris (Contributor) commented:

On the affected host machine (in a containerised environment, FWIW), we are seeing:

Illegal instruction (core dumped) /usr/bin/amqproxy

However we don't yet have a core dump, and don't know whether the problem lies in OpenSSL, Crystal, or amqproxy.
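
In case it helps others, a sketch for capturing a core dump inside the container (writing core_pattern needs host-level privileges, and paths vary by distro):

ulimit -c unlimited
echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
/usr/bin/amqproxy --listen=0.0.0.0 --debug
# after the crash:
gdb /usr/bin/amqproxy /tmp/core.amqproxy.<pid>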

j-m-harris (Contributor) commented:

Core dump with backtrace for illegal instruction:

Reading symbols from /usr/bin/amqproxy...
[New LWP 7]
Core was generated by `/usr/bin/amqproxy --listen=0.0.0.0 --debug'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x00ffffffffffe001 in ?? ()

(gdb) where
#0  0x00ffffffffffe001 in ?? ()
#1  0x00007fc284435a58 in ?? ()
#2  0x00007fc284c6e000 in ?? ()
#3  0x00007fc286f75c60 in ?? ()
#4  0x0000558c5517ff60 in new () at /usr/share/crystal/src/slice.cr:70
#5  0x0000558c552650b4 in fill_buffer () at /usr/share/crystal/src/io/buffered.cr:272
#6  0x0000558c55264e5f in read () at /usr/share/crystal/src/io/buffered.cr:93
#7  0x0000558c55261b79 in read_fully? () at /usr/share/crystal/src/io.cr:542
#8  0x0000558c55261a6b in read_fully () at /usr/share/crystal/src/io.cr:525
#9  0x0000558c552e99d8 in from_io () at /tmp/lib/amq-protocol/src/amq/protocol/frames.cr:65
#10 0x0000558c552b0c18 in read_loop () at /tmp/src/amqproxy/upstream.cr:67
#11 0x0000558c552b0b31 in read_loop () at /tmp/src/amqproxy/upstream.cr:63
#12 0x0000558c55140a40 in -> () at /tmp/src/amqproxy/channel_pool.cr:44
#13 0x0000558c551b7274 in run () at /usr/share/crystal/src/fiber.cr:143
#14 0x0000558c5513ad86 in -> () at /usr/share/crystal/src/fiber.cr:95
#15 0x0000000000000000 in ?? ()

carlhoerberg (Member) commented:

OK! In gdb, can you run disassemble /m to get the instruction it crashes on?
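
Something like this, against the core dump above (a sketch):

(gdb) frame 0
(gdb) disassemble /m
(gdb) x/i $pc   # alternatively, print just the faulting instruction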

j-m-harris (Contributor) commented:

@carlhoerberg thanks for your help; unfortunately not yet: No function contains program counter for selected frame.

I'm presuming I need to be using debug versions of some system libraries to get that.
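
For completeness, on Debian/Ubuntu-style images one way to get those symbols is debuginfod, so gdb fetches them on demand (a sketch; the URL is Ubuntu's public debuginfod server):

export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
gdb /usr/bin/amqproxy core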

j-m-harris (Contributor) commented:

So if we build with shards build --debug we get a little more information in the stack:

#0  0x00ffffffffffe001 in ?? ()
#1  0x0000800000000000 in ?? ()
#2  0x00007ffff4d26990 in ?? ()
#3  0x00007ffff7866c60 in ?? ()
#4  0x00005555556ed1f2 in new (__arg0=0x7f0000008000 <error: Cannot access memory at address 0x7f0000008000>, __arg1=32767) at /usr/share/crystal/src/slice.cr:70
#5  0x00005555557e9939 in fill_buffer (self=0x7ffff7866c60) at /usr/share/crystal/src/io/buffered.cr:272
#6  0x00005555557e96a3 in read (self=0x7ffff7866c60, slice=...) at /usr/share/crystal/src/io/buffered.cr:93
#7  0x00005555557e6230 in read_fully? (self=0x7ffff7866c60, slice=...) at /usr/share/crystal/src/io.cr:542
#8  0x00005555557e60fb in read_fully (self=0x7ffff7866c60, slice=...) at /usr/share/crystal/src/io.cr:525
#9  0x0000555555878674 in from_io (io=0x7ffff7866c60) at /tmp/lib/amq-protocol/src/amq/protocol/frames.cr:65
#10 0x000055555583b600 in read_loop (self=0x7ffff7864780, socket=0x7ffff7866c60) at /tmp/src/amqproxy/upstream.cr:67
#11 0x000055555583b4f4 in read_loop (self=0x7ffff7864780) at /tmp/src/amqproxy/upstream.cr:63
#12 0x00005555556a90b9 in -> () at /tmp/src/amqproxy/channel_pool.cr:44
#13 0x00005555557295a5 in run (self=0x7ffff785bb40) at /usr/share/crystal/src/fiber.cr:143
#14 0x00005555556a282d in -> (f=0x7ffff785bb40) at /usr/share/crystal/src/fiber.cr:95
#15 0x0000000000000000 in ?? ()

And when running in gdb with display/i $pc we see this at SIGILL time:

    1: x/i $pc
    => 0xffffffffffe001:    <error: Cannot access memory at address 0xffffffffffe001>

AFAIK Crystal uses LLVM, whose trap intrinsic generates a ud2 instruction. That could mean the SIGILL is a red herring, with an invalid memory access as the root cause, which makes us suspect the problem is in the Crystal language itself.

j-m-harris (Contributor) commented:

I think we've narrowed this down to a mismatch of libssl versions between compile time and runtime. See issue #178.
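
A quick way to spot such a mismatch (a sketch; paths vary by distro) is to compare what the binary loads at runtime with what the system provides:

ldd /usr/bin/amqproxy | grep -E 'libssl|libcrypto'
openssl version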

mrmason linked pull request #180 on Oct 7, 2024 that will close this issue