
Container exits with 132 after starting #802

Open

Luna-devv opened this issue Mar 22, 2024 · 3 comments

Comments

@Luna-devv

Hey,
I tried to start the Docker container for KeyDB, but no matter what I do it refuses to start; it always stops right away with no logs or anything. Any idea?

[screenshot]
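For context, a container exit status of 132 conventionally means the process was killed by signal 4 (SIGILL, illegal instruction), since Docker reports 128 + the signal number when a container dies from a signal. A minimal sketch for confirming which signal ended the process (the container name is a placeholder):

$ docker inspect --format '{{.State.ExitCode}}' <container>
132
$ kill -l 4    # map signal number 4 back to its name
ILL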

CPU Information

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         40 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
    CPU family:          6
    Model:               26
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           2
    Stepping:            5
    CPU max MHz:         2926.0000
    CPU min MHz:         1596.0000
    BogoMIPS:            5866.97
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca 
                         sse4_1 sse4_2 popcnt lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm flush_l1d
Virtualization features: 
  Virtualization:        VT-x
Caches (sum of all):     
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    2 MiB (8 instances)
  L3:                    16 MiB (2 instances)
NUMA:                    
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-3,8-11
  NUMA node1 CPU(s):     4-7,12-15
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                   Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Unknown: No mitigations
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected
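Since SIGILL usually points at an instruction the CPU doesn't support, it may be relevant that the flags above list sse4_2 but no avx (the X5570 is a Nehalem-era part, which predates AVX). A quick sketch for listing which of these newer extensions the host actually exposes:

$ grep -o -w 'sse4_2\|avx\|avx2' /proc/cpuinfo | sort -u    # on this host only sse4_2 should appear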

Memory usage

$ free -m
               total        used        free      shared  buff/cache   available
Mem:           48254       13457       19505         225       15292       34151
Swap:              0           0           0

OS - Linux

[screenshot]

@mueller-ma

I can reproduce this issue on Debian 12. Even with the Docker log level set to debug there's no output:

$ docker -D -l debug run --rm eqalpha/keydb:x86_64_v6.3.4
DEBU[0000] [hijack] End of stdout
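If the crash really is a SIGILL, the host kernel log usually records a trap for the faulting binary even when Docker itself prints nothing; a sketch, using the same tag as above:

$ docker run --rm eqalpha/keydb:x86_64_v6.3.4; echo "exited with $?"
$ sudo dmesg | grep -i 'invalid opcode'    # a SIGILL from user space typically leaves a "trap invalid opcode" entry here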

@mueller-ma

It seems to be an issue with the latest version. I tried several docker tags:

Works:

  • unstable
  • x86_64_v6.3.3

Doesn't work:

  • x86_64_v6.3.4
  • alpine_x86_64_v6.3.4
  • latest
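A quick loop for checking several tags in one pass (a sketch; it assumes the image entrypoint forwards arguments to keydb-server the way the official Redis image does, so --version exits immediately instead of starting a server):

$ for tag in unstable x86_64_v6.3.3 x86_64_v6.3.4 alpine_x86_64_v6.3.4 latest; do
>   docker run --rm "eqalpha/keydb:$tag" keydb-server --version
>   echo "keydb:$tag -> exit $?"
> done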

@HendrikGrobler

This started happening for me too, using the alpine tag at first. What's stranger is that my cached local image does work, but when I tried to run it on a fresh server it exited silently. From the Docker Hub page it looks like the alpine tag hasn't been updated in 6 months, though. I can confirm that the unstable and 6.3.3 tags do seem to be working.
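One way to tell whether the cached image and a fresh pull are actually the same build is to compare digests (tag taken from the comment above; a sketch):

$ docker image inspect --format '{{index .RepoDigests 0}}' eqalpha/keydb:alpine    # digest of the locally cached image
$ docker pull eqalpha/keydb:alpine | grep -i digest                                # digest the registry serves right now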
