
2.0.25 gives Restarting (132) issue in docker #2959

Closed
2 tasks
archerne opened this issue Oct 5, 2023 · 20 comments · Fixed by #2961
Labels
priority/p0 Critical bug without workaround / Must have type/bug Bug. Not working as intended

Comments

@archerne

archerne commented Oct 5, 2023

Environment & Version

Environment

  • [x] docker compose
  • kubernetes
  • docker swarm

Version

  • Version: 2.0.25

Description

I pulled the latest build (2.0.25) and the Mailu Docker containers would no longer start. They go to a status of "Restarting (132)" and just restart continuously.

Replication Steps

Observed behaviour

Expected behaviour

The Docker containers start up and run.

Logs

@tdeseez

tdeseez commented Oct 5, 2023

Same here.
How do I revert to 2.0.24 until a fix is released?

@Dennis14e
Contributor

Same here.
How do I revert to 2.0.24 until a fix is released?

Create an .env file in your Mailu directory with this content:

MAILU_VERSION=2.0.24

Then run:

docker compose down
docker compose pull
docker compose up -d

@Diman0
Member

Diman0 commented Oct 6, 2023

Your issue does not contain sufficient information for troubleshooting. It does not even contain logs.
This version was tested on x64, armv7 and arm64 hardware. I also cannot replicate this in my environment.

Please provide logs and more information about the hardware you are running this on. The CPU properties are especially relevant. You can get these via cat /proc/cpuinfo

@Dennis14e
Contributor

@Diman0
My /proc/cpuinfo (from a single core):

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 6
model           : 6
model name      : QEMU Virtual CPU version 2.5+
stepping        : 3
microcode       : 0x1000065
cpu MHz         : 1996.247
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm nopl cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave rdrand hypervisor lahf_lm cmp_legacy abm 3dnowprefetch ssbd ibpb vmmcall arch_capabilities
bugs            : fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 3992.49
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

For me, there are no logs. With docker compose logs -f, I only see:

mailu-antivirus-1 exited with code 132
mailu-webmail-1 exited with code 0
mailu-oletools-1 exited with code 0
mailu-imap-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-resolver-1 exited with code 132
mailu-admin-1 exited with code 132
mailu-front-1 exited with code 0
mailu-antispam-1 exited with code 132
mailu-webmail-1 exited with code 132
mailu-antivirus-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-oletools-1 exited with code 132
mailu-resolver-1 exited with code 132
mailu-admin-1 exited with code 132
mailu-antispam-1 exited with code 0
mailu-webmail-1 exited with code 132
mailu-antivirus-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-oletools-1 exited with code 132
mailu-admin-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-antispam-1 exited with code 132
mailu-resolver-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-webmail-1 exited with code 132
mailu-antivirus-1 exited with code 0
mailu-smtp-1 exited with code 132
mailu-antispam-1 exited with code 132
mailu-admin-1 exited with code 132
mailu-webmail-1 exited with code 132
mailu-oletools-1 exited with code 132
mailu-imap-1 exited with code 132
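For reference, an exit code above 128 means the container's main process was killed by a signal (code − 128); 132 is 128 + 4, i.e. SIGILL (illegal instruction), which fits a binary using CPU instructions the host does not support. A small generic sketch (not Mailu code) to decode such statuses:

```python
import signal

def decode_exit_status(code):
    """Docker reports 128 + N when a container's main process dies from signal N."""
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited with status {code}"

print(decode_exit_status(132))  # killed by SIGILL
```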

@Diman0
Member

Diman0 commented Oct 6, 2023

Does it work when you set an empty LD_PRELOAD= in your mailu.env file?

@Dennis14e
Contributor

Almost: all containers except oletools start. oletools still exits with code 132.

@Dennis14e
Contributor

I just noticed that on 2.0.24 I have the following message at the very beginning of every container's log:

WARNING:root:Disabling hardened-malloc on this CPU

Might that help? @Diman0

@Diman0
Member

Diman0 commented Oct 6, 2023

This is probably due to the updated hardened malloc; it may require more modern hardware now.
LD_PRELOAD= should disable it because of
https://github.com/Mailu/Mailu/blob/b71039572c2f49f6dc2c81a67cdc465473e752fd/core/base/libs/socrate/socrate/system.py#L76C65-L76C65

but oletools has not been updated to use this method:
https://github.com/Mailu/Mailu/blob/b71039572c2f49f6dc2c81a67cdc465473e752fd/core/oletools/Dockerfile

So oletools must be updated to use this method as well.

@Dennis14e
Which processor did you configure QEMU to emulate? Hopefully I can replicate the issue if I configure the same CPU model.

A potential workaround is to pass all the CPU flags of the host processor to the VM.
For more info see:
https://www.qemu.org/docs/master/system/qemu-cpu-models.html#qemu-command-line
https://www.qemu.org/docs/master/system/qemu-cpu-models.html#libvirt-guest-xml
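The system.py check linked above works along these lines; the sketch below is a simplified illustration (the function names and required flag set are hypothetical, the real list lives in Mailu's code), showing how LD_PRELOAD can be cleared when /proc/cpuinfo lacks a required flag:

```python
REQUIRED_FLAGS = {'avx2'}  # assumption: the flag the new hardened-malloc build needs

def cpu_flags(cpuinfo_text):
    """Return the set of flags from the first 'flags' line of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            return set(line.split(':', 1)[1].split())
    return set()

def maybe_disable_hardened_malloc(cpuinfo_text, env):
    """Blank LD_PRELOAD (disabling hardened-malloc) if the CPU lacks required flags."""
    if not REQUIRED_FLAGS <= cpu_flags(cpuinfo_text):
        env['LD_PRELOAD'] = ''
    return env
```

With the kvm64-style flag lists posted in this thread (no avx2), this would blank LD_PRELOAD, matching the observed workaround.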

@Diman0 Diman0 added priority/p0 Critical bug without workaround / Must have type/bug Bug. Not working as intended labels Oct 6, 2023
@Dennis14e
Contributor

@Diman0
I didn't configure QEMU; I use a VM from a hosting provider. I don't know what real CPU is used.

@Ezwen

Ezwen commented Oct 6, 2023

Same problem here, and same as @Dennis14e: I am using a VM from a hosting provider, which I cannot control.

@denk-mal

denk-mal commented Oct 6, 2023

I have the same problem on a 'real' server; /proc/cpuinfo:

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Pentium(R) Silver N6000 @ 1.10GHz
stepping : 0
microcode : 0x24000024
cpu MHz : 2688.031
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data
bogomips : 2227.20
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Pentium(R) Silver N6000 @ 1.10GHz
stepping : 0
microcode : 0x24000024
cpu MHz : 1100.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data
bogomips : 2227.20
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Pentium(R) Silver N6000 @ 1.10GHz
stepping : 0
microcode : 0x24000024
cpu MHz : 1100.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data
bogomips : 2227.20
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Pentium(R) Silver N6000 @ 1.10GHz
stepping : 0
microcode : 0x24000024
cpu MHz : 1100.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data
bogomips : 2227.20
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

@archerne
Author

archerne commented Oct 6, 2023

Here is my /proc/cpuinfo (for one core):

processor	: 6
vendor_id	: GenuineIntel
cpu family	: 15
model		: 6
model name	: Common KVM processor
stepping	: 1
microcode	: 0x1
cpu MHz		: 1995.007
cache size	: 16384 KB
physical id	: 1
siblings	: 6
core id		: 0
cpu cores	: 6
apicid		: 8
initial apicid	: 8
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 128
address sizes	: 40 bits physical, 48 bits virtual
power management:

It is just using the default (kvm64) processor on Proxmox.
Physical server info:

processor       : 20
vendor_id       : GenuineIntel
cpu family      : 6
model           : 46
model name      : Intel(R) Xeon(R) CPU           E7540  @ 2.00GHz
stepping        : 6
microcode       : 0xd
cpu MHz         : 1064.257
cache size      : 18432 KB
physical id     : 0
siblings        : 12
core id         : 11
cpu cores       : 6
apicid          : 22
initial apicid  : 22
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 x2apic popcnt lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
vmx flags       : vnmi preemption_timer invvpid ept_x_only flexpriority tsc_offset vtpr mtf vapic ept vpid ple
bugs            : clflush_monitor cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips        : 3989.97
clflush size    : 64
cache_alignment : 64
address sizes   : 44 bits physical, 48 bits virtual
power management:

@dennisvanderpool

+1

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Celeron(R) CPU 1017U @ 1.60GHz
stepping : 9
microcode : 0x13
cpu MHz : 798.172
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave lahf_lm cpuid_fault epb pti tpr_shadow flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm arat pln pts vnmi
vmx flags : vnmi preemption_timer invvpid ept_x_only flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 3193.70
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

@tdeseez

tdeseez commented Oct 7, 2023

Thanks for the tip, it worked as expected. Should this be documented in the wiki?

So 2.0.24 works, but the latest version does not at the moment I am writing this.

I run Mailu on Arch Linux x86_64, nothing very fancy. I never ran into this kind of trouble updating Mailu until now.

Same here.
How do I revert to 2.0.24 until a fix is released?

Create an .env file in your Mailu directory with this content:

MAILU_VERSION=2.0.24

Then run:

docker compose down
docker compose pull
docker compose up -d

@Cavien

Cavien commented Oct 7, 2023

root@mail:/mailu# docker-compose up
[+] Running 12/12
⠿ Network mailu_webmail Created 0.1s
⠿ Network mailu_default Created 0.1s
⠿ Network mailu_noinet Created 0.1s
⠿ Container mailu-resolver-1 Created 0.1s
⠿ Container mailu-oletools-1 Created 0.1s
⠿ Container mailu-redis-1 Created 0.1s
⠿ Container mailu-front-1 Created 0.1s
⠿ Container mailu-admin-1 Created 0.2s
⠿ Container mailu-smtp-1 Created 0.1s
⠿ Container mailu-antispam-1 Created 0.2s
⠿ Container mailu-imap-1 Created 0.2s
⠿ Container mailu-webmail-1 Created 0.2s
Attaching to mailu-admin-1, mailu-antispam-1, mailu-front-1, mailu-imap-1, mailu-oletools-1, mailu-redis-1, mailu-resolver-1, mailu-smtp-1, mailu-webmail-1
mailu-redis-1 | 1:C 07 Oct 2023 02:03:52.800 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see jemalloc/jemalloc#1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
mailu-redis-1 | 1:C 07 Oct 2023 02:03:52.800 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mailu-redis-1 | 1:C 07 Oct 2023 02:03:52.800 * Redis version=7.2.1, bits=64, commit=00000000, modified=0, pid=1, just started
mailu-redis-1 | 1:C 07 Oct 2023 02:03:52.800 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.801 * monotonic clock: POSIX clock_gettime
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.802 * Running mode=standalone, port=6379.
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * Server initialized
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * Loading RDB produced by version 7.2.1
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * RDB age 62 seconds
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * RDB memory usage when created 0.83 Mb
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * Done loading RDB, keys loaded: 0, keys expired: 0.
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * DB loaded from disk: 0.000 seconds
mailu-redis-1 | 1:M 07 Oct 2023 02:03:52.806 * Ready to accept connections tcp
mailu-resolver-1 exited with code 0
mailu-oletools-1 exited with code 0
mailu-resolver-1 exited with code 0
mailu-admin-1 exited with code 132
mailu-oletools-1 exited with code 0
mailu-front-1 exited with code 0
mailu-smtp-1 exited with code 132
mailu-webmail-1 exited with code 0
mailu-oletools-1 exited with code 0
mailu-imap-1 exited with code 0
mailu-resolver-1 exited with code 132
mailu-admin-1 exited with code 0
mailu-antispam-1 exited with code 0
mailu-smtp-1 exited with code 132
mailu-webmail-1 exited with code 0
mailu-oletools-1 exited with code 132
mailu-admin-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-resolver-1 exited with code 132
mailu-antispam-1 exited with code 132
mailu-front-1 exited with code 0
mailu-webmail-1 exited with code 0
mailu-admin-1 exited with code 132
mailu-oletools-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-resolver-1 exited with code 132
mailu-webmail-1 exited with code 0
mailu-antispam-1 exited with code 132
mailu-front-1 exited with code 0
mailu-admin-1 exited with code 132
mailu-imap-1 exited with code 132
mailu-smtp-1 exited with code 132
mailu-antispam-1 exited with code 132

@nextgens
Contributor

nextgens commented Oct 7, 2023

We have been talking about this on #mailu-dev: the new version of hardened-malloc requires the AVX2 instruction set, but our current test only checks for AVX, which explains why Mailu now fails to start on some CPUs.

Assuming you are using and controlling a virtualized environment (QEMU), you may be able to change the CPU features exposed and enable AVX2 (mainstream CPUs have had it since Haswell, 2013).

We will release a new build shortly that will address the problem by disabling hardened-malloc by default.
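To check whether a given host is affected, you can look for the avx2 flag in /proc/cpuinfo (a Linux-only sketch; the messages are illustrative):

```shell
# Does this CPU advertise AVX2, the instruction set the updated hardened-malloc needs?
if grep -qw avx2 /proc/cpuinfo 2>/dev/null; then
    echo "avx2 present"
else
    echo "avx2 missing"
fi
```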

@Cavien

Cavien commented Oct 7, 2023

OK, looking forward to the new release.

bors bot added a commit that referenced this issue Oct 8, 2023
2961: Hardened malloc was not disabled for oletools when a CPU with missing flags is used r=mergify[bot] a=Diman0

## What type of PR?
bug fix

## What does this PR do?
Updates oletools to also disable hardened malloc when the CPU in use is missing flags

### Related issue(s)
- closes #2959 

## Prerequisites
Before we can consider review and merge, please make sure the following list is done and checked.
If an entry is not applicable, you can check it or remove it from the list.

- [n/a] In case of feature or enhancement: documentation updated accordingly
- [x] Unless it's docs or a minor change: add [changelog](https://mailu.io/master/contributors/workflow.html#changelog) entry file.


Co-authored-by: Dimitri Huisman <diman@huisman.xyz>
Co-authored-by: Dimitri Huisman <52963853+Diman0@users.noreply.github.com>
Co-authored-by: Florent Daigniere <nextgens@users.noreply.github.com>
Co-authored-by: Florent Daigniere <nextgens@freenetproject.org>
@bors bors bot closed this as completed in 04d6914 Oct 8, 2023
@nextgens
Contributor

nextgens commented Oct 8, 2023

v2.0.27 is out and should fix this; let us know if it's still broken

@Zottel92

Zottel92 commented Oct 8, 2023

Everything runs smoothly again.
Thank you for the fix :)

@tdeseez

tdeseez commented Oct 10, 2023

I just ran v2.0.27; it appears to work fine.
Thank you very much for your support.
