
MariaDB service crashes since updating to Laravel 11 oom-kill possible memory leak in Laravel 11? #50807

Closed
sts-ryan-holton opened this issue Mar 28, 2024 · 40 comments

Comments

@sts-ryan-holton

Laravel Version

11.1.0

PHP Version

8.3.4

Database Driver & Version

MariaDB 10.11

Description

Hi 👋

I thought I'd just mention this in case anyone else has experienced this since updating to Laravel 11.

I use MariaDB 10.11 in production. Since updating, my database has been crashing. I've been running this server for a few years now, it has always been on the latest Laravel version, and it has always been fine sitting between 75% and 90% memory utilisation (of 8GB).

Steps To Reproduce

Two noticeable things are different in Laravel 11 compared to Laravel 10:

  1. Removal of dbal package
  2. Mariadb driver

I wanted to figure out whether I'm missing anything. As a result of this, I've had to increase my server's memory, at a cost. I do wonder whether there's a memory leak in how the driver works in Laravel 11?

× mariadb.service - MariaDB 10.11 database server
     Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: disabled)
     Active: failed (Result: oom-kill) since Thu 2024-03-28 04:00:36 EDT; 5min ago
   Duration: 1d 1h 19min 29.156s
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
    Process: 766 ExecStartPre=/usr/libexec/mariadb-check-socket (code=exited, status=0/SUCCESS)
    Process: 825 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
    Process: 874 ExecStart=/usr/libexec/mariadbd --basedir=/usr $MYSQLD_OPTS $_WSREP_NEW_CLUSTER (code=killed, signal=KILL)
    Process: 1443 ExecStartPost=/usr/libexec/mariadb-check-upgrade (code=exited, status=0/SUCCESS)
   Main PID: 874 (code=killed, signal=KILL)
     Status: "Taking your SQL requests now..."
        CPU: 2h 39min 52.288s

Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade[1473]:   1. Back-up your data before with 'mariadb-upgrade'
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade[1473]:   2. Start the database daemon using 'systemctl start mariadb.service'
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade[1473]:   3. Run 'mariadb-upgrade' with a database user that has sufficient privileges
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade[1473]: Read more about 'mariadb-upgrade' usage at:
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade[1473]: https://mariadb.com/kb/en/mysql_upgrade/
Mar 27 02:41:07 domain-monitor-2022 systemd[1]: Started MariaDB 10.11 database server.
Mar 28 04:00:36 domain-monitor-2022 systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Mar 28 04:00:36 domain-monitor-2022 systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
Mar 28 04:00:36 domain-monitor-2022 systemd[1]: mariadb.service: Failed with result 'oom-kill'.
Mar 28 04:00:36 domain-monitor-2022 systemd[1]: mariadb.service: Consumed 2h 39min 52.288s CPU time.

It's been almost 3 hours since rebooting my database server, and I'm seeing the memory usage here at 1GB

● mariadb.service - MariaDB 10.11 database server
     Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-03-28 04:25:45 EDT; 2h 23min ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
    Process: 834 ExecStartPre=/usr/libexec/mariadb-check-socket (code=exited, status=0/SUCCESS)
    Process: 887 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
    Process: 1182 ExecStartPost=/usr/libexec/mariadb-check-upgrade (code=exited, status=0/SUCCESS)
   Main PID: 946 (mariadbd)
     Status: "Taking your SQL requests now..."
      Tasks: 95 (limit: 100205)
     Memory: 1.0G
        CPU: 15min 19.823s
     CGroup: /system.slice/mariadb.service
             └─946 /usr/libexec/mariadbd --basedir=/usr

Mar 28 04:25:44 domain-monitor-2022 systemd[1]: Starting MariaDB 10.11 database server...
Mar 28 04:25:44 domain-monitor-2022 mariadb-prepare-db-dir[887]: Database MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
Mar 28 04:25:44 domain-monitor-2022 mariadb-prepare-db-dir[887]: If this is not the case, make sure the /var/lib/mysql is empty before running mariadb-prepare-db-dir.
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]: The datadir located at /var/lib/mysql needs to be upgraded using 'mariadb-upgrade' tool. This can be done using the fo>
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]:   1. Back-up your data before with 'mariadb-upgrade'
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]:   2. Start the database daemon using 'systemctl start mariadb.service'
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]:   3. Run 'mariadb-upgrade' with a database user that has sufficient privileges
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]: Read more about 'mariadb-upgrade' usage at:
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade[1212]: https://mariadb.com/kb/en/mysql_upgrade/
Mar 28 04:25:45 domain-monitor-2022 systemd[1]: Started MariaDB 10.11 database server.
sts-ryan-holton changed the title from "MariaDB service crashes since updating to Laravel 11 oom-kill" to "MariaDB service crashes since updating to Laravel 11 oom-kill possible memory leak in Laravel 11?" on Mar 28, 2024
@driesvints
Member

Hey there,

Can you first please try one of the support channels below? If you can actually identify this as a bug, feel free to open up a new issue with a link to the original one and we'll gladly help you out.

Thanks!

@sts-ryan-holton
Author

@driesvints I followed the advice on Stack Overflow and I'm being pushed back here, mate :) The response there was:

"I would raise this as a bug report on Laravel's support page, not here."

Hoping to see whether others get a similar problem.

@driesvints
Member

driesvints commented Mar 28, 2024

I've retained DB_CONNECTION=mysql in production

You need to use the new mariadb connection on Laravel v11.
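For reference, on a stock Laravel 11 install the switch is just the environment value, since the default config/database.php already ships a mariadb connection. A minimal sketch:

DB_CONNECTION=mariadb

followed by php artisan config:clear so a cached config picks up the change.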

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

Sure. I used Laravel Shift a few days ago to update, and it didn't change or mention anything about this. Does this mean I need to migrate all uuid columns over to the new type? I'm not quite sure of the compatibility of this if I ever want to go back to MySQL.

Right now I see first-party packages like Pulse continuing to use uuid, so it should be fine? The failed_jobs table uses uuid too.

Note: after changing DB_CONNECTION to mariadb I'm not seeing any reduction in memory. Currently at 2.3GB after 6 hours.

@driesvints
Member

If you use MariaDB, you're best off using that connection; it makes sure you don't deviate from it in the future. You should update your uuid columns as described in the upgrade guide. If the problem is the same either way then that's most likely not it. You're also free to stay on mysql if you really want that.

@hafezdivandari @staudenmeir do any of you have any ideas what could cause the memory for the new MariaDB connection to blow up like above?

@sts-ryan-holton can you please add more info to reproduce this? What sort of queries/usage is causing this?

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

I've switched to mariadb for now. I used Laravel Shift, and there were no problems there. A bit more about the setup:

I updated to Laravel 11 and deployed the changes. Then, almost 24 hours after the deployment, mariadb.service failed completely with the error attached above. The server is a DigitalOcean droplet with 8GB of memory and 4 vCPUs, and it's had that setup for at least 2 years. There were no problems on Laravel 9 or on Laravel 10. I'm running MariaDB 10.11.

My server has always run at around 75-90% memory. Moments before it crashed, memory usage went up to around 98% and it fell over. I restarted, and 24 hours later, this morning, the same thing happened. I've never had this happen before despite running hot on memory.

I run:

  • Laravel Horizon, roughly processing between 1,000 - 2,000 jobs per minute continuously and have around 60-70 PHP processes running
  • Laravel Pulse (separate Redis ingest)

CPU utilisation is never really that high, and when running SHOW FULL PROCESSLIST on the DB there's only around 70 active connections at any one given time.

This morning, I resized my droplet (a £30 monthly increase) to double my CPU and memory. The server was rebooted earlier to apply that change.

Earlier this morning, when opening this issue, the memory usage looked like this:

Tasks: 95 (limit: 100205)
     Memory: 1.0G
        CPU: 15min 19.823s

And right now:

 Tasks: 98 (limit: 100205)
     Memory: 2.5G
        CPU: 48min 52.869s

Over 1.5GB extra consumed over just 5 hours?

I have another project running Laravel 10 with around 4 database queues. It's been up for over a week and is at 1.0GB; I've observed it over an hour and it goes up to 1.1GB then back down to 1.0GB.

Also worth mentioning, I haven't made any config overrides to my.cnf for MariaDB at all in the past 2 years.

My worry here is that something in Laravel 11 is causing an increase in memory usage.

Laravel Shift recommended I add the following back after the upgrade:

APP_TIMEZONE=Europe/London
BCRYPT_ROUNDS=10
CACHE_STORE=file
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
DB_CONNECTION=mysql
LOG_DAILY_DAYS=3
MAIL_MAILER=smtp
MAIL_MARKDOWN_THEME=domain-monitor
QUEUE_CONNECTION=sync
QUEUE_FAILED_DRIVER=database
SESSION_DRIVER=file

And of course, I removed doctrine/dbal

Copy link

Thank you for reporting this issue!

As Laravel is an open source project, we rely on the community to help us diagnose and fix issues as it is not possible to research and fix every issue reported to us via GitHub.

If possible, please make a pull request fixing the issue you have described, along with corresponding tests. All pull requests are promptly reviewed by the Laravel team.

Thank you!

@sts-ryan-holton
Author

Just checked another server too, running CentOS, except it's not Laravel; it's at 1.1GB over 1 month. So I have:

  • WordPress, CentOS 9 -> 1.1GB over 1 month
  • Laravel 10, CentOS 9 -> 1.0GB over 1 week
  • Laravel 10, CentOS 9 (before the Laravel 11 upgrade) -> unknown, but never crashed in 2 years
  • Laravel 11, CentOS 9 (after the upgrade) -> 2.8GB after just 8 hours 👁️

@staudenmeir
Contributor

Does this server only run MariaDB or your whole application?

How does the number of active connections develop over time? Does it only increase or stay in the same range?
Do you have the Laravel 10 numbers for comparison?

do any of you have any ideas what could cause the memory for the new MariaDB connection to blow up like above?

I don't really see how Laravel could cause these crashes since it "only" sends queries to MariaDB. Laravel itself doesn't allocate any database memory or run background tasks inside the database.

#50044 is the "most fundamental" change in Laravel 11 I can think of, but I haven't seen any other reports of increased memory usage.

Two noticeable things are different in Laravel 11 compared to Laravel 10:
Removal of dbal package

This mostly (or even only) affects migration queries.

Mariadb driver

The MySQL and MariaDB drivers are identical except for the UUID column type (which only affects migration queries for new columns).

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

The server runs MariaDB and the whole application. It also runs a Nuxt.js 2.x front-end. I've checked the number of incoming HTTPD connections and there are only 7, and Google Analytics doesn't show many visitors on the site.

I've run SHOW FULL PROCESSLIST; multiple times throughout the day, and have run it a few times one after another. It fluctuates between 50 and 80 connections, and no single item is left executing beyond 200 seconds or so at any one given time, which is quite normal for the type of application I'm running.
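For anyone following along, a lighter-weight way to watch the connection count than eyeballing the full process list is to sample the standard status counters periodically (a sketch, assuming the client can authenticate from the shell):

mariadb -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected','Max_used_connections');"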

I experience the problem on both the mysql and mariadb drivers. I've experienced two MariaDB crashes since updating to Laravel 11 and PHP 8.3. I doubt PHP 8.3 is the issue here, given the application remains up despite the crash.

Since my comment 2 hours ago, which stated "Memory: 2.5G", I've checked again now: "Memory: 3.1G", so it has been rising. I fear a third crash is imminent overnight since updating to Laravel 11.

What about configs? Service providers? These are both changes in Laravel 11.

I don't really want to have to set up a cron to auto-restart mariadb.service daily :D

@staudenmeir
Contributor

since updating to Laravel 11 and PHP 8.3.

You updated both at the same time?

@sts-ryan-holton
Author

I updated PHP to 8.3 from 8.2 about an hour before.

Another note about the mariadb.service status: the number of tasks never really changes either; it only fluctuates between 96 and 98.

@hafezdivandari
Contributor

Maybe it's related to the queue? Do you have any queued jobs? What is your queue driver?

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

I'm using Laravel Horizon as my queue, which goes through Redis v7. But Redis never crashed when MariaDB crashed, and Redis utilisation never peaked either. My Redis is a "managed database" provided by DigitalOcean.

Here's the past 24 hours view; the crash happened at 8am this morning.

[screenshot: Redis managed database metrics over the past 24 hours]

As for queues, it's not overly busy compared to some platforms. Unless Horizon is connecting to the DB and something changed there?

[screenshot: Horizon queue throughput]

@staudenmeir
Contributor

Can you go back to PHP 8.2 but keep Laravel 11?

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

Okay, I've switched back to php 8.2. Retaining Laravel 11

[screenshot: PHP switched back to 8.2]

MariaDB's memory usage after switching is: Memory: 3.2G, so still high.

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 28, 2024

In case this is any use:

Locally, using Laravel Sail, PHP 8.3, MySQL 5.7 (image: 'mysql/mysql-server:5.7'), Horizon, Redis etc., I'm running the Laravel task scheduler with schedule:work and Horizon is processing everything normally, as if it were in production. I'm seeing the following:

Looking at the MySQL container in Docker Desktop, the memory usage graph has gone up over a 5-minute period: it started at 226MB at 19:30 and had risen to 236MB by 19:35, an increase of 10MB in 5 minutes, with very little workload.

I don't want this to distract from the issue. Maybe there's something up with tasks being scheduled in Laravel 11.

Meanwhile, production now at 3.3GB memory usage

@ionutantohi

A bit unrelated, but I encountered a similar issue, not with the database but with PHP itself being oom-killed once I switched to PHP 8.3. In my case it was due to some image manipulations I had to perform; I had to limit the maximum image size being produced to no more than 2000x2000. Still, on PHP 8.2 it worked fine with larger images.

@sts-ryan-holton
Author

Database memory rose by 3GB overnight. It's now sitting at 6.1GB.

@staudenmeir
Contributor

Is the database used by anything else besides the Laravel application?

Did you recently update MariaDB or change its configuration?

Are you using any "special" features like procedures or triggers?

For some people, switching to TCMalloc fixed the memory leaks:
https://mariadb.com/kb/en/using-mariadb-with-tcmalloc-or-jemalloc/
https://jira.mariadb.org/browse/MDEV-30889
https://www.reddit.com/r/mariadb/comments/xqvqbp/configure_mariadb_to_use_tcmalloc/
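Roughly, the switch is a systemd drop-in that preloads the allocator before mariadbd starts. A sketch only; the package name and library path below are assumptions and vary by distro (the same approach works for jemalloc):

# install tcmalloc (package name and .so path are distro-dependent)
sudo dnf install gperftools-libs
# add a drop-in via: systemctl edit mariadb.service
#   [Service]
#   Environment="LD_PRELOAD=/usr/lib64/libtcmalloc.so.4"
sudo systemctl restart mariadb.service
# verify which allocator is in use
mariadb -e "SHOW GLOBAL VARIABLES LIKE 'version_malloc_library';"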

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 29, 2024

Is the database used by anything else besides the Laravel application?

It is not

Did you recently update MariaDB or change its configuration?

At least once a week, if not bi-weekly, I try to run dnf update on my server, and I've kept this up for over a year now, but I've never actually changed MariaDB's version. I also composer update my application at least once a week, or at worst once a month, so I'm effectively always running whatever changes are being made to the framework.

For some people, switching to TCMalloc fixed the memory leaks:

I've spent a few hours this morning digging further and came across jemalloc. I found that when running:

SHOW GLOBAL VARIABLES LIKE 'version_malloc_library';

It outputs "system". After some googling on how to install it, I edited mariadb.service and added:

[Service]
Environment="LD_PRELOAD=/usr/lib64/libjemalloc.so.2"

I restarted the database server, and version_malloc_library now outputs jemalloc.

Upon rebooting, memory output was 260MB. That was roughly half an hour ago, and now we're sitting at 702MB, so it's still rising. I would hope it doesn't go beyond 1.1GB, since my other application running Laravel 10 doesn't go beyond that. Interestingly, when running SHOW GLOBAL VARIABLES LIKE 'version_malloc_library'; on my other Laravel 10 application I get no entries - but it's not a fair test, as it's using MySQL 8.0. Sure, I could go down the road of migrating my entire database, but I doubt that would solve this issue if it's framework related or first-party package related.

I'll switch back to PHP 8.3 whilst I'm here.

I feel that there's something in Laravel 11 making excessive connections to the DB?

@staudenmeir
Contributor

staudenmeir commented Mar 29, 2024

I feel that there's something in Laravel 11 making excessive connections to the DB?

The queue worker would be my first (and really only) suspect, but since you are using Redis, it only uses the database for batching and storing failed jobs, AFAIK. Are you using batching, or do you have a lot of failed jobs/retries?

Besides that, Laravel should only connect to the database once for every user request.

@sts-ryan-holton
Author

I'm not using job batching. In the past hour Horizon has processed 43,536 jobs with 4 failures, each of which has had 2 retry attempts.

I tried pausing Horizon for a brief moment (hard to do in production systems) to see whether the memory usage would drop, and it didn't. But I agree with the idea that the queue worker could be doing something. Yes, it connects to Redis, but it also has to keep an open connection to the database. Could it be failing to close the connection, or creating a sleeping worker that never gets removed? I'm not sure; I've restarted Horizon.

Just checked in on MariaDB's memory usage and currently:

Tasks: 91 (limit: 100205)
     Memory: 1.0G
        CPU: 14min 39.529s

@driesvints
Member

@sts-ryan-holton what if you stop the queue worker altogether and monitor whether the memory usage still goes up? That way we'll know for sure.

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 29, 2024

I ran sudo supervisorctl stop horizon and observed 2m 30s of Horizon not running at all. Meanwhile, I had htop running and was also periodically running systemctl status mariadb.service.

Over that time, the number of tasks remained at 91, memory at 1.1G, and CPU at 16m.

Here's the status after bringing supervisor back up. Of course, the built-up jobs would explain the higher server memory usage here.

Other than that, no change sadly.

[screenshots: htop output and mariadb.service status]

In addition, I reached out to the hosting provider, DigitalOcean, to get their thoughts. They're pointing back to the application.

@driesvints
Member

So it's definitely the queue workers. @sts-ryan-holton the next thing we can try is using Laravel's queue worker directly instead of horizon. What happens if you start processing jobs through the queue worker and leave horizon stopped?

@sts-ryan-holton
Author

As in me manually running queue:work in a terminal @driesvints ?

@driesvints
Member

yeah, take horizon out of the scope entirely and see if the Laravel queue worker has the same issue.

@sts-ryan-holton
Author

Okay, I'll run the same test as above, except I'll have three separate tabs open and run the following:

php artisan queue:work redis-short-running --queue=on-demand-runs-now,on-demand-stripe-listeners,cron,listeners,redis-short-running --tries=1 --max-jobs=1000 --max-time=300 --sleep=3

php artisan queue:work redis --queue=on-demand-runs-now,uptime,notifications,dns,certificates,domains-expires,default --tries=1 --max-jobs=1000 --max-time=300 --sleep=3

php artisan queue:work redis-long-running --queue=on-demand-runs-now,domains,blacklists,subscriptions,redis-long-running --tries=1 --max-jobs=1000 --max-time=300 --sleep=3

Essentially manually running them, I'll report back the same types of screenshots

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 29, 2024

Okay, same test performed, except I manually ran the queue workers. The three separate queue worker tabs running the commands above all ran successfully without any errors thrown in the output.

htop shows the same kind of output as with Horizon stopped entirely (the htop output here is with Horizon stopped and the queue workers running manually), and systemctl status mariadb.service shows the same task count, albeit 1 less, which rose back to 92, with the same memory and the same CPU time.

[screenshots: manual queue workers running, htop output, and mariadb.service status]

@driesvints
Member

Then it's most likely Horizon, not Laravel, or maybe a combination of both. Do you still know which version of Horizon you were on before you made the upgrade to Laravel v11?

@sts-ryan-holton
Author

Prior to the upgrade, I was on:

  • Horizon: 5.23.1
  • Laravel framework: 10.48.3

Post upgrade...

  • Horizon: 5.23.2
  • Laravel framework: 11.1.1

@driesvints
Member

@sts-ryan-holton can you try to downgrade to Horizon 5.23.1 and laravel v11.0.7 and see if the issue persists?
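Something along these lines should pin those versions (a sketch; the exact constraints and flags may need adjusting for your other dependencies such as Pulse):

composer require laravel/framework:11.0.7 laravel/horizon:5.23.1 --with-all-dependencies
php artisan config:clear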

@sts-ryan-holton
Author

sts-ryan-holton commented Mar 29, 2024

I've downgraded Laravel to 11.0.7 and Horizon to 5.23.1. I had to also downgrade Pulse to Beta 15 from 16 as it required at least 11.0.8.

Memory of database prior to restart: 1.4GB

I've restarted Horizon, cleared config and cache.

I've also restarted mariadb.service at the same time here, so at the time of this reply:

straight after restarting mariadb:

Tasks: 34 (limit: 100205)
     Memory: 262.5M
        CPU: 706ms

@sts-ryan-holton
Author

Within a minute of restarting the mariadb service, the task count and memory have already risen:

Tasks: 71 (limit: 100205)
     Memory: 309.2M
        CPU: 8.757s

@driesvints
Member

Thanks. I'm sorry, but I'm out of ideas. Right now this doesn't seem Laravel or Horizon related to me, but rather something specific to your app, since we don't have any other reports and Laravel v11 has already been live for a month. I'm going to close this one for now, but if more reports come in we can have another look. Anyone's still free to help you out, either here or somewhere else.

@sts-ryan-holton
Author

I've reverted back to framework 11.1.1 and the latest Horizon v5, on PHP 8.3. The only thing I can think of then is to set up a cron job to restart mariadb once a day, or more often (sketched below). I'm hoping others come across this; it's quite a major thing.
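For the record, the workaround I mean would just be a crontab entry along these lines (a sketch; the time is arbitrary, and it's a band-aid rather than a fix):

# /etc/crontab - restart MariaDB daily at 04:30 (workaround only)
30 4 * * * root /usr/bin/systemctl restart mariadb.service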

@sts-ryan-holton
Author

@driesvints @staudenmeir Okay, since the last comment I posted, something interesting has happened. One thing I observed in the mariadb.log file was thousands upon thousands of [ERROR] lines like the following:

2024-03-29 12:51:31 5813 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).
2024-03-29 12:51:31 5813 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').

One suggestion I was given was to run mariadb-upgrade. I've never run this command on my server, and despite the errors above, which have occurred daily, they had never caused the server to crash.

Anyhow, I ran mariadb-upgrade. For the 3 hours prior to running it I had logged the memory usage with this command:

systemctl show mariadb.service | grep "MemoryCurrent" | awk -F= '{printf "%.0f\n", $2/1024/1024}'
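For anyone wanting to reproduce the sampling, a rough loop around that command (the interval and log path here are arbitrary choices):

# sample mariadb.service memory every 30 minutes
while true; do
  mem=$(systemctl show mariadb.service | grep "MemoryCurrent" | awk -F= '{printf "%.0f\n", $2/1024/1024}')
  echo "$(date '+%F %H:%M') ${mem}MB" >> /tmp/mariadb-memory.log
  sleep 1800
done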

I then performed the upgrade, whilst:

  • Running PHP 8.3
  • Laravel 11.1.1
  • Latest Horizon

And logged the memory afterwards:

prior to `mariadb-upgrade`

191MB at 2024-03-29 15:00 (30m change: +0mb, 1h change: +0mb)
572MB at 2024-03-29 15:30 (30m change: +381mb, 1h change: +0mb)
667MB at 2024-03-29 16:00 (30m change: +95mb, 1h change: +476mb)
775MB at 2024-03-29 16:30 (30m change: +108mb, 1h change: +203mb)
866MB at 2024-03-29 17:00 (30m change: +91mb, 1h change: +199mb)
1053MB at 2024-03-29 18:00 (30m change: +0mb, 1h change: +187mb)

3h change: +862mb (+287mb per hour)

after `mariadb-upgrade`

269MB at 2024-03-29 18:00 (30m change: +0mb, 1h change: +0mb)
428MB at 2024-03-29 18:30 (30m change: +159mb, 1h change: +0mb)
446MB at 2024-03-29 19:00 (30m change: +18mb, 1h change: +177mb)
444MB at 2024-03-29 19:30 (30m change: +0mb, 1h change: +16mb)
463MB at 2024-03-29 20:00 (30m change: +19mb, 1h change: +35mb)
455MB at 2024-03-29 20:30 (30m change: +0mb, 1h change: +11mb)
477MB at 2024-03-29 21:00 (30m change: +22mb, 1h change: +14mb)

3h change: +208mb (+69mb per hour)

Since then I've woken up this morning. It's been around 13 hours now, and the memory usage right now is:

  • 483MB

Previously it would've been over 6GB by now.

Note that I no longer have the excessive errors in the logs. I do wonder, though, given that Laravel 11 no longer uses the dbal package: were the mysql.column_stats errors I was seeing actually down to my never having run the upgrade command, with the removal of that package changing how those statistics are touched, so that a problem which had existed all along only now came to light? That might also explain why there haven't been any other reports of this.

I am still using jemalloc but might try removing this and defaulting it back to "system" for the rest of the day.
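If anyone wants to check the same thing on their own server, a quick sketch (the log path is an assumption and varies by distro):

# confirm the repaired definitions now show the expected column types
mariadb -e "SHOW CREATE TABLE mysql.column_stats\G" | grep -E "histogram|hist_type"
# count whether the 'Incorrect definition' errors have stopped appearing
sudo grep -c "Incorrect definition of table mysql.column_stats" /var/log/mariadb/mariadb.log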

@natecarlson

natecarlson commented Apr 9, 2024

It sounds like MariaDB was upgraded without mysql_upgrade being run, and that something related to the MariaDB upgrade was causing the memory issue, coincidentally with (but possibly exacerbated by queries run by) the Laravel upgrade.

You mention dnf, so I assume Fedora. I haven't used Fedora in forever.. but it sounds like you can use 'dnf history list' to show package history (ref) - I believe it would be 'dnf history list mariadb-server'. Can you please run that and post the results?

It's worth noting that, unless there is a bug in the database server, nothing that Laravel (or any other database client) does should be able to cause the server to use more than the configured memory amounts/etc. It sounds like your MariaDB instance is probably configured to use more memory than your system can actually support; I suspect that if the buffer sizes/connection pool limits/etc were configured to the limits you expect, even with this issue the database server probably would not have run out of memory - but instead would have started running into major performance issues once all memory available to it was allocated, which would have hopefully made this issue easier to debug. ref - note that MariaDB's recommendations for memory utilization and such usually are written with the expectation that MariaDB is the sole process on a system/container - so adjust accordingly.
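As a starting point for that kind of audit, something like this dumps the settings that usually dominate MariaDB's memory ceiling (the list is illustrative, not exhaustive, and the right values depend entirely on your workload and the memory you can actually spare):

mariadb -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size','innodb_log_buffer_size','max_connections','tmp_table_size','max_heap_table_size','sort_buffer_size','join_buffer_size');"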

For what it's worth, here's another reference to MariaDB running poorly with similar symptoms until mysql_upgrade is run: home-assistant/core#87811

In any case - you'll probably want to make sure that whatever method you use to keep your system up to date will either only allow minor release upgrades for critical processes, or else ensure that things like mysql_upgrade will automatically be run upon major upgrades.

Edit: also, just as context - the mysql.* tables are internal tables used by the database for any number of purposes, and they can also be queried directly. In this case, it was probably MariaDB's optimizer referencing those tables to figure out the best query execution plan for whatever queries were being run by Laravel.
