
How to check Migration seems to hanging #1179

Closed
AleksCee opened this issue Feb 20, 2024 · 22 comments

@AleksCee

AleksCee commented Feb 20, 2024

After the hotfix I can start the migration, but it seems to hang now. How can I check or resume it?

The count of 635 has not changed for more than 10 minutes.

select count(*) from results; select count(*) from results_bad_json;
+----------+
| count(*) |
+----------+
| 635 |
+----------+
1 row in set (0.001 sec)

+----------+
| count(*) |
+----------+
| 7257 |
+----------+
1 row in set (0.007 sec)

@AleksCee
Author

When it stops I see this log output in the Docker console:
speedtest-tracker | ./run: line 2: 115 Killed s6-setuidgid webuser php $WEBUSER_HOME/artisan queue:work --tries=3 --no-ansi -q
speedtest-tracker | ./run: line 2: 223 Killed s6-setuidgid webuser php $WEBUSER_HOME/artisan queue:work --tries=3 --no-ansi -q

Now I have tried it again and it stops after 800 records.

@AleksCee
Author

AleksCee commented Feb 20, 2024

OK, my problem is fixed. It seems that the migration task was terminated too quickly. After the system load was lower, I truncated the results table and started the migration again. Now it was faster and finished before the task was killed.
Perhaps the task runner timeout should be increased for this job?

@jaggel

jaggel commented Feb 20, 2024

I had the same issue and ended up doing "partial" migrations chunk by chunk: moving some records, then deleting the migrated ones from the _bad_json table and restarting the migration. I agree that the timeout should be increased, especially for installations with a large number of records.

@sschneider

sschneider commented Feb 20, 2024

After starting the migration, two messages appear immediately:
"Starting data migration..." and "There was an issue migrating the data!"

I will look into it later and share more information.

@alexjustesen
Owner

OK, my problem is fixed. It seems that the migration task was terminated too quickly. After the system load was lower, I truncated the results table and started the migration again. Now it was faster and finished before the task was killed.
Perhaps the task runner timeout should be increased for this job?

I'll increase the timeout of the job. I tested with ~5,000 records and didn't hit this, but that comes down to how fast the host system is.
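
For reference, a minimal sketch of one way to raise a queued job's timeout in Laravel, via the $timeout property; the class name, value, and body are illustrative assumptions, not the project's actual migration job:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

// Hypothetical job class, not the app's real one.
class MigrateResultsData implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Allow the migration to run for up to 30 minutes before the worker fails it.
    public $timeout = 1800;

    public function handle(): void
    {
        // ...move rows from results_bad_json into results...
    }
}
```

Note that the worker process also has its own limit (php artisan queue:work --timeout=...), and the "Killed" lines in the log above suggest the process may have been terminated by the OS (for example, out of memory) rather than by the queue timeout.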

@alexjustesen alexjustesen added the 🐛 bug Something isn't working label Feb 20, 2024
@alexjustesen alexjustesen self-assigned this Feb 20, 2024
@alexjustesen
Owner

@AleksCee how long was the process running before it stopped?

@AleksCee
Author

@alexjustesen it looks like about 6 minutes from the first request to /results until the "Killed" entry in the logs.

@alexjustesen
Owner

What hardware are you running on? That feels really slow.

@AleksCee
Author

It's a Synology NAS (DS716+), but at update time the backup jobs were still running.

@AleksCee
Author

@alexjustesen btw, regarding timing: when starting docker-compose with MariaDB and Speedtest, the DB migration sometimes starts before the SQL server is ready to accept connections, because the DB container runs a release upgrade after being updated to the latest tag. Could you perhaps check the connection in a small retry loop? If the database is not ready, the Speedtest container crashes and restarts 2-3 times, which can be a problem in some cases after an update.
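
To illustrate the retry idea, here is a minimal, self-contained sketch (not the app's actual startup code) that waits for the database before continuing; the environment variable names are assumptions based on a typical Laravel .env:

```php
<?php
// wait-for-db.php: hypothetical helper, run before starting the app or its migrations.

$dsn  = sprintf('mysql:host=%s;dbname=%s', getenv('DB_HOST'), getenv('DB_DATABASE'));
$user = getenv('DB_USERNAME');
$pass = getenv('DB_PASSWORD');

$maxAttempts = 30;

for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    try {
        new PDO($dsn, $user, $pass);   // connection succeeded, the DB is ready
        echo "Database is ready.\n";
        exit(0);
    } catch (PDOException $e) {
        echo "Waiting for database ($attempt/$maxAttempts): {$e->getMessage()}\n";
        sleep(2);                      // back off briefly before the next attempt
    }
}

echo "Database never became ready, giving up.\n";
exit(1);
```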

@juanmanuelbc

I'll increase the timeout of the job. I tested with ~5,000 records and didn't hit this, but that comes down to how fast the host system is.

My Synology DS920+ wasn't able to handle the migration of 7,182 results, so I ran the database migration on my laptop and now everything is working like a charm...

Thank you!!!

@alexjustesen
Owner

@alexjustesen btw, regarding timing: when starting docker-compose with MariaDB and Speedtest, the DB migration sometimes starts before the SQL server is ready to accept connections, because the DB container runs a release upgrade after being updated to the latest tag. Could you perhaps check the connection in a small retry loop? If the database is not ready, the Speedtest container crashes and restarts 2-3 times, which can be a problem in some cases after an update.

I'm planning on updating that doc page with a health check so it waits for a healthy DB connection. GitBook has been having an issue with that component for the last few days, so I haven't been able to make updates. I'll probably have to just delete it and make a new one.

@sschneider

@alexjustesen my SQLite results_bad_json table has over 22 thousand entries.
The error points to a part of the framework (the bindValues function):

[2024-02-20 17:49:59] production.ERROR: Error: Object of class stdClass could not be converted to string in /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:723

I guess this means it is not part of the previously described issue.
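
For context on that error, here is a small self-contained sketch of one possible cause (an assumption, not taken from the app's code): a decoded JSON object (stdClass) being bound as a query parameter instead of a string. The table and data below are made up for illustration:

```php
<?php
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE results (data TEXT)');

$decoded = json_decode('{"download": 123456}'); // stdClass, like a decoded results_bad_json row

$stmt = $pdo->prepare('INSERT INTO results (data) VALUES (?)');

try {
    $stmt->execute([$decoded]); // an object cannot be bound as a string
} catch (Throwable $e) {
    echo $e->getMessage(), PHP_EOL; // "Object of class stdClass could not be converted to string"
}

// Re-encoding the object to a JSON string before binding avoids the error:
$stmt->execute([json_encode($decoded)]);
```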

@alexjustesen
Owner

@sschneider 22k! What, are you running it every 5 min? I think you might be at the top of the leaderboard lol

That was way outside of my test criteria. Sounds like I'll have to split the job up into batches that can be processed separately to avoid long-running processes.
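
As a rough sketch of what such batching could look like with Laravel's chunkById (the table and column names are assumptions, not the project's actual schema or job code):

```php
<?php

use Illuminate\Support\Facades\DB;

// Process the bad rows in small batches so no single pass runs long enough to be
// killed; deleting the migrated rows lets a restarted run resume where it left off.
DB::table('results_bad_json')->chunkById(500, function ($rows) {
    foreach ($rows as $row) {
        DB::table('results')->insert([
            // Re-encode the stored JSON so it is bound as a plain string.
            'data'       => json_encode(json_decode($row->data)),
            'created_at' => $row->created_at,
            'updated_at' => $row->updated_at,
        ]);
    }

    DB::table('results_bad_json')
        ->whereIn('id', $rows->pluck('id'))
        ->delete();
});
```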

@sschneider

sschneider commented Feb 20, 2024

@alexjustesen since 2022-10-31, when I set it up for the second or third time :-).
The check runs every 30 minutes ;-)
Thanks for all the work you put into this :)

@AleksCee
Author

I have 45,576 in the old Speedtest (waiting for the importer ;) ), 7,600 in yours, and about the same number of rows were lost when changing from SQLite to MySQL because I can't find a way to import the dump into MySQL.
The test runs every 30 minutes, too. :-) We love this tool :-)

@thegodfatherrelish

This comment was marked as duplicate.

@AleksCee AleksCee changed the title from "How to check Migration seams to hanging" to "How to check Migration seems to hanging" Feb 21, 2024
@alexjustesen
Owner

@thegodfatherrelish different issue, follow #1205 for that one.

@HiiiiiHa

HiiiiiHa commented Feb 22, 2024

Hello,
I launched the migration one hour ago on a Synology 220+. No notification of completion so far.
Results are still showing empty. I will wait some more, but it looks like something went wrong.

Any advice? Thanks.

Update 1: no progress after 24h, so I updated to the latest version and launched the migration again... after more than 30 min still nothing.
Update 2: upset by this, I decided to reinstall from scratch after cleaning everything, and now I'm facing a SQLSTATE[HY000] [2002] Connection refused error.

@SAS-1

SAS-1 commented Feb 22, 2024

Same issue running on a Synology NAS 920+, and my bad results table has 33,913 rows in it. I kicked the migration off and it did 17,703 and just stopped. It would be nice to do the migration fully, but it's not major.

Maybe we could get a SQL script we could run manually to get the data over?

@sschneider

sschneider commented Feb 22, 2024

@alexjustesen I checked the records in more detail, and it might be that my issue is related to different JSON formats: with "" (most) and without "" (764).
[screenshot comparing the two record formats]

@alexjustesen
Owner

Import/export is coming in 0.21.0, closing out these migration issues as this system will be removed in favor of the framework's version.
