Make BulkInfoProviderService GREAT again. #1110
Conversation
Added logic to clean provider keys and handle corrupted keys during bulk processing.
Implement provider key cleaning logic in BulkInfoProvider
Technical Review & Action Plan for the Bulk Info Provider Fix

The primary reported symptom was that all bulk info provider jobs were failing immediately with a RuntimeException. Initial log analysis revealed that the root cause was not a lack of results, but a fatal error occurring before any API calls were made. When a provider object is passed where its string key is expected, the stringified object ends up in the provider list as a corrupted key, and the core logic fails to match it against the expected clean key. The current working patch (the one submitted in this PR) implements a robust, multi-step cleaning and matching logic to handle this specific data corruption.
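The exact cleaning steps are not preserved in this capture; as an illustration only, a minimal sketch of the kind of normalization described (the helper name and the assumption that the real key survives somewhere inside the stringified value are mine, not the patch's) could look like:

```php
<?php
// Hypothetical sketch of the cleaning/matching logic described above --
// not the literal patch. Assumes a corrupted key is a stringified object
// that still contains the real provider key as a substring.
function cleanProviderKey(string $rawKey, array $knownProviderKeys): ?string
{
    // Step 1: if the key is already clean, use it as-is.
    if (in_array($rawKey, $knownProviderKeys, true)) {
        return $rawKey;
    }

    // Step 2: otherwise, try to find a known key embedded in the
    // corrupted (stringified-object) value.
    foreach ($knownProviderKeys as $key) {
        if (str_contains($rawKey, $key)) {
            return $key;
        }
    }

    // Step 3: no match -- treat the key as unusable and skip it
    // instead of letting the job crash.
    return null;
}
```

The key design point is step 3: an unmatchable key is skipped rather than thrown, so one corrupted entry no longer aborts the whole bulk job.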
Status: This patch successfully resolves the primary bug. The main bulk import workflow (Step 1) now correctly parses the provider keys, collects the part keywords, and successfully fetches results from the provider API.
While Patch 11.1 fixes the main workflow, further testing revealed a minor edge case. When a job is already in Step 2 (review), clicking the "Research All" button still fails. Log analysis of this specific action shows that the same corrupted key reaches this code path, which the cleaning logic in Patch 11.1 was not designed to cover.
The fix for this final edge case is minor and builds directly upon the working code in this Pull Request.
Conceptual fix: apply the same cleaning step on the "Research All" code path in the same service. (Note: a previous attempt to implement this fix, Patch 11.2, contained a fatal syntax error and was discarded.) Conclusion: this Pull Request (Patch 11.1) solves 99% of the critical bug; the final edge case can be addressed in a small follow-up.
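The conceptual code itself did not survive the capture; a self-contained sketch of the idea (all names here are hypothetical) is to normalize the provider value before the "Research All" lookup, accepting either a string key or a stray object:

```php
<?php
// Hypothetical sketch: guard the "Research All" (Step 2) path with the
// same normalization used in Step 1. Names are illustrative only.
function resolveProviderKey(mixed $provider, array $knownProviderKeys): ?string
{
    // An object may arrive here instead of a string key; stringify it
    // so the matching below works on both shapes.
    $raw = is_string($provider) ? $provider : print_r($provider, true);

    foreach ($knownProviderKeys as $key) {
        if ($raw === $key || str_contains($raw, $key)) {
            return $key;
        }
    }

    return null; // corrupted beyond recovery -- skip instead of crashing
}
```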
Tested this patch today. Seems to work fine, thanks for this! Edit: selecting a large number of references (80 parts) seems to hit the Mouser API limit.
Hello! I'm glad you liked it! I'll take a look at Mouser's limit, but even slowing down the queries, Part-DB itself will break when launching big bulk import jobs. Even with a bigger memory limit, jobs bigger than 100-150 queries will return an error 500. Anyway, since I'd like to fix the research-all feature inside bulk import, I'll do my best to also fix the timing of Mouser's queries. For now, try to launch a limited number of parts to update. It would be awesome if you could share some logs about Mouser's limits; my parts come from LCSC, so I haven't had any issues.
I think the correct approach for that would be to run large batch processing asynchronously, outside of any web request, in separate worker tasks. Symfony Messenger offers that functionality quite easily, but it isn't really used anywhere so far. If it runs asynchronously, you can split the job into smaller batches, since time doesn't matter: just start the import and come back later.
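For the record, a minimal sketch of what that Messenger-based approach could look like (class names, the attribute-based handler registration, and the batch size of 25 are my illustrative choices, not Part-DB code):

```php
<?php
// Hypothetical sketch of an async batch import with Symfony Messenger.

use Symfony\Component\Messenger\Attribute\AsMessageHandler;

// The message: a small, serializable batch of part keywords.
final class ImportPartBatchMessage
{
    public function __construct(public readonly array $keywords) {}
}

// The handler runs in a worker process (bin/console messenger:consume),
// so slow provider calls never block a web request.
#[AsMessageHandler]
final class ImportPartBatchHandler
{
    public function __invoke(ImportPartBatchMessage $message): void
    {
        foreach ($message->keywords as $keyword) {
            // ... query the info provider for one part and store the result
        }
    }
}

// Dispatch side: split a large job into small batches and enqueue them.
// $bus is a Symfony\Component\Messenger\MessageBusInterface.
//
// foreach (array_chunk($allKeywords, 25) as $batch) {
//     $bus->dispatch(new ImportPartBatchMessage($batch));
// }
```

Because each batch is an independent message, rate limits and memory limits apply per batch rather than per job, and a failed batch can be retried without redoing the whole import.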
Understood! Moving to an async worker/queue system sounds like the perfect long-term solution to avoid timeouts and rate limits without freezing the UI. However, that architecture change is a bit above my current PHP skills (I'm mostly an electronics guy fixing what I broke!). For now, this PR just fixes the crashing bug so the feature works again. Maybe we can open a separate 'Feature Request' issue to implement the Async/Symfony Messenger system later? I will definitely try to help, looks like a challenge! |
Codecov Report ❌

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             master    #1110      +/-   ##
============================================
- Coverage     58.41%   58.37%   -0.05%
- Complexity     7275     7293      +18
============================================
  Files           579      579
  Lines         23147    23192      +45
============================================
+ Hits          13521    13538      +17
- Misses         9626     9654      +28
```
So I looked into this, and the underlying problem is that an object instance of an info provider was passed where the provider key (a string) was expected. Unfortunately, PHP cannot enforce the element types of an array at runtime, and since the array was filled by a form, static analysis couldn't catch it either. The AI-proposed code does a lot of complicated stuff to extract the provider key out of a stringified version of the whole object. That is not really elegant and will probably break quite easily. I have already implemented this fix in the master branch, and it will be part of the next Part-DB release.
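The maintainer's actual fix lives in the master branch; as an illustration of its shape (normalize at the boundary instead of parsing stringified objects; the `getProviderKey()` accessor is an assumption of this sketch), it might look like:

```php
<?php
// Hypothetical illustration of fixing the bug at its source: make sure
// the code downstream only ever sees provider *keys* (strings), never
// provider objects. The getProviderKey() method is assumed here.
function normalizeProviders(array $providers): array
{
    return array_map(
        static fn ($p) => is_string($p) ? $p : $p->getProviderKey(),
        $providers
    );
}
```

Converting at the point where the form data enters the service removes the need for any string-scraping of serialized objects later on.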
Love you! AI is never the way. Thanks for taking a look into it, for real, thanks!
Don't hate me, I vibe coded it. It works on my machine, lol. Nah, joking aside: it has been vibe coded, but it does work perfectly fine in my container (I run the latest image). The feature is amazing! We really needed it working.
I have ONLY modified the file: src/Services/InfoProviderSystem/BulkInfoProviderService.php
Love you guys, amazing work!!!