8.0 manage invalid data error #188
Conversation
Discussion continued from #187.

@gfcapalbo I just tried this at the root of connector, importing `exception`:

```python
try:
    raise exception.InvalidDataTypeError()
except (exception.FailedJobError, Exception):
    print "caught it"
```

It works as expected. Can you provide a more detailed example?
I think the issue comes from connector/connector/exception.py, line 23 in b5c69d0:

it should be `Exception` and not `RuntimeError`. I don't know why the hell I derived it from `RuntimeError`...
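Whichever base class is used, the derivation alone can't explain the miss: a class derived from `RuntimeError` is still caught by `except Exception:`, since `RuntimeError` subclasses `Exception`. A minimal sketch (class names here are illustrative, not the connector's actual hierarchy):

```python
# Illustrative classes only -- not the connector's real exception types.
class JobError(RuntimeError):
    pass

class InvalidDataError(JobError):
    pass

def run_job():
    try:
        raise InvalidDataError("bad input")
    except Exception as exc:  # RuntimeError subclasses Exception, so this matches
        return type(exc).__name__

print(run_job())  # -> InvalidDataError
```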
```python
>>> isinstance(RuntimeError(), Exception)
True
```

So it must be something else, no?
Still.
I think that deriving from RuntimeError or Exception shouldn't make any difference, since RuntimeError derives from Exception. In a minimal version in a .py file, the same exception structure works as expected (it catches InvalidDataError too, with an `except Exception:`). At the time of this posting I had a client waiting for a connector with jobs with invalid data stuck on start, so the content of this MR is clearly the nuclear option to deliver a fix. Doing more tests later, I hope to give you more insight.
I 👎 - a specific exception guard is required everywhere IMO. I think the issue is instead possibly an incorrect type comparison deeper down the line - maybe something less elegant than `isinstance`.
@gfcapalbo do you maybe have a traceback from when this issue was occurring? Long shot, but you never know what it could hold.
Hi, maybe related: today I was hit by jobs stuck in 'started' after a simple except_orm. The exception itself is caught correctly, but another exception is raised afterwards, during the update of the job.
This fixes my problem of jobs getting stuck in the 'started' state after an exception was raised during the job: #214. In my case, one or more ORM records were created during the job before the exception was raised. @gfcapalbo, was it the same in your case?
- The job exception types have been moved to the module `queue_job`.
- They are still loaded / proxied by the module `connector.exception`.
- But somehow this requires that the module is correctly imported.
- It is better to always use `import`. See: http://effbot.org/zone/import-confusion.htm
- Import made similar to: OCA/connector#188 (comment)
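The import-confusion article above describes the likely failure mode: when the same source file is imported under two different module names, Python builds two distinct class objects, so an `except` clause naming one class will not catch instances of the other. A minimal reproduction, with hypothetical file and module names rather than the connector's actual layout:

```python
# Sketch of the "import confusion" failure mode: the same file loaded
# under two module names yields two distinct exception classes.
import importlib.util
import os
import tempfile

SRC = "class InvalidDataError(Exception):\n    pass\n"

def load_as(name, path):
    """Load the file at `path` as a module named `name`."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "exception.py")
    with open(path, "w") as f:
        f.write(SRC)
    mod_a = load_as("exception", path)      # e.g. `import exception`
    mod_b = load_as("pkg.exception", path)  # e.g. `from pkg import exception`

# Same source file, but two distinct class objects:
print(mod_a.InvalidDataError is mod_b.InvalidDataError)  # -> False

try:
    raise mod_a.InvalidDataError("boom")
except mod_b.InvalidDataError:
    matched_other = True   # never reached: different class object
except Exception:
    matched_other = False  # only the builtin Exception base matches

print(matched_other)  # -> False
```

This would explain why a guard listing the specific exception class misses it, while a broader `except Exception:` (or a bare `except:`) still works: both copies of the class derive from the single builtin `Exception`.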
[MIG] Rename manifest file

[MIG] Replace "openerp" occurrences with "odoo"

[MIG] Do not use RedmineConnectorSession anymore (issue #212)
- extend ConnectorEnvironment to host the redmine_cache that was previously extended in RedmineConnectorSession
- remove the session parameter from connector.get_environment (the connector module takes the environment from the backend object now); the default language from the backend is not applied there, because the session object is not available anymore
- simplify RedmineModelBinder, reusing the generalizations of Binder, so the session property is no longer needed
- proxy the redmine_cache of RedmineImportSyncronizer to the one saved in the connector environment; the same happens for RedmineAdapter
- simplify RedmineImportSynchronizer to avoid the use of session
- remove the "session" parameter from: import_batch, import_record, mock_delay, get_environment, import_single_user_time_entries. It is not replaced with "env", because env parameter passing was also removed in the last version of odoo_connector; the environment is now taken from the backend_record
- that's why some methods now pass the backend object instead of the backend_id: the backend is used to get the environment, not the opposite

[MIG] Replace `hr.analytic.timesheet` by `account.analytic.line`
- since v9 the model `hr.analytic.timesheet` has been replaced by `account.analytic.line`: https://github.com/OCA/OpenUpgrade/blob/9.0/addons/hr_timesheet/migrations/9.0.1.0/post-migration.py#L17
- note that in v9 we also need a mapper, in order to set `is_timesheet = True` during the import of the timesheet records

[MIG] Use queue_job module from OCA/queue, see: OCA/queue#1

[MIG] Renamed ImportSynchronizer to Importer, see: OCA/connector#47

[MIG] Import exception module
- the job exception types have been moved to the module `queue_job`
- they are still loaded / proxied by the module `connector.exception`
- but somehow this requires that the module is correctly imported
- it is better to always use `import`; see: http://effbot.org/zone/import-confusion.htm
- import made similar to: OCA/connector#188 (comment)

[MIG] Slight changes to the Binder
- rename 'to_backend' to 'to_external'
- rename 'to_odoo' to 'to_internal'
- see: OCA/connector@fef7ee2

pep8 adjustments

[MIG] Rename the ImportSynchronizers that were left

[MIG] Fix hr_timesheet_sheet_form
- the `buttons` element taken as reference by the previous version does not exist anymore
- another position was selected, so the result is the same

Fix pylint suggestions

Fix pyflakes errors
#214 is merged, please re-open if needed.
Problem:

I would get many jobs stuck in 'started'. These jobs had all raised InvalidDataError.

Strangely the exception was not caught in main.py, but the generic …

Therefore the store function that had to write my failed job would never trigger.

When explicitly importing InvalidDataError, the exception was caught (see commit gfcapalbo/connector@5b77f3c).

I replaced `except (FailedJobError, InvalidDataError, Exception):` with a bare `except:`.

I am still not sure why this works for me, but it seems OK to set ALL of the foreseeable exceptions to fail jobs.
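For what it's worth, the two forms do differ at the top of the hierarchy: `except Exception:` misses BaseException subclasses such as SystemExit and KeyboardInterrupt, while a bare `except:` catches everything (and under Python 2, which this 8.0 branch targets, a bare `except:` also caught old-style and string exceptions). A sketch in Python 3 syntax:

```python
def caught_by_exception_clause(exc):
    """Return True if `except Exception:` handles exc,
    False if only a bare except would."""
    try:
        raise exc
    except Exception:
        return True
    except:  # noqa: E722 -- bare except also catches BaseException subclasses
        return False

print(caught_by_exception_clause(ValueError("bad data")))  # -> True
print(caught_by_exception_clause(SystemExit(1)))           # -> False
```

Whether that top-of-hierarchy gap, or the import-confusion issue discussed above, is what made the bare `except:` behave differently here is not confirmed by this thread; both are consistent with the symptom.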