[IMP] base: Make it easier to extend Attachment #33392

Closed
wants to merge 4 commits into from

Conversation

keshrath
Contributor

@keshrath keshrath commented May 15, 2019

Description of the issue/feature this PR addresses:

Fixes #33456

Current behavior before PR:

At the moment it is not easy to add a new storage type without overwriting several methods. The same applies to the method for migrating attachments.

Desired behavior after PR is merged:

This patch aims to improve that. An example that uses these changes is the addition of the Large Object storage type.

Large Object: https://github.com/muk-it/muk_base/blob/12.0/muk_attachment_lobject/models/ir_attachment.py

Since the methods `_get_datas_inital_vals`, `_update_datas_vals`, `_get_datas_clean_vals` and `_clean_datas_after_write` are used, this would even work if several storage extensions were installed on the system, because every module now only has to implement its own cleanup method.
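The cooperative cleanup idea can be sketched outside of Odoo as plain Python (hypothetical classes and fields for illustration; the real hooks would live on `ir.attachment`): each storage extension overrides the hook, handles only its own storage type, and delegates the rest of the chain to `super()`.

```python
# Sketch of the cooperative cleanup pattern, in plain Python (not
# actual Odoo code). Each storage extension overrides the cleanup
# hooks, handles only the storage it knows about, and chains to super().
class BaseAttachment:
    def _get_datas_clean_vals(self, record):
        # Base implementation: remember the filestore name so the file
        # can be removed after the new value has been written.
        return {'file': record.get('store_fname')}

    def _clean_datas_after_write(self, clean_vals):
        cleaned = []
        if clean_vals.get('file'):
            cleaned.append(('file', clean_vals['file']))
        return cleaned


class LargeObjectAttachment(BaseAttachment):
    def _get_datas_clean_vals(self, record):
        vals = super()._get_datas_clean_vals(record)
        vals['lobject'] = record.get('store_lobject')  # hypothetical field
        return vals

    def _clean_datas_after_write(self, clean_vals):
        cleaned = super()._clean_datas_after_write(clean_vals)
        if clean_vals.get('lobject'):
            cleaned.append(('lobject', clean_vals['lobject']))
        return cleaned
```

With this chaining, several storage extensions can coexist: each module cleans up only its own storage and never needs to know about the others.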

--
I confirm I have signed the CLA and read the PR guidelines at www.odoo.com/submit-pr

@robodoo robodoo added seen 🙂 CI 🤖 Robodoo has seen passing statuses labels May 15, 2019
@keshrath
Contributor Author

There is another problem that this pull request does not solve: #33456

Although I can understand the idea behind this step, it is impossible to call `write` in a subclass without deleting these fields. Perhaps this could be controlled via the context, but since these fields are readonly anyway, they could be omitted altogether:

```python
if not self.env.context.get('storage', False):
    for field in ('file_size', 'checksum'):
        vals.pop(field, False)
```
@robodoo robodoo added CI 🤖 Robodoo has seen passing statuses and removed CI 🤖 Robodoo has seen passing statuses labels May 17, 2019
@mart-e
Contributor

mart-e commented May 27, 2019

Is this ready? You are not only creating overridable methods but also changing the logic in some places, which seems odd.
Please explain why you are making these changes and how you would actually add a new storage method.

@keshrath
Contributor Author

keshrath commented May 27, 2019

@mart-e

First commit:

The only place where the logic changes is in the method `force_storage`, since in my eyes this method did not have the desired effect. The rule

```python
if not any(arg[0] in ('id', 'res_field') for arg in args):
```

causes not all attachments to be migrated. The separation into two methods (`force_storage` and `migrate`) is basically only for a better overview.

The context

```python
attach.with_context(migration=True).write({'datas': attach.datas})
```

is necessary because otherwise a lot of mimetypes would break. Writing the value `datas` triggers a re-evaluation of the mimetype even though nothing has changed; and since only `datas` is given as a value, the recomputation is wrong in many cases.
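The effect can be illustrated with a small stand-alone sketch (the `guess_mimetype` stub and `write` semantics here are simplifications for illustration, not Odoo's actual implementation): rewriting `datas` alone triggers a recomputation that sniffs the raw bytes and clobbers the stored mimetype, unless the migration context suppresses it.

```python
def guess_mimetype(data):
    # Crude stand-in for content sniffing: most binary payloads fall
    # back to the generic octet-stream type.
    return 'application/octet-stream'

def write(record, vals, context=None):
    context = context or {}
    record.update(vals)
    # Writing 'datas' normally re-evaluates the mimetype, even though
    # the content itself has not changed during a storage migration.
    if 'datas' in vals and not context.get('migration'):
        record['mimetype'] = guess_mimetype(vals['datas'])
    return record

record = {'datas': b'<svg/>', 'mimetype': 'image/svg+xml'}
# Without the context key, the migration rewrite clobbers the mimetype:
write(dict(record), {'datas': record['datas']})
# With migration=True, the stored mimetype survives:
write(dict(record), {'datas': record['datas']}, context={'migration': True})
```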

The splitting of `_inverse_datas` into several methods does not change the logic; it only serves better extensibility.

Before:
https://github.com/muk-it/muk_base/blob/11.0/muk_attachment_lobject/models/ir_attachment.py

Afterwards:
https://github.com/muk-it/muk_base/blob/12.0/muk_attachment_lobject/models/ir_attachment.py

This solves the following scenario:

The filestore must be checked each time, and the file may have to be deleted:

```python
if fname:
    self._file_delete(fname)
```

This is not a major problem yet, because an extension could do the same. But if that extension also has such a rule and yet another extension exists, they would have to take each other into account or know about one another, which in my eyes is very hard to maintain.

Second commit:

This commit addresses the problem described here: #33456

I do not find this solution particularly safe from a programming point of view.

https://github.com/muk-it/muk_base/blob/12.0/muk_attachment_lobject/models/ir_attachment.py#L111

The super method is also called here, but it only works in this one file and not in models that inherit from it.

https://github.com/odoo/odoo/pull/33392/files#diff-65be9bcf9d09c93a04039c00e1861b9fR248

I hope it is now a little clearer what the background of this pull request is.

@robodoo robodoo removed the CI 🤖 Robodoo has seen passing statuses label Jun 21, 2019
@keshrath
Contributor Author

keshrath commented Jun 21, 2019

@mart-e Did you have the opportunity to read my explanation, and is it now clearer what this PR is trying to achieve? I have adapted the PR to the current master branch.

@robodoo robodoo added the CI 🤖 Robodoo has seen passing statuses label Jun 21, 2019

@mart-e mart-e left a comment


Sorry for the late reply, and thanks for the explanations. I made a first review and will check it again next week.

```
@@ -191,31 +231,22 @@ def _compute_datas(self):
            attach.datas = self._file_read(attach.store_fname, bin_size)
        else:
            attach.datas = attach.db_datas


    @api.multi
```

No need for `api.multi`, it is the default.

```python
record_domain = [
    '&', ('type', '=', 'binary'),
    '&', storage_domain[self._storage()],
    '|', ('res_field', '=', False), ('res_field', '!=', False)
```

Is this clause there to avoid the `_search` override? Adding a comment to explain it is probably a good idea.

```python
storage = self._storage().upper()
for index, attach in enumerate(self):
    _logger.info(_("Migrate Attachment %s of %s to %s") % (index + 1, record_count, storage))
    attach.with_context(migration=True).write({'datas': attach.datas})
```

And why not give the existing mimetype in the write values? That way we avoid introducing a context key to skip the verification.
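The reviewer's alternative can be sketched in plain Python (hypothetical simplified `write` semantics, not Odoo's actual code): if the existing mimetype is passed along explicitly in the write values, the recomputation never kicks in and no context key is needed.

```python
def guess_mimetype(data):
    # Stand-in for content sniffing.
    return 'application/octet-stream'

def write(record, vals):
    record.update(vals)
    # The mimetype is only recomputed when the caller did not provide one.
    if 'datas' in vals and 'mimetype' not in vals:
        record['mimetype'] = guess_mimetype(vals['datas'])
    return record

attach = {'datas': b'<svg/>', 'mimetype': 'image/svg+xml'}
# Migration rewrite without any 'migration' context key: the stored
# mimetype is simply passed back in alongside the datas.
write(dict(attach), {'datas': attach['datas'], 'mimetype': attach['mimetype']})
```

The trade-off is that every migration call site must remember to include the mimetype, whereas the context key gates the behaviour in one place.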

@keshrath
Contributor Author

keshrath commented Aug 9, 2019

Thanks for taking the time to review. For some reason the source repository of this PR is broken and I can't make any more changes to it. So I created a new PR and incorporated the feedback and fixed the merge conflict.

PR: #35614 (review)

@keshrath keshrath closed this Aug 9, 2019
@robodoo robodoo added closed 💔 and removed CI 🤖 Robodoo has seen passing statuses labels Aug 9, 2019