
Make resources pickleable/serializable #678

Open

maxrothman opened this issue Jun 11, 2016 · 20 comments

Labels
closing-soon · feature-request · needs-review · p2 · resources

Comments

@maxrothman

Boto3 resources (e.g. instances, s3 objects, etc.) are not pickleable and have no to_json() method or similar. Therefore, there's currently no way to cache resources retrieved via boto3. This is problematic when retrieving a large number of resources that change infrequently. Even a cache of 30s or so can greatly increase the performance of certain programs and drastically reduce the number of necessary API calls to AWS.

Would it be possible to have some way to serialize resources?

@jamesls
Member

jamesls commented Jun 15, 2016

In general, I'd like to improve things with regard to pickling/serializing objects. However, this is going to be challenging to implement given the dynamic nature of resources/instances.

Marking as a feature request. If anyone has any ideas/suggestions they want to share, feel free to chime in.

@jamesls jamesls added feature-request This issue requests a feature. others-chime-in labels Jun 15, 2016
@maxrothman
Author

This should be possible by giving the classes a __reduce__ method. See this StackOverflow question and the Python docs for more info.
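
For illustration, a minimal sketch of the `__reduce__` protocol (the wrapper class and rebuild helper below are hypothetical, not boto3 API, and the example assumes an AWS region/credentials are configured on the unpickling side):

import pickle

import boto3

def _rebuild_bucket(name):
    # Module-level so pickle can locate it by qualified name; recreates
    # the resource from its identifier when unpickling.
    return boto3.resource('s3').Bucket(name)

class PicklableBucket:
    """Hypothetical wrapper, not boto3 API: pickles as (callable, args)."""

    def __init__(self, name):
        self.name = name

    def __reduce__(self):
        # Unpickling calls _rebuild_bucket(self.name) instead of trying
        # to look up the dynamically generated resource class.
        return (_rebuild_bucket, (self.name,))

restored = pickle.loads(pickle.dumps(PicklableBucket('example')))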

@maxrothman
Author

I might be interested in contributing a patch for this issue if someone could help orient me in the codebase so I can find the callable that generates resources. @jamesls do you have thoughts on potential challenges in making said patch?

@maxrothman
Author

@jamesls ping. Any update on this?

@maxrothman
Author

@jamesls ping

@maxrothman
Author

I'm working on a patch for this. Is there currently any way, given a resource object, to get a reference to the ServiceContext that was used to create it? Alternatively, is there a way to create a resource object from raw response JSON?

@maxrothman
Author

@jamesls any insight on the above question?

@maxrothman
Author

Ping. Is there any way I can get some support on this? I've expressed interest in submitting a patch, but I have some questions (above).

@dazza-codes

dazza-codes commented Feb 28, 2019

Wow, this is old! The factory pattern (as implemented) screws up easy exception handling and pickling. Looks like this one has been thrown into the too-hard basket.

E       _pickle.PicklingError: Can't pickle <class 'boto3.resources.factory.s3.ObjectSummary'>: attribute lookup s3.ObjectSummary on boto3.resources.factory failed

A workaround that might somehow find its way into boto3:

# Substitute a namedtuple if dataclasses are unavailable (py 2.x or earlier 3.x):
# https://docs.python.org/3/library/collections.html#collections.namedtuple
import multiprocessing
from dataclasses import dataclass

import boto3

@dataclass(frozen=True)
class S3Object:
    """Just the bucket_name and key for an s3.ObjectSummary.

    This simple data class works around the pickling problems of an
    s3.ObjectSummary: if obj is an s3.ObjectSummary, then
    S3Object(bucket=obj.bucket_name, key=obj.key) is picklable.
    """
    bucket: str
    key: str

bucket_name = 'example'
s3 = boto3.resource('s3')
s3_bucket = s3.Bucket(bucket_name)

objects = (
    S3Object(bucket=obj.bucket_name, key=obj.key)
    for obj in s3_bucket.objects.filter(Prefix='example_prefix')
)

# YourProcessor is a placeholder for your own worker callable.
with multiprocessing.Pool() as pool:
    processed_objects = pool.map(YourProcessor, objects)

@dazza-codes

Symptoms of the factory pattern gone wrong?

>>> type(objects)
<class 'boto3.resources.collection.s3.Bucket.objectsCollection'>
>>> isinstance(objects, boto3.resources.collection.s3.Bucket.objectsCollection)
E       AttributeError: module 'boto3.resources.collection' has no attribute 's3'

@luiscastillocr

luiscastillocr commented May 8, 2019

 File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memoize/__init__.py", line 339, in decorated_function
    timeout=decorated_function.cache_timeout
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memoize/__init__.py", line 82, in set
    self.cache.set(key=key, value=value, timeout=timeout)
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/django/core/cache/backends/memcached.py", line 86, in set
    if not self._cache.set(key, value, self.get_backend_timeout(timeout)):
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memcache.py", line 727, in set
    return self._set("set", key, val, time, min_compress_len, noreply)
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memcache.py", line 1055, in _set
    return _unsafe_set()
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memcache.py", line 1030, in _unsafe_set
    store_info = self._val_to_store_info(val, min_compress_len)
  File "/home/vagrant/home/vagrant/wikirealty/lib/python3.4/site-packages/memcache.py", line 994, in _val_to_store_info
    pickler.dump(val)
_pickle.PicklingError: Can't pickle <class 'boto3.resources.factory.s3.Bucket'>: attribute lookup s3.Bucket on boto3.resources.factory failed

I am getting this error using django-memoize with memcached; it looks like this is still an issue!

@dazza-codes

A similar issue was resolved in:

It might help to add a test suite with pickle tests like:

import pickle
import boto3.session
import botocore.session

def test_pickle_botocore_session():
    session = botocore.session.get_session()
    assert pickle.loads(pickle.dumps(session))

def test_pickle_boto3_session():
    session = boto3.session.Session()
    assert pickle.loads(pickle.dumps(session))

Unfortunately they fail:


    def test_pickle_botocore_session():
        session = botocore.session.get_session()
>       assert pickle.loads(pickle.dumps(session))
E       AttributeError: Can't pickle local object '_createenviron.<locals>.encode'

tests/test_clients.py:52: AttributeError
___________________________________________________________________________________ test_pickle_boto3_session ____________________________________________________________________________________

    def test_pickle_boto3_session():
        session = boto3.session.Session()
>       assert pickle.loads(pickle.dumps(session))
E       AttributeError: Can't pickle local object 'lazy_call.<locals>._handler'

@SoraDevin

Any update on this?

@cygniv404

Any new updates coming this year?

mikewallace1979 added a commit to EnterpriseDB/barman that referenced this issue Jul 23, 2021
Excludes the boto3 client from the S3CloudInterface state so that
it is not pickled by multiprocessing.

This fixes barman-cloud-backup with Python >= 3.8. Previously this
would fail with the following error:

    ERROR: Backup failed uploading data (Can't pickle <class 'boto3.resources.factory.s3.ServiceResource'>: attribute lookup s3.ServiceResource on boto3.resources.factory failed)

This is because boto3 cannot be pickled using the default pickle
protocol in Python >= 3.8. See the following boto3 issue:

    boto/boto3#678

The workaround of forcing pickle to use an older version of the
pickle protocol is not available because it is multiprocessing
which invokes pickle and it does not allow us to specify the
protocol version.

We therefore exclude the boto3 client from the pickle operation by
implementing custom `__getstate__` and `__setstate__` methods as
documented here:

    https://docs.python.org/3/library/pickle.html#handling-stateful-objects

This works because the worker processes create their own boto3
session anyway due to race conditions around re-using the boto3
session from the parent process.

It is also necessary to defer the assignment of the
`worker_processes` list until after all worker processes have been
spawned as the references to those worker processes also cannot
be pickled with the default pickle protocol in Python >= 3.8. As
with the boto3 client, the `worker_processes` list was not being
used by the worker processes anyway.
amenonsen pushed a commit to EnterpriseDB/barman that referenced this issue Jul 23, 2021
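
For reference, a minimal sketch of the `__getstate__`/`__setstate__` approach the barman commit describes (the class and attribute names here are illustrative, not barman's actual code):

import boto3

class S3CloudUploader:
    """Illustrative stand-in for a class holding a boto3 client that
    must not cross a process boundary via pickle."""

    def __init__(self, bucket):
        self.bucket = bucket
        self.s3 = boto3.client('s3')

    def __getstate__(self):
        # Drop the unpicklable boto3 client from the pickled state.
        state = self.__dict__.copy()
        del state['s3']
        return state

    def __setstate__(self, state):
        # Restore plain attributes, then rebuild the client in the
        # receiving process.
        self.__dict__.update(state)
        self.s3 = boto3.client('s3')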
@iainelder

This would make it much easier to use the multiprocessing library with boto3. For example, I would like to pass a session object and a list of organization accounts to the Pool.starmap function, which calls a function that gets the tags on each account and merges them into the existing account objects.
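
In the meantime, a common workaround (sketched below with hypothetical profile and account IDs) is to pass only picklable identifiers to the workers and build the session inside each one:

import multiprocessing

import boto3

def get_account_tags(profile_name, account_id):
    # Build the session inside the worker, since sessions themselves
    # can't be pickled across the process boundary.
    session = boto3.session.Session(profile_name=profile_name)
    org = session.client('organizations')
    tags = org.list_tags_for_resource(ResourceId=account_id)['Tags']
    return account_id, tags

if __name__ == '__main__':
    # Hypothetical inputs for illustration.
    accounts = ['111111111111', '222222222222']
    args = [('default', account_id) for account_id in accounts]
    with multiprocessing.Pool() as pool:
        tags_by_account = dict(pool.starmap(get_account_tags, args))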

@alexandrosandre

Would have used this to avoid globals in multiprocessing.

@MrBeeMovie

Is this issue still being looked at? Would greatly appreciate this being added if at all possible.

@Xezed

Xezed commented Aug 24, 2022

Just reminding that this feature would be useful.

@aBurmeseDev aBurmeseDev added the p2 This is a standard priority issue label Nov 10, 2022
@dhkim0225

Any update on this?

@RyanFitzSimmonsAK RyanFitzSimmonsAK added the closing-soon This issue will automatically close in 4 days unless further comments are made. label Jan 18, 2023
@RyanFitzSimmonsAK
Contributor

RyanFitzSimmonsAK commented Jan 18, 2023

The boto3 team has recently announced that the Resource interface has entered a feature freeze and won’t be accepting new changes at this time: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html. We’ll be closing existing feature requests, such as this issue, to avoid any confusion on current implementation status. We do appreciate your feedback and will ensure it’s considered in future feature decisions.

We’d like to highlight that all existing code using resources is supported and will continue to work in Boto3. No action is needed from users of the library.
