
Add Azure Block Blob Storage backend #4685

Merged
merged 6 commits into celery:master on Aug 30, 2018

Conversation

4 participants
@c-w
Contributor

commented Apr 27, 2018

Description

This pull request adds a new results backend. The backend is implemented on top of the azure-storage library, which uses Azure Blob Storage to provide a scalable, low-cost PaaS backend.

The backend was load tested via a simple nginx/gunicorn/sanic app hosted on a DS4 virtual machine (4 vCores, 16 GB RAM) and was able to handle 600+ concurrent users at ~170 RPS.

The commit also contains a live end-to-end test to facilitate verification of the backend functionality. The test is activated by setting the AZUREBLOCKBLOB_URL environment variable to azureblockblob://{ConnectionString}, where the value for ConnectionString can be found in the Access Keys pane of a Storage Account resource in the Azure Portal:

Screenshot showing location of ConnectionString in Azure Portal

This pull request was created together with @ankurokok, @dkisselev, @evandropaula, @martinpeck and @michaelperel.

@auvipy
Member

left a comment

Please check the lint errors; fixing those will trigger the test build.

@c-w

Contributor Author

commented Apr 27, 2018

Thanks for the reply, @auvipy. The lint steps should now pass.

I needed to make one change to be able to run the configcheck lint step (TOXENV=configcheck tox) in the docker container, so if you wish I can make a follow-up pull request to submit that fix. (EDIT: done in #4687)

@codecov


commented Apr 27, 2018

Codecov Report

Merging #4685 into master will increase coverage by 0.05%.
The diff coverage is 98.14%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4685      +/-   ##
==========================================
+ Coverage    82.6%   82.65%   +0.05%     
==========================================
  Files         140      141       +1     
  Lines       15868    15922      +54     
  Branches     1983     1987       +4     
==========================================
+ Hits        13107    13160      +53     
  Misses       2567     2567              
- Partials      194      195       +1
Impacted Files Coverage Δ
celery/app/backends.py 59.37% <ø> (ø) ⬆️
celery/app/defaults.py 95.45% <ø> (ø) ⬆️
celery/backends/azureblockblob.py 98.14% <98.14%> (ø)

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8c753d7...fdd661b. Read the comment docs.

@georgepsarakis

Member

commented Apr 28, 2018

@c-w thank you very much for your contribution! Is there any chance that we can run integration/functional test with this backend, without actually connecting to Azure?

@c-w

Contributor Author

commented Apr 28, 2018

Hi @georgepsarakis. There is Azurite for local emulation of Azure Storage which should be able to emulate all the APIs used in this pull request. If you wish, I can look into setting up Azurite for integration tests.

How often do you run the full integration test suite? Are you fundamentally opposed to making live calls against a backend or is it just an issue of setting up test credentials and the associated cost?

@georgepsarakis

Member

commented Apr 28, 2018

@c-w Azurite seems like a great choice for running the tests.

@thedrow @auvipy what are your thoughts on this? I believe this could be merged (probably in a later version than 4.2) once we have a closer look. I am just asking whether you agree, so that I can continue looking into the integration testing option. Thanks.

@auvipy

Member

commented Apr 28, 2018

As it's a fairly big feature, we should skip 4.2.

@c-w

Contributor Author

commented Apr 28, 2018

That's great to hear. I'll get onto the integration tests via Azurite and push them onto this pull request.

We were also planning to make a few follow-up pull requests after this one:

The overall idea is to use the Storage Queue transport channel with the Block Blob storage backend for simple workloads since the services are very cheap and combined only require setting up a single Azure resource. The combination of ServiceBus transport channel and CosmosDB storage backend on the other hand would enable more demanding workloads due to offering stronger latency guarantees, multi-zone failover, etc.

We already have some branches with work-in-progress code for each of these features, but it'll take a bit more time to clean them up and submit pull requests. If you have any thoughts about the above, just let me know.

@c-w

Contributor Author

commented Apr 28, 2018

@georgepsarakis I've implemented the end-to-end test for the backend in 3ea28b6 and 1b9a263.

Travis CI now starts Azurite via Docker:

Screenshot showing Azurite starting via Docker on Travis

The required environment variables to light up the end-to-end test are then also set and the test executes:

Screenshot showing end-to-end test for Azure Block Blob backend running on Travis

Changing between Azurite and live Azure with this mechanism is also easy (just overwrite the AZUREBLOCKBLOB_URL environment variable with an encrypted Travis environment variable).

@auvipy

Member

commented Apr 29, 2018

Great work, @c-w!

@georgepsarakis

Member

commented Apr 29, 2018

Excellent work, @c-w. I will have a closer look at the code and let you know if there are any suggestions.

@georgepsarakis
Member

left a comment

@c-w some minor notes, generally this looks fine.

.travis.yml Outdated
fi
- |
docker run -d -e executable=blob -t -p 10000:10000 arafato/azurite:2.6.5
while ! nc -zv 127.0.0.1 10000; do sleep 10; done

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

Very simple and efficient way of testing when the listening port is available 👍 .

.travis.yml Outdated
- |
docker run -d -e executable=blob -t -p 10000:10000 arafato/azurite:2.6.5
while ! nc -zv 127.0.0.1 10000; do sleep 10; done
export AZUREBLOCKBLOB_URL="azureblockblob://DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

I am guessing the AccountKey parameter is a random value. Does it have to be in a specific format, or could we use something that is more clearly a sample testing value?

@c-w

c-w Apr 29, 2018

Author Contributor

Unfortunately this is a required "well-known" value. See the key authentication section in the official docs for the storage emulator.

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

I see, great, thanks for clarifying.


self._container_name = (
container_name or
conf.get(CONF_CONTAINER_NAME) or

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

It's slightly simpler to just use the default value of the get method here:

conf.get(CONF_CONTAINER_NAME, DEFAULT_CONTAINER_NAME)

@c-w

c-w Apr 29, 2018

Author Contributor

That was my initial implementation, but it failed the tests since the conf values were explicitly set to None in one of the mappings that the conf access checks. Moving the default values to app/defaults.py fixed this. Implemented in 44517e3.
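The pitfall described here can be seen in a minimal sketch (the key name mirrors the backend's setting; the plain dict is a simplified stand-in, not Celery's actual conf machinery):

```python
# A plain dict standing in for the conf mapping: the key exists but is
# explicitly mapped to None, which is the situation described above.
conf = {"azureblockblob_container_name": None}

DEFAULT_CONTAINER_NAME = "celery"  # illustrative default

# dict.get's default only applies when the key is absent, so this
# returns None instead of falling back to the default:
assert conf.get("azureblockblob_container_name",
                DEFAULT_CONTAINER_NAME) is None

# The "or" chain falls through the falsy None to the default:
container = (conf.get("azureblockblob_container_name") or
             DEFAULT_CONTAINER_NAME)
assert container == "celery"
```

This is why registering the defaults centrally in app/defaults.py (so the conf never yields a bare None) is the cleaner fix.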


self._retry_initial_backoff_sec = (
retry_initial_backoff_sec or
conf.get(CONF_RETRY_INITIAL_BACKOFF_SEC) or

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

Same comment as above for all configuration options.

@c-w

c-w Apr 29, 2018

Author Contributor

Implemented in 44517e3.

container_name=self._container_name, fail_on_exist=False)

if created:
LOGGER.info("Created container %s", self._container_name)

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

You can also specify that this action refers to Azure Block Blob storage in the log message.

@c-w

c-w Apr 29, 2018

Author Contributor

Done in f91b69b.

@@ -0,0 +1,3 @@
azure-storage>=0.36.0

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

It is not advisable to use non-exact tags for package versions, as they might lead to inconsistencies among installations and CI results. Could you please use the most recent usable version?

@c-w

c-w Apr 29, 2018

Author Contributor

Could you provide some more guidance on what you require here? The azure-storage SDK has a bit of an odd versioning scheme where all versions below 0.36.0 are considered deprecated and the "new" SDK is versions 0.36.0 and above (see compatibility section on the official repository). As such, this dependency statement basically means "use any of the new SDK versions". Would you like to pin this to a specific version instead?

@georgepsarakis

georgepsarakis Apr 30, 2018

Member

Exactly, my suggestion was to pin the version for the new dependencies to the exact latest:

azure-storage==0.36.0
azure-common==...
azure-storage-common==...

@c-w

c-w Apr 30, 2018

Author Contributor

Alright, done in ce12efd.



@contextlib.contextmanager
def patch_client():

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

I am not sure, but I don't think you need this context manager. patch will work as you require, see this example of mocking an instance method.

@c-w

c-w Apr 29, 2018

Author Contributor

Great catch! Done in 16ce838.
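For illustration, a minimal sketch of what replacing a hand-rolled context manager with `unittest.mock.patch.object` looks like (the client class here is a dummy stand-in for the real azure-storage client, not the actual test code):

```python
from unittest import mock


class FakeBlobService:
    """Dummy stand-in for the azure-storage client (illustrative only)."""

    def get_blob_to_text(self, container, key):
        raise RuntimeError("would hit the network")


service = FakeBlobService()

# patch.object swaps the method out for the duration of the with-block,
# so no custom @contextlib.contextmanager wrapper is needed:
with mock.patch.object(FakeBlobService, "get_blob_to_text") as mocked:
    mocked.return_value = "myvalue"
    result = service.get_blob_to_text("celery", "mykey")

assert result == "myvalue"
mocked.assert_called_once_with("celery", "mykey")
```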

with patch_client() as mock_client:
self.backend.get(b"mykey")

mock_client.get_blob_to_text.assert_has_calls(

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

I think that perhaps assert_called_once_with is more suitable here.

@c-w

c-w Apr 29, 2018

Author Contributor

Done in 16ce838.

mock_client.delete_blob.assert_called_once_with(
"celery", "mykey")

@skip.unless_environ("AZUREBLOCKBLOB_URL")

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

If I understand correctly, this test case probably belongs to the integration test suite.

@c-w

c-w Apr 29, 2018

Author Contributor

I'm having some issues running the integration tests locally via docker-compose so I've made a change and pushed it up to Travis in 9ba725d to see if the move of the test was successful. I'll tag you on the PR when everything is ready to be re-reviewed.

@c-w

c-w Apr 30, 2018

Author Contributor

Alright, confirmed that integration tests ran and the new backend test executed and passed. Sample CI job log: https://travis-ci.org/celery/celery/jobs/372769687

@@ -47,8 +47,9 @@ setenv =
dynamodb: TEST_BACKEND=dynamodb://@localhost:8000
dynamodb: AWS_ACCESS_KEY_ID=test_aws_key_id
dynamodb: AWS_SECRET_ACCESS_KEY=test_aws_secret_key
PASSENV =

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

The conversion to lowercase is an improvement of course, but as a general note, I believe it is preferable to avoid making changes outside the scope of the PR (however small), especially in extensive feature requests, as they might confuse reviewers and anyone inspecting the commit history in the future. Again, just explaining my view, no need to revert this 😄 !

@c-w

c-w Apr 29, 2018

Author Contributor

Totally agree on keeping diffs minimal, I'm a stickler for that in all of my code reviews, so we're definitely aligned there ;-)

However, I do believe that this is a required change. I was investigating for a while why the live test would still be skipped despite the AZUREBLOCKBLOB_URL variable being set, and it ended up being linked to the environment isolation that tox uses. Simply adding AZUREBLOCKBLOB_URL to the passenv section didn't fix it, though, since the passenv property apparently needs to be lowercased (which is surprising, given that configparser is supposed to be case-insensitive).
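The resulting tox.ini change amounts to something like the following (variable list abbreviated here, shown only to illustrate the lowercase key):

```ini
[testenv]
passenv =
    AZUREBLOCKBLOB_URL
```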

@georgepsarakis

georgepsarakis Apr 29, 2018

Member

Wasn't aware of that, thanks for fixing this then 😄 !

@c-w

Contributor Author

commented Apr 30, 2018

Hi @georgepsarakis. I believe I've addressed all the comments. Could you please take another look?

@@ -598,6 +602,7 @@ Can be one of the following:
.. _`CouchDB`: http://www.couchdb.com/
.. _`Couchbase`: https://www.couchbase.com/
.. _`Consul`: https://consul.io/
.. _`AzureBlockBlob`: https://azure.microsoft.com/en-us/services/storage/

@georgepsarakis

georgepsarakis Apr 30, 2018

Member

Correct me if I am wrong, but I think I found a more accurate link here.

@c-w

c-w May 1, 2018

Author Contributor

Done in e6284b0.


from celery.backends.azureblockblob import AzureBlockBlobBackend


@georgepsarakis

georgepsarakis Apr 30, 2018

Member

Although this test certainly has value, it is better if you also add and activate the integration suite by changing the tox configuration. You simply need to specify the TEST_BROKER and TEST_BACKEND variables accordingly.

@c-w

c-w May 1, 2018

Author Contributor

Done in 446130f. Will verify on Travis and ping you once it's ready to be reviewed.

There are quite a few backends that don't run a full integration test though (e.g. cassandra, elasticsearch or mongodb). Is it really worth extending the CI run substantially for this integration test when the simple backend test already gives us confidence that the backend works? If not: we might even have an opportunity here to speed up the CI suite by moving the DynamoDB test (and all other Key-Value Store backends) to the new backend integration test style.

@thedrow

thedrow May 1, 2018

Member

I have a plan to use seaworthy for the integration tests so that we could start our services from within the test suite.
Please don't change how the test suite works yet.

c-w added some commits Apr 29, 2018

Add Azure Block Blob Storage backend
The backend is implemented on top of the azure-storage library [1] which
uses Azure Blob Storage [2] for a scalable low-cost PaaS backend.

The backend was load tested via a simple nginx/gunicorn/sanic app hosted
on a DS4 virtual machine (4 vCores, 16 GB RAM) and was able to handle
600+ concurrent users at ~170 RPS.

The commit also contains a live end-to-end test to facilitate
verification of the backend functionality. The test is activated by
setting the `AZUREBLOCKBLOB_URL` environment variable to
`azureblockblob://{ConnectionString}` where the value for
`ConnectionString` can be found in the `Access Keys` pane of a Storage
Account resources in the Azure Portal.

[1] https://github.com/Azure/azure-storage-python
[2] https://azure.microsoft.com/en-us/services/storage/
@c-w

Contributor Author

commented May 1, 2018

This should now be ready for the next round of reviews, @georgepsarakis.

@georgepsarakis

Member

commented May 1, 2018

@c-w thank you very much for the effort and changes so far. Integration tests seem to be running.
However, builds seem a bit slow. Would you like to try the Docker container tmpfs option?

docker run -d -e executable=blob --tmpfs /opt --tmpfs /tmp --tmpfs /var -t -p 10000:10000 arafato/azurite:2.6.5

DynamoDB is started without persistence so I think there is not much to be done, however the same may apply:

docker run -d --tmpfs /tmp --tmpfs /opt -p 8000:8000 dwmkerr/dynamodb:38

What do you think?

@c-w

Contributor Author

commented May 1, 2018

That's very interesting. I didn't know about tmpfs in Docker; seems like a neat feature. I did a bit of research and used docker diff {container} to find the mutated directories of the dynamodb and azurite containers and mapped those to memory (so that we avoid potentially confusing extra declarations). Let's see if that speeds up the build time from the current ~8 minutes for azureblockblob and ~4 minutes for dynamodb.

Alternatively, as I mentioned in #4685 (comment), we could also consider turning off the full integration tests for azureblockblob and dynamodb and instead only rely on the new backend integration test which only takes seconds to run. The idea here would be to use the core redis and rabbitmq integration tests to verify that the framework wiring is working as expected and then to verify additional backends like azureblockblob, dynamodb, cassandra, etc. only at the backend interface layer (under the assumption that if the backend contract is correctly implemented, the rest of the framework wiring will work as expected). This should massively speed up the build at a small loss of end-to-end verification. Thoughts?

Another thought: would it make sense to add the .tox directory to the Travis cache?

@c-w

Contributor Author

commented May 1, 2018

Looks like we save about 60% execution time on the azureblockblob integration tests by using tmpfs, so that was a great suggestion!

Overall, though, the CI still takes quite a bit longer due to the new integration tests.

Let me know if this is acceptable, @georgepsarakis, or if you'd prefer to try some more optimizations as described in #4685 (comment).

@thedrow

Member

commented May 1, 2018

@c-w The .tox directory is quite large and caching it might not provide us with the benefit we're after. In any case, if we choose to do so we'll need to also install and use tox-pip-extensions to avoid recreating the virtual environment every time requirements change.
I'd rather discuss this after we merge this PR.

At this point I'd rather extend our CI time than lose the guarantee that everything works as expected.
We need to include every result backend we have and broker we support in the integration test suite.
Celery had quite a few bugs that were uncovered very late due to infrequent use of some of the result backends.
In Celery 5, we will split those result backends to different packages. They'll have a separate CI process which will help us reduce the build time.

@c-w

Contributor Author

commented May 1, 2018

@thedrow Excellent, thanks for the clarification. It would definitely be neat to split out the results backends into their own repo (just like how the brokers are currently split out into the kombu repository). So is there anything else you need me to do before you can consider this ready to merge?

@thedrow
Member

left a comment

Overall this is very good work.
There are a few minor issues I'd like to address first.

PASSENV =

azureblockblob: TEST_BROKER=redis://
azureblockblob: TEST_BACKEND=azureblockblob://DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;

@thedrow

thedrow May 1, 2018

Member

What's this account? Is it available to us forever?
If not, the secrets should not appear here.

@c-w

c-w May 1, 2018

Author Contributor

This is a "well known" string to connect to the Azurite storage emulator, not a real secret. See earlier discussion #4685 (comment) or the official docs.

@skip.unless_module("azure")
@skip.unless_environ("AZUREBLOCKBLOB_URL")
class test_AzureBlockBlobBackend:
def test_live(self, manager):

@thedrow

thedrow May 1, 2018

Member

Integration tests usually take a long time to complete.
This test checks two different cases. Let's split it to two tests.

@c-w

c-w May 1, 2018

Author Contributor

Done in 0167476. NB: the test seems to take less than 1 second on Travis currently so the runtime is not too bad (I guess because it's hitting the backend directly instead of also going through a more complex workflow with the broker and the rest of the platform).


app=manager.app,
url=os.environ["AZUREBLOCKBLOB_URL"])

key_values = dict((("akey%d" % i).encode(), "avalue%d" % i)

@thedrow

thedrow May 1, 2018

Member

Please use a dict comprehension.

@c-w

c-w May 1, 2018

Author Contributor

Done in 0167476.
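The rewrite is mechanical; a quick sketch with a small range (data shape mirrors the test snippet above, the range size is illustrative):

```python
# Old form: dict() over a generator of (key, value) tuples.
key_values_old = dict((("akey%d" % i).encode(), "avalue%d" % i)
                      for i in range(3))

# New form: the equivalent dict comprehension.
key_values_new = {("akey%d" % i).encode(): "avalue%d" % i
                  for i in range(3)}

assert key_values_old == key_values_new
assert key_values_new[b"akey0"] == "avalue0"
```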


actual_values = backend.mget(key_values.keys())
for value, actual_value in zip(key_values.values(), actual_values):
assert value == actual_value

@thedrow

thedrow May 1, 2018

Member

Please assert all(value == actual_value for value, actual_value in zip(key_values.values(), actual_values)).

You can add a custom message to show the difference between both.

@c-w

c-w May 1, 2018

Author Contributor

Done in 0167476. The custom message is formatted by py.test for array comparison already.
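As a runnable sketch of the suggested single assertion (data illustrative, not a real backend round-trip):

```python
key_values = {b"akey0": "avalue0", b"akey1": "avalue1"}
actual_values = ["avalue0", "avalue1"]  # as if returned by backend.mget

# One assertion over the zipped pairs, as suggested above; py.test's
# assertion rewriting shows the differing elements on failure.
assert all(value == actual_value
           for value, actual_value in zip(key_values.values(),
                                          actual_values))
```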



@classmethod
def _parse_url(cls, url):
match = URL_RE.match(url)

@thedrow

thedrow May 1, 2018

Member

We usually use urllib.parse as far as I can recall.
Why do we need a regex here?

@c-w

c-w May 1, 2018

Author Contributor

Good catch. Refactored in 4dc8f31.
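Since the Azure connection string embeds `;`, `=`, and even nested `http://` endpoints, it is not a conventional netloc, so `urllib.parse` alone is awkward here. A regex-free sketch of one way to split off the custom scheme (illustrative only; the actual refactor in 4dc8f31 may differ):

```python
SCHEME = "azureblockblob://"


def parse_url(url):
    """Return the raw Azure connection string behind the custom scheme."""
    if not url.startswith(SCHEME):
        raise ValueError("expected a %s URL" % SCHEME)
    return url[len(SCHEME):]


conn = parse_url(
    "azureblockblob://DefaultEndpointsProtocol=http;"
    "AccountName=devstoreaccount1;")
assert conn == "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;"
```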

@thedrow

thedrow approved these changes May 1, 2018

Member

left a comment

LGTM.
Let's merge this after 4.2 hits GA.

@georgepsarakis
Member

left a comment

Nice work @c-w , thank you very much for your contribution!

@c-w c-w referenced this pull request May 20, 2018

Closed

Add CosmosDB storage backend #4720

@georgepsarakis georgepsarakis added this to the v4.3 milestone May 25, 2018

auvipy added a commit to celery/kombu that referenced this pull request Aug 30, 2018

Add transports based on Azure PaaS (#891)
* Add transports based on Azure PaaS

This pull request adds two new transport implementations:

- `azurestoragequeues` is implemented on top of Azure Storage
  Queues [1]. This offers a simple but scalable and low-cost PaaS
  transport for Celery users in Azure. The transport is intended to be
  used in conjunction with the Azure Block Blob Storage backend [2].

- `azureservicebus` is implemented on top of Azure Service Bus [3] and
  offers PaaS support for more demanding Celery workloads in Azure. The
  transport is intended to be used in conjunction with the Azure
  CosmosDB backend [4].

This pull request was created together with @ankurokok, @dkisselev,
@evandropaula, @martinpeck and @michaelperel.

[1] https://azure.microsoft.com/en-us/services/storage/queues/
[2] celery/celery#4685
[3] https://azure.microsoft.com/en-us/services/service-bus/
[4] celery/celery#4720

* Exclude Azure transports from code coverage

There is test coverage for the transports but the tests require Azure
credentials to run (passed via environment variables) so codecov doesn't
exercise them.

* Remove env vars to configure transport

* Remove abbreviations
@auvipy

auvipy approved these changes Aug 30, 2018

@auvipy auvipy merged commit dfc9b2e into celery:master Aug 30, 2018

4 checks passed

codecov/patch 98.14% of diff hit (target 82.6%)
Details
codecov/project 82.65% (+0.05%) compared to 8c753d7
Details
continuous-integration/appveyor/pr AppVeyor build succeeded
Details
continuous-integration/travis-ci/pr The Travis CI build passed
Details
@auvipy

Member

commented Aug 30, 2018

@c-w thanks! and please don't forget to chime in when azure related issues arise!

auvipy added a commit that referenced this pull request Aug 30, 2018

@auvipy

Member

commented Aug 30, 2018

should we revert this before kombu 4.3 release?

@c-w

Contributor Author

commented Aug 30, 2018

Sure thing, @auvipy; feel free to tag me on Azure-related issues.

@c-w c-w deleted the CatalystCode:azureblockblob-backend branch Aug 30, 2018

@auvipy

Member

commented Aug 31, 2018

Clemens, could you investigate the reasons for the build failure after merging this PR?

auvipy added a commit that referenced this pull request Sep 1, 2018

@thedrow

Member

commented Sep 2, 2018

Was this reverted because of test failures?

@auvipy

Member

commented Sep 2, 2018

Yes! I was confused and reverted it.

@thedrow

Member

commented Sep 3, 2018

So we need a new PR with the changes in this PR right?

auvipy added a commit that referenced this pull request Sep 3, 2018

@auvipy

Member

commented Sep 3, 2018

Yes! What I did was revert the reverted PR again. Let's see what the build says!

@c-w

Contributor Author

commented Sep 3, 2018

Looking into it now, @auvipy.

@c-w

Contributor Author

commented Sep 3, 2018

Looks like the build failure is linked to Travis CI setup. When I originally opened this PR, Docker had to be manually installed via apt-get on Travis and since then this requirement was removed (which makes the apt-get code now fail). Removing the custom Docker setup on Travis makes the CI proceed as normal.

Here's the diff for the fix:

---
 .travis.yml | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 2e30d75..7760a6f 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -98,9 +98,3 @@ services:
     - rabbitmq
     - redis
     - docker
-addons:
-  apt:
-    sources:
-      - debian-sid
-    packages:
-      - docker-ce
--
2.7.4

I'm now waiting for a full build to go through and will submit a follow-up PR once that's done.

@auvipy

Member

commented Sep 3, 2018

great thanks for investigating!

@c-w

Contributor Author

commented Sep 3, 2018

Here's an example of a Travis build with the Azure Block Blob backend change and the Travis CI fix mentioned above: https://travis-ci.org/CatalystCode/celery/builds/424006344. For some reason, the tests for the Redis backend on Python 3 are now failing which I find surprising. I'll investigate why this is the case.

@c-w

Contributor Author

commented Sep 3, 2018

Looks like the Redis test failures are transient and linked to timeouts/timing. Restarting the build fixes the test runs. Submitted the new pull request: #5025

auvipy added a commit that referenced this pull request Sep 4, 2018

Un-revert Azure Block Blob backend (#5025)
* Revert "Revert "Add Azure Block Blob Storage backend (#4685)" (#5015)"

This reverts commit 8e6b2bf.

* Remove custom docker install on Travis

Docker now has first class support on Travis so the custom apt-get
install instructions are no longer required and make the build fail.