Add Azure Block Blob Storage backend #4685

Merged
merged 6 commits into celery:master from azureblockblob-backend on Aug 30, 2018

Conversation

@c-w (Contributor) commented Apr 27, 2018

Description

This pull request adds a new results backend. The backend is implemented on top of the azure-storage library and uses Azure Blob Storage as a scalable, low-cost PaaS results store.

The backend was load tested via a simple nginx/gunicorn/sanic app hosted on a DS4 virtual machine (4 vCores, 16 GB RAM) and was able to handle 600+ concurrent users at ~170 RPS.

The commit also contains a live end-to-end test to facilitate verification of the backend functionality. The test is activated by setting the AZUREBLOCKBLOB_URL environment variable to azureblockblob://{ConnectionString}, where the value for ConnectionString can be found in the Access Keys pane of a Storage Account resource in the Azure Portal:

Screenshot showing location of ConnectionString in Azure Portal
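For reference, a minimal sketch of pointing a Celery app at the new backend; the broker URL and connection string below are placeholders, not values from this PR:

from celery import Celery

# Placeholder broker and connection string; substitute your own values.
CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=myaccount;"
    "AccountKey=...;EndpointSuffix=core.windows.net"
)

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="azureblockblob://" + CONNECTION_STRING,
)

@app.task
def add(x, y):
    return x + y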

This pull request was created together with @ankurokok, @dkisselev, @evandropaula, @martinpeck and @michaelperel.

@auvipy (Member) left a comment

plz check the lint errors, fixing those will trigger the test build

@c-w (Contributor, Author) commented Apr 27, 2018

Thanks for the reply, @auvipy. The lint steps should now pass.

I needed to make one change to be able to run the configcheck lint step (TOXENV=configcheck tox) in the docker container, so if you wish I can make a follow-up pull request to submit that fix. (EDIT: done in #4687)

@codecov (bot) commented Apr 27, 2018

Codecov Report

Merging #4685 into master will increase coverage by 0.05%.
The diff coverage is 98.14%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4685      +/-   ##
==========================================
+ Coverage    82.6%   82.65%   +0.05%     
==========================================
  Files         140      141       +1     
  Lines       15868    15922      +54     
  Branches     1983     1987       +4     
==========================================
+ Hits        13107    13160      +53     
  Misses       2567     2567              
- Partials      194      195       +1
Impacted Files Coverage Δ
celery/app/backends.py 59.37% <ø> (ø) ⬆️
celery/app/defaults.py 95.45% <ø> (ø) ⬆️
celery/backends/azureblockblob.py 98.14% <98.14%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8c753d7...fdd661b. Read the comment docs.

@georgepsarakis (Member) commented Apr 28, 2018

@c-w thank you very much for your contribution! Is there any chance that we can run integration/functional tests with this backend without actually connecting to Azure?

@c-w (Contributor, Author) commented Apr 28, 2018

Hi @georgepsarakis. There is Azurite for local emulation of Azure Storage which should be able to emulate all the APIs used in this pull request. If you wish, I can look into setting up Azurite for integration tests.

How often do you run the full integration test suite? Are you fundamentally opposed to making live calls against a backend or is it just an issue of setting up test credentials and the associated cost?

@georgepsarakis (Member) commented Apr 28, 2018

@c-w Azurite seems like a great choice for running the tests.

@thedrow @auvipy what are your thoughts on this? I believe this could be merged (probably in a later version than 4.2) once we have a closer look. I am just asking whether you agree, so that we can continue looking into the integration testing option. Thanks.

@auvipy (Member) commented Apr 28, 2018

As it's a fairly big feature, we should skip 4.2.

@c-w (Contributor, Author) commented Apr 28, 2018

That's great to hear. I'll get started on the integration tests via Azurite and push them to this pull request.

We were also planning to make a few follow-up pull requests after this one: an Azure Storage Queues transport, an Azure Service Bus transport, and an Azure CosmosDB results backend.

The overall idea is to use the Storage Queues transport with the Block Blob storage backend for simple workloads, since those services are very cheap and combined only require setting up a single Azure resource. The combination of the Service Bus transport and the CosmosDB storage backend, on the other hand, would enable more demanding workloads by offering stronger latency guarantees, multi-zone failover, etc.

We already have some branches with work-in-progress code for each of these features, but it'll take a bit more time to clean them up and submit pull requests. If you have any thoughts about the above, just let me know.

@c-w (Contributor, Author) commented Apr 28, 2018

@georgepsarakis I've implemented the end-to-end test for the backend in 3ea28b6 and 1b9a263.

Travis CI now starts Azurite via Docker:

Screenshot showing Azurite starting via Docker on Travis

The required environment variables to light up the end-to-end test are then also set and the test executes:

Screenshot showing end-to-end test for Azure Block Blob backend running on Travis

Changing between Azurite and live Azure with this mechanism is also easy (just overwrite the AZUREBLOCKBLOB_URL environment variable with an encrypted Travis environment variable).

@auvipy (Member) commented Apr 29, 2018

great work! @c-w

@georgepsarakis (Member) commented Apr 29, 2018

Excellent work @c-w. I will have a closer look at the code and let you know if there are any suggestions.

@georgepsarakis (Member) left a comment

@c-w some minor notes, generally this looks fine.

.travis.yml Outdated
fi
- |
docker run -d -e executable=blob -t -p 10000:10000 arafato/azurite:2.6.5
while ! nc -zv 127.0.0.1 10000; do sleep 10; done
@georgepsarakis (Member) Apr 29, 2018

Very simple and efficient way of testing when the listening port is available 👍 .

.travis.yml Outdated
- |
docker run -d -e executable=blob -t -p 10000:10000 arafato/azurite:2.6.5
while ! nc -zv 127.0.0.1 10000; do sleep 10; done
export AZUREBLOCKBLOB_URL="azureblockblob://DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
@georgepsarakis (Member) Apr 29, 2018

I am guessing the AccountKey parameter is a random value. Does it have to be in a specific format, or could we use something that is more clearly a testing sample value?

@c-w (Contributor, Author) Apr 29, 2018

Unfortunately this is a required "well-known" value. See the key authentication section in the official docs for the storage emulator.

@georgepsarakis (Member) Apr 29, 2018

I see, great, thanks for clarifying.


self._container_name = (
container_name or
conf.get(CONF_CONTAINER_NAME) or
@georgepsarakis (Member) Apr 29, 2018

It's slightly simpler to just use the default value of the get method here:

conf.get(CONF_CONTAINER_NAME, DEFAULT_CONTAINER_NAME)

@c-w (Contributor, Author) Apr 29, 2018

That was my initial implementation but it failed the tests since the conf values were explicitly set to None in one of the mappings that the conf access checks. Moving the default values to app/defaults.py fixed this. Implemented in 44517e3.
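To see why the simpler form is not enough here, a tiny self-contained sketch (a plain dict stands in for the Celery conf mapping; the key name and default are illustrative):

# conf.get(key, default) does not help when the key exists but is explicitly set to None.
conf = {"container_name": None}

value = conf.get("container_name", "celery")
assert value is None  # the default is ignored because the key is present

# The fallback chain used in the backend skips falsy values and lands on the default.
value = conf.get("container_name") or "celery"
assert value == "celery"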


self._retry_initial_backoff_sec = (
retry_initial_backoff_sec or
conf.get(CONF_RETRY_INITIAL_BACKOFF_SEC) or
@georgepsarakis (Member) Apr 29, 2018

Same comment as above for all configuration options.

@c-w (Contributor, Author) Apr 29, 2018

Implemented in 44517e3.

container_name=self._container_name, fail_on_exist=False)

if created:
LOGGER.info("Created container %s", self._container_name)
@georgepsarakis (Member) Apr 29, 2018

You can also specify that this action refers to Azure Block Blob storage in the log message.

@c-w (Contributor, Author) Apr 29, 2018

Done in f91b69b.

@@ -0,0 +1,3 @@
azure-storage>=0.36.0
@georgepsarakis (Member) Apr 29, 2018

It is not advisable to use non-exact version specifiers for packages, as they might lead to inconsistencies among installations and CI results. Could you please use the most recent usable version?

@c-w (Contributor, Author) Apr 29, 2018

Could you provide some more guidance on what you require here? The azure-storage SDK has a bit of an odd versioning scheme where all versions below 0.36.0 are considered deprecated and the "new" SDK is versions 0.36.0 and above (see compatibility section on the official repository). As such, this dependency statement basically means "use any of the new SDK versions". Would you like to pin this to a specific version instead?

@georgepsarakis (Member) Apr 30, 2018

Exactly, my suggestion was to pin the version for the new dependencies to the exact latest:

azure-storage==0.36.0
azure-common==...
azure-storage-common==...

@c-w (Contributor, Author) Apr 30, 2018

Alright, done in ce12efd.



@contextlib.contextmanager
def patch_client():
@georgepsarakis (Member) Apr 29, 2018

I am not sure, but I don't think you need this context manager. patch will work as you require, see this example of mocking an instance method.
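For illustration, a small self-contained example of patching an instance method directly with unittest.mock; the class and method here are stand-ins, not the actual backend client:

from unittest.mock import patch

class FakeClient:
    def get_blob_to_text(self, container, key):
        raise RuntimeError("would hit the network")

client = FakeClient()

# Patch the method on the instance for the duration of the block.
with patch.object(client, "get_blob_to_text", return_value="myvalue") as mocked:
    assert client.get_blob_to_text("celery", "mykey") == "myvalue"
    mocked.assert_called_once_with("celery", "mykey")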

@c-w (Contributor, Author) Apr 29, 2018

Great catch! Done in 16ce838.

with patch_client() as mock_client:
self.backend.get(b"mykey")

mock_client.get_blob_to_text.assert_has_calls(
@georgepsarakis (Member) Apr 29, 2018

I think that perhaps assert_called_once_with is more suitable here.

@c-w (Contributor, Author) Apr 29, 2018

Done in 16ce838.

mock_client.delete_blob.assert_called_once_with(
"celery", "mykey")

@skip.unless_environ("AZUREBLOCKBLOB_URL")
@georgepsarakis (Member) Apr 29, 2018

If I understand correctly, this test case probably belongs to the integration test suite.

@c-w (Contributor, Author) Apr 29, 2018

I'm having some issues running the integration tests locally via docker-compose so I've made a change and pushed it up to Travis in 9ba725d to see if the move of the test was successful. I'll tag you on the PR when everything is ready to be re-reviewed.

@c-w (Contributor, Author) Apr 30, 2018

Alright, confirmed that integration tests ran and the new backend test executed and passed. Sample CI job log: https://travis-ci.org/celery/celery/jobs/372769687

@@ -47,8 +47,9 @@ setenv =
dynamodb: TEST_BACKEND=dynamodb://@localhost:8000
dynamodb: AWS_ACCESS_KEY_ID=test_aws_key_id
dynamodb: AWS_SECRET_ACCESS_KEY=test_aws_secret_key
PASSENV =
@georgepsarakis (Member) Apr 29, 2018

The conversion to lowercase is an improvement of course, but as a general note, I believe it is preferable to avoid making changes outside the scope of the PR (however small), especially in extensive feature requests, as they might confuse reviewers and anyone inspecting the commit history in the future. Again, just explaining my view, no need to revert this 😄 !

@c-w (Contributor, Author) Apr 29, 2018

Totally agree on keeping diffs minimal, I'm a stickler for that in all of my code reviews, so we're definitely aligned there ;-)

However, I do believe that this is a required change. I was investigating for a while why the live test would still be skipped despite the AZUREBLOCKBLOB_URL variable being set, and it ended up being linked to the environment isolation that tox uses. Simply adding the AZUREBLOCKBLOB_URL to the passenv section though didn't fix it since the passenv property apparently needs to be lowercased (which is surprising given that configparser is supposed to be case insensitive).

@georgepsarakis (Member) Apr 29, 2018

Wasn't aware of that, thanks for fixing this then 😄 !

@c-w (Contributor, Author) commented Apr 30, 2018

Hi @georgepsarakis. I believe I've addressed all the comments. Could you please take another look?

@@ -598,6 +602,7 @@ Can be one of the following:
.. _`CouchDB`: http://www.couchdb.com/
.. _`Couchbase`: https://www.couchbase.com/
.. _`Consul`: https://consul.io/
.. _`AzureBlockBlob`: https://azure.microsoft.com/en-us/services/storage/
@georgepsarakis (Member) Apr 30, 2018

Correct me if I am wrong, but I think I found a more accurate link here.

@c-w (Contributor, Author) May 1, 2018

Done in e6284b0.


from celery.backends.azureblockblob import AzureBlockBlobBackend


@georgepsarakis (Member) Apr 30, 2018

Although this test certainly has value, it is better if you also add and activate the integration suite by changing the tox configuration. You simply need to specify the TEST_BROKER and TEST_BACKEND variables accordingly.

@c-w (Contributor, Author) May 1, 2018

Done in 446130f. Will verify on Travis and ping you once it's ready to be reviewed.

There are quite a few backends that don't run a full integration test, though (e.g. cassandra, elasticsearch or mongodb). Is it really worth extending the CI run substantially for this integration test when the simple backend test already gives us confidence that the backend works? If not, we might even have an opportunity here to speed up the CI suite by moving the DynamoDB test (and all other key-value store backends) to the new backend integration test style.

@thedrow (Member) May 1, 2018

I have a plan to use seaworthy for the integration tests so that we could start our services from within the test suite.
Please don't change how the test suite works yet.

c-w added 3 commits Apr 30, 2018
The backend is implemented on top of the azure-storage library [1] which
uses Azure Blob Storage [2] for a scalable low-cost PaaS backend.

The backend was load tested via a simple nginx/gunicorn/sanic app hosted
on a DS4 virtual machine (4 vCores, 16 GB RAM) and was able to handle
600+ concurrent users at ~170 RPS.

The commit also contains a live end-to-end test to facilitate
verification of the backend functionality. The test is activated by
setting the `AZUREBLOCKBLOB_URL` environment variable to
`azureblockblob://{ConnectionString}` where the value for
`ConnectionString` can be found in the `Access Keys` pane of a Storage
Account resources in the Azure Portal.

[1] https://github.com/Azure/azure-storage-python
[2] https://azure.microsoft.com/en-us/services/storage/
@c-w (Contributor, Author) commented May 1, 2018

This should now be ready for the next round of reviews, @georgepsarakis.

@georgepsarakis (Member) commented May 1, 2018

@c-w thank you very much for the effort and changes so far. Integration tests seem to be running.
However, builds seem a bit slow. Would you like to try the Docker container tmpfs option?

docker run -d -e executable=blob --tmpfs /opt --tmpfs /tmp --tmpfs /var -t -p 10000:10000 arafato/azurite:2.6.5

DynamoDB is started without persistence, so I think there is not much to be done; however, the same may apply:

docker run -d --tmpfs /tmp --tmpfs /opt -p 8000:8000 dwmkerr/dynamodb:38

What do you think?

@c-w (Contributor, Author) commented May 1, 2018

That's very interesting. I didn't know about tmpfs in Docker; seems like a neat feature. I did a bit of research and used docker diff {container} to find the mutated directories of the dynamodb and azurite containers and mapped those to memory (so that we avoid potentially confusing extra declarations). Let's see if that speeds up the build time from the current ~8 minutes for azureblockblob and ~4 minutes for dynamodb.

Alternatively, as I mentioned in #4685 (comment), we could also consider turning off the full integration tests for azureblockblob and dynamodb and instead only rely on the new backend integration test which only takes seconds to run. The idea here would be to use the core redis and rabbitmq integration tests to verify that the framework wiring is working as expected and then to verify additional backends like azureblockblob, dynamodb, cassandra, etc. only at the backend interface layer (under the assumption that if the backend contract is correctly implemented, the rest of the framework wiring will work as expected). This should massively speed up the build at a small loss of end-to-end verification. Thoughts?

Another thought: would it make sense to add the .tox directory to the Travis cache?

@c-w (Contributor, Author) commented May 1, 2018

Looks like we save about 60% execution time on the azureblockblob integration tests by using tmpfs, so that was a great suggestion!

Overall, the CI still takes quite a bit longer, though, due to the new integration tests.

Let me know if this is acceptable, @georgepsarakis, or if you'd prefer to try some more optimizations as described in #4685 (comment).

@thedrow (Member) commented May 1, 2018

@c-w The .tox directory is quite large and caching it might not provide us with the benefit we're after. In any case, if we choose to do so we'll need to also install and use tox-pip-extensions to avoid recreating the virtual environment every time requirements change.
I'd rather discuss this after we merge this PR.

At this point I'd rather extend our CI time than lose the guarantee that everything works as expected.
We need to include every result backend we have and broker we support in the integration test suite.
Celery had quite a few bugs that were uncovered very late due to infrequent use of some of the result backends.
In Celery 5, we will split those result backends to different packages. They'll have a separate CI process which will help us reduce the build time.

@c-w (Contributor, Author) commented May 1, 2018

@thedrow Excellent, thanks for the clarification. It would definitely be neat to split out the results backends into their own repo (just like how the brokers are currently split out into the kombu repository). So is there anything else you need me to do before you can consider this ready to merge?

@thedrow (Member) left a comment

Overall this is very good work.
There are a few minor issues I'd like to address first.

PASSENV =

azureblockblob: TEST_BROKER=redis://
azureblockblob: TEST_BACKEND=azureblockblob://DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
@thedrow (Member) May 1, 2018

What's this account? Is it available to us forever?
If not, the secrets should not appear here.

@c-w (Contributor, Author) May 1, 2018

This is a "well known" string to connect to the Azurite storage emulator, not a real secret. See earlier discussion #4685 (comment) or the official docs.

@skip.unless_module("azure")
@skip.unless_environ("AZUREBLOCKBLOB_URL")
class test_AzureBlockBlobBackend:
def test_live(self, manager):
@thedrow (Member) May 1, 2018

Integration tests usually take a long time to complete.
This test checks two different cases. Let's split it to two tests.

@c-w (Contributor, Author) May 1, 2018

Done in 0167476. NB: the test seems to take less than 1 second on Travis currently so the runtime is not too bad (I guess because it's hitting the backend directly instead of also going through a more complex workflow with the broker and the rest of the platform).


app=manager.app,
url=os.environ["AZUREBLOCKBLOB_URL"])

key_values = dict((("akey%d" % i).encode(), "avalue%d" % i)
@thedrow (Member) May 1, 2018

Please use a dict comprehension.
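Roughly what that change looks like for the snippet above (the range bound is illustrative):

key_values = {("akey%d" % i).encode(): "avalue%d" % i for i in range(10)}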

@c-w (Contributor, Author) May 1, 2018

Done in 0167476.


actual_values = backend.mget(key_values.keys())
for value, actual_value in zip(key_values.values(), actual_values):
assert value == actual_value
@thedrow (Member) May 1, 2018

Please use assert all(value == actual_value for value, actual_value in zip(key_values.values(), actual_values)).

You can add a custom message to show the difference between both.
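A self-contained sketch of that suggestion with a custom failure message (the stand-in values are illustrative):

key_values = {b"akey1": "avalue1", b"akey2": "avalue2"}
actual_values = ["avalue1", "avalue2"]  # stand-in for backend.mget(key_values.keys())

assert all(
    value == actual_value
    for value, actual_value in zip(key_values.values(), actual_values)
), "mget returned %r, expected %r" % (actual_values, list(key_values.values()))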

@c-w (Contributor, Author) May 1, 2018

Done in 0167476. The custom message is formatted by py.test for array comparison already.



@classmethod
def _parse_url(cls, url):
match = URL_RE.match(url)
@thedrow (Member) May 1, 2018

We usually use urllib.parse as far as I can recall.
Why do we need a regex here?

@c-w (Contributor, Author) May 1, 2018

Good catch. Refactored in 4dc8f31.
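For reference, a minimal regex-free sketch of recovering the connection string from the backend URL; this is an illustration, not necessarily the exact refactoring in 4dc8f31:

PREFIX = "azureblockblob://"

def parse_url(url):
    # The backend URL is just the Azure connection string with a scheme prefix,
    # so stripping the prefix is enough; no regex needed.
    if not url.startswith(PREFIX):
        raise ValueError("Invalid Azure Block Blob URL: %r" % url)
    return url[len(PREFIX):]

connection_string = parse_url(
    "azureblockblob://DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=...;")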

@thedrow approved these changes May 1, 2018
@thedrow (Member) left a comment

LGTM.
Let's merge this after 4.2 hits GA.

@georgepsarakis (Member) left a comment

Nice work @c-w , thank you very much for your contribution!

@georgepsarakis added this to the v4.3 milestone May 25, 2018
auvipy added a commit to celery/kombu that referenced this issue Aug 30, 2018
* Add transports based on Azure PaaS

This pull request adds two new transport implementations:

- `azurestoragequeues` is implemented on top of Azure Storage
  Queues [1]. This offers a simple but scalable and low-cost PaaS
  transport for Celery users in Azure. The transport is intended to be
  used in conjunction with the Azure Block Blob Storage backend [2].

- `azureservicebus` is implemented on top of Azure Service Bus [3] and
  offers PaaS support for more demanding Celery workloads in Azure. The
  transport is intended to be used in conjunction with the Azure
  CosmosDB backend [4].

This pull request was created together with @ankurokok, @dkisselev,
@evandropaula, @martinpeck and @michaelperel.

[1] https://azure.microsoft.com/en-us/services/storage/queues/
[2] celery/celery#4685
[3] https://azure.microsoft.com/en-us/services/service-bus/
[4] celery/celery#4720

* Exclude Azure transports from code coverage

There is test coverage for the transports but the tests require Azure
credentials to run (passed via environment variables) so codecov doesn't
exercise them.

* Remove env vars to configure transport

* Remove abbreviations
@auvipy approved these changes Aug 30, 2018
@auvipy merged commit dfc9b2e into celery:master Aug 30, 2018
4 checks passed
@auvipy (Member) commented Aug 30, 2018

@c-w thanks! And please don't forget to chime in when Azure-related issues arise!

@auvipy (Member) commented Aug 30, 2018

should we revert this before kombu 4.3 release?

@c-w (Contributor, Author) commented Aug 30, 2018

Sure thing, @auvipy; feel free to tag me on Azure-related issues.

@c-w deleted the azureblockblob-backend branch Aug 30, 2018
@auvipy (Member) commented Aug 31, 2018

Clemens, could you investigate the reasons for the build failure after merging this PR?

auvipy added a commit that referenced this issue Sep 1, 2018
@thedrow (Member) commented Sep 2, 2018

Was this reverted because of test failures?

@auvipy (Member) commented Sep 2, 2018

Yes! I was confused and reverted it back!

@thedrow (Member) commented Sep 3, 2018

So we need a new PR with the changes in this PR, right?

@auvipy (Member) commented Sep 3, 2018

Yes!! What I did was revert the reverted PR again! Let's see what the build says!

@c-w (Contributor, Author) commented Sep 3, 2018

Looking into it now, @auvipy.

@c-w (Contributor, Author) commented Sep 3, 2018

Looks like the build failure is linked to the Travis CI setup. When I originally opened this PR, Docker had to be manually installed via apt-get on Travis; since then, Travis has added first-class Docker support, which makes the manual apt-get install fail. Removing the custom Docker setup on Travis makes the CI proceed as normal.

Here's the diff for the fix:

---
 .travis.yml | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 2e30d75..7760a6f 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -98,9 +98,3 @@ services:
     - rabbitmq
     - redis
     - docker
-addons:
-  apt:
-    sources:
-      - debian-sid
-    packages:
-      - docker-ce
--
2.7.4

I'm now waiting for a full build to go through and will submit a follow-up PR once that's done.

@auvipy (Member) commented Sep 3, 2018

Great, thanks for investigating!

@c-w (Contributor, Author) commented Sep 3, 2018

Here's an example of a Travis build with the Azure Block Blob backend change and the Travis CI fix mentioned above: https://travis-ci.org/CatalystCode/celery/builds/424006344. For some reason, the tests for the Redis backend on Python 3 are now failing, which I find surprising. I'll investigate why this is the case.

@c-w (Contributor, Author) commented Sep 3, 2018

Looks like the Redis test failures are transient and linked to timeouts/timing. Restarting the build fixes the test runs. Submitted the new pull request: #5025

auvipy added a commit that referenced this issue Sep 4, 2018
* Revert "Revert "Add Azure Block Blob Storage backend (#4685)" (#5015)"

This reverts commit 8e6b2bf.

* Remove custom docker install on Travis

Docker now has first class support on Travis so the custom apt-get
install instructions are no longer required and make the build fail.