What is the best way to convert the job parameters for a deep learning run into the input parameters for a batch job? #68

Closed
setuc opened this issue Apr 12, 2017 · 2 comments

@setuc commented Apr 12, 2017

We currently run multiple jobs on pre-configured virtual machines. A Python script reads the jobs off an Azure Storage Queue and executes them per the instructions in each message. If I were to extend this to Azure Batch, running the same jobs in a pre-configured Docker job pool, what is the best way to pass such instructions? Is there a way to pass the job parameters directly via the queue?
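
A minimal sketch of what enqueueing such a job-parameter message might look like with the azure-storage-queue SDK; the queue name, connection-string environment variable, and parameter schema here are illustrative assumptions, not details from the setup described above:

```python
import json
import os

from azure.storage.queue import QueueClient

# Hypothetical queue name and connection string; adjust for your account.
queue = QueueClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    queue_name="dl-jobs",
)

# Each message carries one deep learning job's parameters as JSON
# (the field names are assumptions for illustration).
job_params = {
    "job_id": "train-resnet50-001",
    "image": "myregistry/trainer:latest",
    "command": "python train.py --epochs 10 --lr 0.001",
}

queue.send_message(json.dumps(job_params))
```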

@alfpark (Collaborator) commented Apr 12, 2017

@setuc You may want to investigate an Azure Functions queue trigger, which reacts automatically when a queue message arrives. Your trigger script (which can be in Python) would translate the instructions in the queue message into Batch Shipyard configuration files. You can then submit the job from within the Azure Functions environment via Batch Shipyard, either to an already pre-allocated pool (if you care about minimizing latency) or to an autoscale pool (if you care more about cost optimization).
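
As a rough sketch of that translation step, an Azure Functions queue trigger (Python v1 programming model, with the queue binding declared in function.json) might parse the message, emit a Batch Shipyard jobs configuration, and submit it through the Batch Shipyard CLI. The message schema, config directory, and jobs.json contents below are assumptions modeled on Batch Shipyard's documented jobs configuration, not code from this repository:

```python
import json
import subprocess
from pathlib import Path

import azure.functions as func

# Assumed directory that already holds credentials.json, config.json, and
# pool.json for the pre-allocated pool; only jobs.json is generated per run.
CONFIG_DIR = Path("/home/site/wwwroot/shipyard-config")


def main(msg: func.QueueMessage) -> None:
    # The queue message is assumed to carry JSON job parameters such as
    # {"job_id": ..., "image": ..., "command": ...}.
    params = json.loads(msg.get_body().decode("utf-8"))

    # Translate the message into a Batch Shipyard jobs configuration.
    jobs_config = {
        "job_specifications": [
            {
                "id": params["job_id"],
                "tasks": [
                    {
                        "image": params["image"],      # Docker image to run
                        "command": params["command"],  # command inside it
                    }
                ],
            }
        ]
    }
    (CONFIG_DIR / "jobs.json").write_text(json.dumps(jobs_config, indent=2))

    # Submit via the Batch Shipyard CLI installed into the Function
    # environment (e.g. through the site extension mentioned below).
    subprocess.run(
        ["shipyard", "jobs", "add", "--configdir", str(CONFIG_DIR)],
        check=True,
    )
```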

Batch Shipyard now has a site extension available that automates installing it into an Azure Function environment.

@alfpark (Collaborator) commented Apr 20, 2017

Closing, please re-open if you have further questions/issues.

alfpark closed this as completed Apr 20, 2017