
Scripted pretrained model download (#24)
* Script to download models

* Model download bash script, usage explained
wesleyw72 authored and mingyuliutw committed Mar 2, 2018
1 parent 7508715 commit f895e62
Showing 3 changed files with 43 additions and 0 deletions.
5 changes: 5 additions & 0 deletions USAGE.md
@@ -39,9 +39,14 @@ These models are extracted from Torch7 models and currently used in the project.

**Original Torch7 models**

Manually download the model files.
- Download pretrained networks via the following [link](https://drive.google.com/open?id=1ENgQm9TgabE1R99zhNf5q6meBvX6WFuq).
- Unzip and store the model files under `models`.

Automatically download and unzip the pretrained networks.
- Requires the `requests` package (`pip install requests`)
- Run `bash download_models.sh`

`converter.py` shows how to convert Torch7 models to PyTorch models.

### Example 1: Transfer the style of a style photo to a content photo.
35 changes: 35 additions & 0 deletions download_models.py
@@ -0,0 +1,35 @@
# Download code taken from https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive/39225039#39225039
import requests


def download_file_from_google_drive(id, destination):
    """Download a publicly shared Google Drive file to `destination`."""
    URL = "https://docs.google.com/uc?export=download"

    session = requests.Session()

    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)

    if token:
        # Large files trigger Google Drive's "can't scan for viruses" page;
        # resend the request with the confirmation token to get the actual file.
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)

    save_response_content(response, destination)


def get_confirm_token(response):
    # Google Drive stores the confirmation token in a download_warning cookie.
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value

    return None


def save_response_content(response, destination):
    CHUNK_SIZE = 32768

    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)


file_id = '1ENgQm9TgabE1R99zhNf5q6meBvX6WFuq'
destination = './models.zip'
download_file_from_google_drive(file_id, destination)
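
The script above only downloads `models.zip`; extraction is handled by `download_models.sh` below. Purely as an illustration (not part of this commit), the same extraction step could also be done from Python with the standard-library `zipfile` module:

```python
import zipfile

# Illustrative alternative to the `unzip models.zip` step in download_models.sh
# (not part of this commit); assumes models.zip is already in the working directory.
with zipfile.ZipFile('./models.zip') as archive:
    archive.extractall('.')
```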
3 changes: 3 additions & 0 deletions download_models.sh
@@ -0,0 +1,3 @@
#!/bin/bash
python download_models.py
unzip models.zip
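
After running the script, a quick sanity check (hypothetical, not part of this commit) is to list what ended up under `models`, assuming the archive unpacks into that directory as USAGE.md describes:

```python
import os

# Hypothetical post-download check (not part of this commit): list the extracted
# model files, assuming the archive unpacked into ./models as USAGE.md describes.
for name in sorted(os.listdir('models')):
    print(name)
```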
