Conversation
@mxnet-label-bot Add [Doc]
I'm a little concerned about that `!` missing in the comment... it might break the page.
If the nin example is now broken because of the missing model, can you put a big bold notice at the top that says this example is broken, and link to an issue (that you create)? If this were a tutorial, we'd have to remove it or exclude it from tests, but examples are not tested. Either way, we should have an issue and a notification present.
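For context on the `!` concern above (illustration added here, not part of the original comment): the leading `!` is what makes the license header an HTML comment, so dropping it leaks the text into the rendered page. Both lines below are taken from the hunk under review:

```html
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!-- ^ valid HTML comment: hidden when the page is rendered -->

<--- Licensed to the Apache Software Foundation (ASF) under one -->
<!-- ^ missing the "!": not parsed as a comment, so it shows up as visible text -->
```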
tools/coreml/README.md
Outdated
@@ -1,4 +1,4 @@
-<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<--- Licensed to the Apache Software Foundation (ASF) under one -->
Accidental edit?
@@ -17,7 +17,7 @@

 # End to End Neural Art

-Please refer to this [blog](http://dmlc.ml/mxnet/2016/06/20/end-to-end-neural-style.html) for details of how it is implemented.
+Please refer to this [blog](#) for details of how it is implemented.
Maybe use this one? https://thomasdelteil.github.io/NeuralStyleTransfer_MXNet/
@@ -50,7 +50,7 @@ This approach works with variable length sequences. For more complicated models

 ## Variable-length Sequence Training for Sherlock Holmes

-We use the [Sherlock Holmes language model example](https://github.com/dmlc/mxnet/tree/master/example/rnn) for this example. If you are not familiar with this example, see [this tutorial (in Julia)](http://dmlc.ml/mxnet/2015/11/15/char-lstm-in-julia.html) first.
+We use the [Sherlock Holmes language model example](https://github.com/dmlc/mxnet/tree/master/example/rnn) for this example. If you are not familiar with this example, see [this tutorial (in Julia)](https://mxnet.incubator.apache.org/versions/master/api/julia/site/tutorial/char-lstm/) first.
Please only use relative links, not versioned ones.
@@ -151,8 +151,8 @@ def test_pred_vgg16(self):

     def test_pred_nin(self):
         self._test_model(model_name='nin', epoch_num=0,
-                         files=["http://data.dmlc.ml/models/imagenet/nin/nin-symbol.json",
-                                "http://data.dmlc.ml/models/imagenet/nin/nin-0000.params"])
+                         files=["",
???
@@ -100,7 +100,7 @@ Any MXNet model that uses the above operators can be converted easily. For insta
 mxnet_coreml_converter.py --model-prefix='Inception-BN' --epoch=126 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels synset.txt --output-file="InceptionBN.mlmodel"
 ```

-2. [NiN](http://data.dmlc.ml/models/imagenet/nin/)
+2. [NiN](#)
?
@@ -118,7 +118,7 @@ for i in range(0, 10000):
 end = time.time()
 print(time.process_time() - start)
 ```

-We run timing with a warmup once more, and on the same machine, run in **18.99s**. A 1.8x speed improvement! Speed improvements when using libraries like TensorRT can come from a variety of optimizations, but in this case our speedups are coming from a technique known as [operator fusion](http://dmlc.ml/2016/11/21/fusion-and-runtime-compilation-for-nnvm-and-tinyflow.html).
+We run timing with a warmup once more, and on the same machine, run in **18.99s**. A 1.8x speed improvement! Speed improvements when using libraries like TensorRT can come from a variety of optimizations, but in this case our speedups are coming from a technique known as [operator fusion](#).
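An editorial aside on the snippet in this hunk: it starts the clock with `time.time()` but reports `time.process_time() - start`, mixing wall-clock and CPU time. A consistent wall-clock version of the same warmup-then-time pattern might look like the sketch below (`run_inference` is a hypothetical stand-in for the model's forward pass, not the example's actual code):

```python
import time

def run_inference():
    # Hypothetical placeholder for the timed forward pass
    # (the real example calls the MXNet/TensorRT-backed model here).
    pass

# Warm-up pass so one-time initialization cost is excluded from the measurement.
run_inference()

start = time.time()
for _ in range(10000):
    run_inference()
end = time.time()

# Use the same clock for both endpoints of the interval.
print("elapsed: %.2fs" % (end - start))
```

Wall-clock (`time.time()`) is the right choice here because the 18.99s figure in the text is an end-to-end latency, not CPU time.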
?
@marcoabreu Sorry about rushing that through without addressing your questions. We needed to be rid of the malware links ASAP.
@marcoabreu I removed those links because I couldn't find any replacement for the content that was hosted on dmlc.ml. It seemed better to temporarily replace the links with a `#`, while waiting for that content to be recovered and rehosted, than to leave malware links active on the site. As things stand, those links redirect back to the page they appear on.
Issues related to missing content:
Replaced v1.5.x Julia page with the content on master. Also, replaced dmlc.ml links with available content.