This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

how to use my own loss to compute grad? #3917

Closed
superYangwenwen opened this issue Nov 21, 2016 · 6 comments

Comments

@superYangwenwen

Now I want to use my own loss class "MyLoss", but where do I plug it in? Is it used like below?

model.fit(X=X_train, eval_metric=[MyLoss],
          batch_end_callback=mx.callback.Speedometer(batch_size, 50))

@piiswrong
Contributor

metric is only for reporting. You need to write a loss layer.
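To illustrate the distinction: an eval metric is a plain scoring function over predictions and labels; it produces a number for logging and no gradient. A minimal NumPy sketch of a log-loss metric (illustrative names, not the mx.metric API):

```python
import numpy as np

def log_loss_metric(labels, probs, eps=1e-12):
    """Mean negative log-likelihood of the true class. Reporting only; no gradient."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

labels = np.array([0, 1])
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
print(round(log_loss_metric(labels, probs), 4))  # 0.1643
```

Training, by contrast, needs the loss's gradient with respect to the network output, which is what a loss layer supplies.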

@VoVAllen

mx.sym.MakeLoss
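The contract behind MakeLoss: you write the loss as an expression of the predictions, and training backpropagates d(loss)/d(pred). A pure-NumPy sketch of that contract for a custom L1-style loss, with the analytic gradient checked numerically (MXNet itself is not imported here):

```python
import numpy as np

def l1_loss(pred, label):
    # The expression you would wrap in MakeLoss.
    return np.mean(np.abs(pred - label))

def l1_grad(pred, label):
    # Analytic gradient of mean |pred - label| w.r.t. pred,
    # i.e. what the framework would backpropagate.
    return np.sign(pred - label) / pred.size

pred = np.array([0.2, 1.5, -0.3])
label = np.array([0.0, 1.0, 0.0])

# Central-difference check of the analytic gradient.
eps = 1e-6
num = np.zeros_like(pred)
for i in range(pred.size):
    p = pred.copy(); p[i] += eps
    m = pred.copy(); m[i] -= eps
    num[i] = (l1_loss(p, label) - l1_loss(m, label)) / (2 * eps)

print(np.allclose(num, l1_grad(pred, label), atol=1e-5))  # True
```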

@Andrjusha

Hi guys! I'm also wondering how to write a loss layer. I've seen example/numpy-ops/custom_softmax.py, but I don't understand where exactly the loss function is applied. It looks like it just applies softmax, but we still need a loss function after that (e.g. log-loss); where is it defined? I'd like to change it.
Also, I'd like my loss layer to work not just on CPU but on multiple GPUs. Could you please tell me how to do that?
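On the question of where the log-loss "goes": softmax examples commonly fuse softmax with cross-entropy, because the combined gradient with respect to the logits collapses to (probs - one_hot_labels), so no separate loss layer appears in the code. A NumPy sketch verifying that identity numerically (assuming this is the fused pattern custom_softmax.py uses):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(logits, labels):
    # Softmax followed by log-loss (cross-entropy), fused into one function.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([0])

# Numerical gradient of the fused loss w.r.t. the logits...
eps = 1e-6
num = np.zeros_like(logits)
for i in range(logits.shape[1]):
    p = logits.copy(); p[0, i] += eps
    m = logits.copy(); m[0, i] -= eps
    num[0, i] = (loss(p, labels) - loss(m, labels)) / (2 * eps)

# ...matches probs - one_hot, which is all the backward pass needs to compute.
one_hot = np.zeros_like(logits); one_hot[0, 0] = 1.0
analytic = softmax(logits) - one_hot
print(np.allclose(num, analytic, atol=1e-5))  # True
```

Changing the loss means deriving the new combined gradient and putting it in the backward pass instead.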

@whaozl

whaozl commented Feb 7, 2017

Yes, eval_metric is only used to report the model's training accuracy; it doesn't influence the training itself. I also don't know how to define a loss layer, or what loss function the Softmax layer uses.

@yxzf

yxzf commented Mar 2, 2017

@piiswrong How do I write a loss layer? I can't find an example of one.
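For the shape such a layer takes: a custom-op style loss needs a forward that computes the loss value and a backward that returns the gradient with respect to the predictions. A minimal pure-NumPy mock of that forward/backward split for squared error (the method names mirror a custom-op interface, but nothing here imports MXNet):

```python
import numpy as np

class SquaredErrorLoss:
    """Loss-layer sketch with the forward/backward split a custom op needs."""

    def forward(self, pred, label):
        # Cache inputs for the backward pass and return the scalar loss.
        self.pred, self.label = pred, label
        return 0.5 * np.mean((pred - label) ** 2)

    def backward(self):
        # d(loss)/d(pred): what gets propagated back to earlier layers.
        return (self.pred - self.label) / self.pred.size

layer = SquaredErrorLoss()
pred = np.array([1.0, 2.0])
label = np.array([0.0, 2.0])
print(layer.forward(pred, label))  # 0.25
print(layer.backward())            # [0.5 0. ]
```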

@szha
Member

szha commented Sep 29, 2017

This issue is closed due to lack of activity in the last 90 days. Feel free to ping me to reopen if this is still an active issue. Thanks!

@tqchen tqchen closed this as completed Oct 19, 2017