This repository has been archived by the owner on Sep 20, 2022. It is now read-only.

Commit 11bd1f8: replay starts
DrRacket committed Dec 7, 2017
1 parent 9bd9c03 commit 6774041
Showing 2 changed files with 27 additions and 9 deletions.
4 changes: 2 additions & 2 deletions core/src/main/java/hivemall/common/ConversionState.java
@@ -81,7 +81,7 @@ public boolean isLossIncreased() {
return currLosses > prevLosses;
}

-    public boolean isConverged(final long obserbedTrainingExamples) {
+    public boolean isConverged(final long observedTrainingExamples) {
if (conversionCheck == false) {
return false;
}
@@ -110,7 +110,7 @@ public boolean isConverged(final long obserbedTrainingExamples) {
if (logger.isDebugEnabled()) {
logger.debug("Iteration #" + curIter + " [curLosses=" + currLosses
+ ", prevLosses=" + prevLosses + ", changeRate=" + changeRate
-                        + ", #trainingExamples=" + obserbedTrainingExamples + ']');
+                        + ", #trainingExamples=" + observedTrainingExamples + ']');
}
this.readyToFinishIterations = false;
}
32 changes: 25 additions & 7 deletions docs/gitbook/misc/prediction.md
@@ -109,8 +109,8 @@ Below we list possible options for `train_regression` and `train_classifier`, an
- For `train_regression`
- SquaredLoss (synonym: squared)
- QuantileLoss (synonym: quantile)
- - EpsilonInsensitiveLoss (synonym: epsilon_intensitive)
- - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_intensitive)
+ - EpsilonInsensitiveLoss (synonym: epsilon_insensitive)
+ - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_insensitive)
- HuberLoss (synonym: huber)
- For `train_classifier`
- HingeLoss (synonym: hinge)
@@ -120,8 +120,8 @@ Below we list possible options for `train_regression` and `train_classifier`, an
- The following losses are mainly designed for regression but can sometimes be useful in classification as well:
- SquaredLoss (synonym: squared)
- QuantileLoss (synonym: quantile)
- - EpsilonInsensitiveLoss (synonym: epsilon_intensitive)
- - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_intensitive)
+ - EpsilonInsensitiveLoss (synonym: epsilon_insensitive)
+ - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_insensitive)
- HuberLoss (synonym: huber)

- Regularization function: `-reg`, `-regularization`
@@ -130,16 +130,34 @@ Below we list possible options for `train_regression` and `train_classifier`, an
- ElasticNet
- RDA

- Additionally, there are several variants of the SGD technique, and it is also configureable as:
+ Additionally, there are several variants of the SGD technique, and it is also configurable as:

- - Optimizer `-opt`, `-optimizer`
+ - Optimizer: `-opt`, `-optimizer`
- SGD
- AdaGrad
- AdaDelta
- Adam

> #### Note
>
- > Option values are case insensitive and you can use `sgd` or `rda`, or `huberloss`.
+ > Option values are case insensitive and you can use `sgd` or `rda`, or `huberloss` in lower-case letters.
Furthermore, the optimizer accepts auxiliary options such as:

- Number of iterations: `-iter`, `-iterations` [default: 10]
    - Repeats the optimizer's learning procedure multiple times to find a better result.
- Convergence rate: `-cv_rate`, `-convergence_rate` [default: 0.005]
    - Defines a stopping criterion for the iterative training.
    - If the criterion is too small or too large, you may encounter over-fitting or under-fitting, depending on the value of the `-iter` option.
- Mini-batch size: `-mini_batch`, `-mini_batch_size` [default: 1]
    - Instead of learning from samples one by one, this option enables the optimizer to use multiple samples at once to minimize the error function.
    - An appropriate mini-batch size leads to efficient training and an effective prediction model.

For details of the available options, the following queries list all of them:

```sql
select train_regression(array(), 0, '-help');
select train_classifier(array(), 0, '-help');
```

In practice, you can try different combinations of the options in order to achieve higher prediction accuracy.
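As an illustration, a training call combining several of the options above might look like the following sketch. The table name `training` and its `features`/`label` columns are hypothetical, and the exact loss-function flag name is an assumption; check the `-help` output above for the flags your version accepts:

```sql
-- Hypothetical table `training` with columns:
--   features array<string>, label float
-- Train with squared loss, AdaGrad, L1 regularization,
-- 20 iterations, and mini-batches of 10 samples,
-- then average the per-mapper weights for each feature.
SELECT
  feature,
  avg(weight) AS weight
FROM (
  SELECT
    train_regression(
      features, label,
      '-loss_function squaredloss -opt AdaGrad -reg l1 -iter 20 -mini_batch 10'
    ) AS (feature, weight)
  FROM
    training
) t
GROUP BY
  feature;
```

The outer `GROUP BY feature` with `avg(weight)` is the usual pattern for merging the partial models produced by parallel mappers into a single prediction model.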
