Merge pull request #2400 from birm/doxy-warn-err
Fix Doxygen warnings
birm committed May 16, 2020
2 parents 40bef18 + a52b52f commit c06eb3a
Showing 1,209 changed files with 1,944 additions and 1,899 deletions.
1 change: 1 addition & 0 deletions Doxyfile
@@ -75,6 +75,7 @@ FILE_VERSION_FILTER =
#---------------------------------------------------------------------------
QUIET = NO
WARNINGS = YES
+WARN_AS_ERROR = YES
WARN_IF_UNDOCUMENTED = YES
WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = YES
14 changes: 7 additions & 7 deletions doc/guide/hpt.hpp
@@ -1,7 +1,7 @@
namespace mlpack {
namespace hpt {

-/*! @page hpt Hyper-Parameter Tuning
+/*! @page hpt_guide Hyper-Parameter Tuning
@section hptintro Introduction
@@ -32,7 +32,7 @@ The interface of the hyper-parameter tuning module is quite similar to the
interface of the @ref cv "cross-validation module". To construct a \c
HyperParameterTuner object you need to specify as template parameters what
machine learning algorithm, cross-validation strategy, performance measure, and
-optimization strategy (\ref optimization::GridSearch "GridSearch" will be used by
+optimization strategy (\c ens::GridSearch will be used by
default) you are going to use. Then, you must pass the same arguments as for
the cross-validation classes: the data and labels (or responses) to use are
given to the constructor, and the possible hyperparameter values are given to
@@ -68,7 +68,7 @@ computation time.
std::tie(bestLambda) = hpt.Optimize(lambdas);
@endcode
-In this example we have used \ref optimization::GridSearch "GridSearch" (the
+In this example we have used \c ens::GridSearch (the
default optimizer) to find a good value for the \c lambda hyper-parameter. For
that we have specified what values should be tried.
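
Pulling the folded pieces together, a minimal sketch of this workflow (the
candidate values are illustrative, and \c dataset / \c responses are assumed
to already exist):
@code
// Tune lambda for LinearRegression using simple validation-set CV and MSE;
// GridSearch (the default optimizer) tries every candidate value.
HyperParameterTuner<LinearRegression, MSE, SimpleCV> hpt(0.2, dataset,
    responses);
arma::vec lambdas{0.0, 0.001, 0.01, 0.1, 1.0};
double bestLambda;
std::tie(bestLambda) = hpt.Optimize(lambdas);
@endcode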
@@ -121,7 +121,7 @@ real-valued hyperparameters, but wish to further tune those values.
In this case, we can use a gradient-based optimizer for hyperparameter search.
In the following example, we try to optimize the \c lambda1 and \c lambda2
hyper-parameters for \ref regression::LARS "LARS" with the
-\ref optimization::GradientDescent "GradientDescent" optimizer.
+\c ens::GradientDescent optimizer.
@code
HyperParameterTuner<LARS, MSE, SimpleCV, GradientDescent> hpt3(validationSize,
@@ -147,7 +147,7 @@ hyper-parameters for \ref regression::LARS "LARS" with the
The \c HyperParameterTuner class is very similar to the
\ref cv::KFoldCV "KFoldCV" and \ref cv::SimpleCV "SimpleCV" classes (see the
@ref "cross-validation tutorial" for more information on those two classes), but
\ref cv "cross-validation tutorial" for more information on those two classes), but
there are a few important differences.
First, the \c HyperParameterTuner accepts five different hyperparameters; only
@@ -190,7 +190,7 @@ HyperParameterTuner<LinearRegression, MSE, SimpleCV> hpt(0.2, dataset,
@endcode
Next, we must set up the hyperparameters to be optimized. If we are doing a
-grid search with the \ref optimization::GridSearch "GridSearch" optimizer (the
+grid search with the \c ens::GridSearch optimizer (the
default), then we only need to pass a `std::vector` (for non-numeric
hyperparameters) or an `arma::vec` (for numeric hyperparameters) containing all
of the possible choices that we wish to search over.
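
As a short sketch of the two container types (the boolean hyper-parameter
here is hypothetical):
@code
// Numeric hyper-parameter: candidate values go in an arma::vec.
arma::vec lambdas{0.0, 0.001, 0.01, 0.1, 1.0};
// Non-numeric hyper-parameter (hypothetical example): choices in a std::vector.
std::vector<bool> fitIntercept{true, false};
@endcode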
@@ -222,7 +222,7 @@ Alternately, the \c Fixed() method (detailed in the @ref hptfixed
"Fixed arguments" section) can be used to fix the values of some parameters.
For continuous optimizers like
-\ref optimization::GradientDescent "GradientDescent", a range does not need to
+\c ens::GradientDescent, a range does not need to
be specified but instead only a single value. See the
\ref hptgradient "Gradient-Based Optimization" section for more details.
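
Continuing the \c hpt3 example from above, a sketch (the starting values are
illustrative):
@code
// With ens::GradientDescent, pass one starting value per hyper-parameter
// instead of a set of candidates to search over.
double bestLambda1, bestLambda2;
std::tie(bestLambda1, bestLambda2) = hpt3.Optimize(0.001, 0.002);
@endcode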
2 changes: 1 addition & 1 deletion doc/policies/elemtype.hpp
@@ -1,6 +1,6 @@
/*! @page elem The ElemType policy in mlpack
-@section Overview
+@section elem_overview Overview
\b mlpack algorithms should be as generic as possible. Often this means
allowing arbitrary metrics or kernels to be used, but this also means allowing
Expand Down
5 changes: 2 additions & 3 deletions doc/policies/kernels.hpp
@@ -19,9 +19,8 @@ A kernel (or `Mercer kernel') \f$\mathcal{K}(\cdot, \cdot)\f$ takes two objects
as input and returns some sort of similarity value. The specific details and
properties of kernels are outside the scope of this documentation; for a better
introduction to kernels and kernel methods, there are numerous better resources
-available, including \ref
-http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html "Eric Kim's
-tutorial".
+available, including
+<a href="http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html">Eric Kim's tutorial</a>
mlpack implements a number of kernel methods and, accordingly, each of these
methods allows arbitrary kernels to be used via the \c KernelType template
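
As a sketch of what the \c KernelType policy asks for, a minimal linear
kernel needs only an \c Evaluate() method:
@code
class ExampleLinearKernel
{
 public:
  // Return the similarity of two vectors; here, their dot product.
  template<typename VecTypeA, typename VecTypeB>
  double Evaluate(const VecTypeA& a, const VecTypeB& b) const
  {
    return arma::dot(a, b);
  }
};
@endcode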
4 changes: 3 additions & 1 deletion doc/policies/trees.hpp
@@ -47,11 +47,13 @@ are nearby should lie in similar nodes.
We can rigorously define what a tree is, using the definition of **space tree**
introduced in the following paper:
+@code
@quote
R.R. Curtin, W.B. March, P. Ram, D.V. Anderson, A.G. Gray, and C.L. Isbell Jr.,
"Tree-independent dual-tree algorithms," in Proceedings of the 30th
International Conference on Machine Learning (ICML '13), pp. 1435--1443, 2013.
@endquote
+@endcode
The definition is:
@@ -398,7 +400,7 @@ This section is divided into five parts:
@subsection treetype_rigorous_template Template parameters
-\ref treetype_template_param "An earlier section" discussed the three different
+\ref treetype_template_params "An earlier section" discussed the three different
template parameters that are required by the \c TreeType policy.
The \ref metrics "MetricType policy" provides one method that will be useful for
25 changes: 12 additions & 13 deletions doc/tutorials/ann/ann.txt
@@ -79,9 +79,8 @@ have a number of methods in common:

@note
To be able to optimize the network, both classes implement the OptimizerFunction
-API; see \ref optimizertutorial "Optimizer API" for more information. In short,
-the \c FNN and \c RNN class implement two methods: \c Evaluate() and \c
-Gradient(). This enables the optimization given some learner and some
+API. In short, the \c FNN and \c RNN class implement two methods: \c Evaluate()
+and \c Gradient(). This enables the optimization given some learner and some
performance measure.
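
A sketch of that contract (the signatures are simplified here as an
assumption; the real overloads take extra batch arguments, so consult the
\c FFN and \c RNN class documentation for the exact forms):
@code
// Roughly what an ensmallen-style optimizer does with the two methods:
double objective = model.Evaluate(model.Parameters());
arma::mat gradient;
model.Gradient(model.Parameters(), gradient);
// The optimizer then steps model.Parameters() using the gradient.
@endcode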

Similar to the existing layer infrastructure, the \c FFN and \c RNN classes are
@@ -230,13 +229,13 @@ int main()
arma::mat predictionTemp;
model.Predict(testData, predictionTemp);

-/*
-Since the predictionsTemp is of dimensions (3 x number_of_data_points)
-with continuous values, we first need to reduce it to a dimension of
+/*
+Since the predictionsTemp is of dimensions (3 x number_of_data_points)
+with continuous values, we first need to reduce it to a dimension of
(1 x number_of_data_points) with scalar values, to be able to compare with
testLabels.

The first step towards doing this is to create a matrix of zeros with the
The first step towards doing this is to create a matrix of zeros with the
desired dimensions (1 x number_of_data_points).

In predictionsTemp, the 3 dimensions for each data point correspond to the
@@ -252,8 +251,8 @@ int main()
arma::max(predictionTemp.col(i)) == predictionTemp.col(i), 1)) + 1;
}

-/*
-Compute the error between predictions and testLabels,
+/*
+Compute the error between predictions and testLabels,
now that we have the desired predictions.
*/
size_t correct = arma::accu(prediction == testLabels);
@@ -266,7 +265,7 @@
@endcode
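
As an aside (not part of this commit), the reduction loop above can be
written more compactly with Armadillo's \c index_max():
@code
// Pick the row index of the largest activation in each column, shifted to
// 1-based labels to match testLabels.
for (size_t i = 0; i < predictionTemp.n_cols; ++i)
  prediction(i) = predictionTemp.col(i).index_max() + 1;
@endcode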

Now, the matrix prediction holds the classification of each point in the
-dataset. Subsequently, we find the classification error by comparing it
+dataset. Subsequently, we find the classification error by comparing it
with testLabels.

In the next example, we create simple noisy sine sequences, which are trained
@@ -420,7 +419,7 @@ not work; this example exists for its API, not its implementation.
Note that layer sometimes have different properties. These properties are
known at compile-time through the mlpack::ann::LayerTraits class, and some
properties may imply the existence (or non-existence) of certain functions.
-Refer to the LayerTraits @ref LayerTraits for more documentation on that.
+Refer to the LayerTraits @ref layer_traits.hpp for more documentation on that.

The two template parameters below must be template parameters to the layer, in
the order given below. More template parameters are fine, but they must come
@@ -662,10 +661,10 @@ $ cat model.xml
</boost_serialization>
@endcode

-As you can see, the \c <parameter> section of \c model.xml contains the trained
+As you can see, the \c \<parameter\> section of \c model.xml contains the trained
network weights. We can see that this section also contains the network input
size, which is 66 rows and 1 column. Note that in this example, we used three
-different layers, as can be seen by looking at the \c <network> section. Each
+different layers, as can be seen by looking at the \c \<network\> section. Each
node has a unique id that is used to reconstruct the model when loading.

The models can also be saved as \c .bin or \c .txt; the \c .xml format provides
2 changes: 1 addition & 1 deletion doc/tutorials/approx_kfn/approx_kfn.txt
@@ -87,7 +87,7 @@ In order to solve this problem, \b mlpack provides a number of interfaces.
approximate furthest neighbors
- a simple \ref cpp_qdafn_akfntut "C++ class for QDAFN"
- a simple \ref cpp_ds_akfntut "C++ class for DrusillaSelect"
-- a simple \ref cpp_kfn_akfntut "C++ class for tree-based and brute-force"
+- a simple \ref cpp_ns_akfntut "C++ class for tree-based and brute-force"
search

@section toc_akfntut Table of Contents
8 changes: 4 additions & 4 deletions doc/tutorials/cf/cf.txt
@@ -310,7 +310,7 @@ factorization with alternating least squares update rules). These include:

The amf::AMF<> class has many other possibilities than those listed here; it is
a framework for alternating matrix factorization techniques. See the
-\ref amf::AMF<> "class documentation" or \ref amftutorial "tutorial on AMF" for
+\ref mlpack::amf::AMF<> "class documentation" or \ref amftutorial "tutorial on AMF" for
more information.

The use of another factorizer is straightforward; the example from the previous
@@ -458,15 +458,15 @@ items and number of columns equal to the \c rank parameter, and \c H should have
number of rows equal to the \c rank parameter, and number of columns equal to
the number of users.

The \ref mlpack::amf::AMF "amf::AMF<> class" can be used as a base for
The \ref mlpack::amf::AMF<> "amf::AMF<> class" can be used as a base for
factorizers that alternate between updating \c W and updating \c H. A useful
reference is the \ref amftutorial "AMF tutorial".
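
As an aside, a factorizer plugged in here needs only an \c Apply() method
matching the shape requirements above; a minimal sketch (the hypothetical
\c DummyFactorizer is not from this commit):
@code
// V (items x users) is approximated as W (items x rank) * H (rank x users).
class DummyFactorizer
{
 public:
  double Apply(const arma::mat& V, const size_t rank,
               arma::mat& W, arma::mat& H)
  {
    // Random initialization only; a real factorizer would iterate
    // alternating update rules for W and H here.
    W.randu(V.n_rows, rank);
    H.randu(rank, V.n_cols);
    return arma::norm(V - W * H, "fro");  // Residual as final objective.
  }
};
@endcode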

@section further_doc_cftut Further documentation

Further documentation for the \c CF class may be found in the \ref
-mlpack::cf::CF "complete API documentation". In addition, more information on
-the \c AMF class of factorizers may be found in its \ref mlpack::amf::AMF
+mlpack::cf "complete API documentation". In addition, more information on
+the \c AMF class of factorizers may be found in its \ref mlpack::amf::AMF<>
"complete API documentation".

*/
4 changes: 2 additions & 2 deletions doc/tutorials/det/det.txt
@@ -344,9 +344,9 @@ For further documentation on the DTree class, consult the
The usual regularized error \f$R_\alpha(t)\f$ of a node \f$t\f$ is given by:
\f$R_\alpha(t) = R(t) + \alpha |\tilde{t}|\f$ where

-\f[
+\f{
R(t) = -\frac{|t|^2}{N^2 V(t)}.
-\f]
+\f}

\f$V(t)\f$ is the volume of the node \f$t\f$ and \f$\tilde{t}\f$ is
the set of leaves in the subtree rooted at \f$t\f$.
24 changes: 12 additions & 12 deletions doc/tutorials/image/image.txt
@@ -5,32 +5,32 @@

@page imagetutorial Image Utilities tutorial

-@section intro_imagetut Introduction
+@section intro_imagetu Introduction

Image datasets are becoming increasingly popular in deep learning.

mlpack's image saving/loading functionality is based on [stb/](https://github.com/nothings/stb).

-@section toc_imagetut Table of Contents
+@section toc_imagetu Table of Contents

This tutorial is split into the following sections:

-- \ref intro_imagetut
-- \ref toc_imagetut
-- \ref model_api_imagetut
-- \ref imageinfo_api_imagetut
-- \ref load_api_imagetut
-- \ref save_api_imagetut
+- \ref intro_imagetu
+- \ref toc_imagetu
+- \ref model_api_imagetu
+- \ref imageinfo_api_imagetu
+- \ref load_api_imagetu
+- \ref save_api_imagetu

-@section model_api_imagetut Model API
+@section model_api_imagetu Model API

Image utilities supports loading and saving of images.

It supports filetypes "jpg", "png", "tga", "bmp", "psd", "gif", "hdr", "pic", "pnm" for loading and "jpg", "png", "tga", "bmp", "hdr" for saving.

The associated datatype is unsigned char, supporting RGB values in the range 0-255. To feed the data into a network, a typecast of `arma::Mat` may be required. Images are stored in the matrix as (width * height * channels, NumberOfImages); therefore imageMatrix.col(0) would be the first image if images are loaded into imageMatrix.
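
A sketch of loading a single image under that layout (the filename is
illustrative):
@code
arma::Mat<unsigned char> imageMatrix;
mlpack::data::ImageInfo info;
// After loading, imageMatrix holds one column per image, each of length
// width * height * channels.
mlpack::data::Load("test_image.png", imageMatrix, info, false);
@endcode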

-@section imageinfo_api_imagetut ImageInfo
+@section imageinfo_api_imagetu ImageInfo

ImageInfo class contains the metadata of the images.
@code
@@ -48,7 +48,7 @@ ImageInfo class contains the metadata of the images.
Other public members include:
- quality Compression of the image if saved as jpg (0-100).

-@section load_api_imagetut Load
+@section load_api_imagetu Load


Standalone loading of images.
@@ -115,7 +115,7 @@ Loading multiple images:
data::load(files, matrix, info, false, true);
@endcode

-@section save_api_imagetut Save
+@section save_api_imagetu Save

Save images expects a matrix of type unsigned char in the form (width * height * channels, NumberOfImages).
Just like load it can be used to save one image or multiple images. Besides image data it also expects the shape of the image as input (width, height, channels).
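
A sketch of that call (the width, height, and channel values are
illustrative):
@code
// The shape travels with the ImageInfo object: width, height, channels.
mlpack::data::ImageInfo info(1920, 1080, 3);
mlpack::data::Save("output.png", imageMatrix, info, false);
@endcode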
2 changes: 1 addition & 1 deletion doc/tutorials/kmeans/kmeans.txt
@@ -140,7 +140,7 @@ last iteration when the cluster was not empty.
$ mlpack_kmeans -c 5 -i dataset.csv -v -e -o assignments.csv -C centroids.csv
@endcode

-@subsection cli_ex3_kmtut Killing empty clusters
+@subsection cli_ex3a_kmtut Killing empty clusters

If you would like to kill empty clusters , instead of reinitializing
them, simply specify the \c -E (\c --kill_empty_clusters) option. Note that
2 changes: 1 addition & 1 deletion doc/tutorials/linear_regression/linear_regression.txt
@@ -132,7 +132,7 @@ $ cat lr.xml
@endcode

As you can see, the function for this input is \f$f(y)=0+1x_1\f$. We can see
-that the model we have trained catches this; in the \c <parameters> section of
+that the model we have trained catches this; in the \c \<parameters\> section of
\c lr.xml, we can see that there are two elements, which are (approximately) 0
and 1. The first element corresponds to the intercept 0, and the second column
corresponds to the coefficient 1 for the variable \f$x_1\f$. Note that in this
