Fix Doxygen warnings #2400

Merged
merged 40 commits into from May 16, 2020
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
Show all changes
40 commits
Select commit Hold shift + click to select a range
56d3365
Fix the latex ELiSH expression.
zoq May 9, 2020
d3057f9
Use the correct expression.
zoq May 10, 2020
81d1dea
latex errors, unfixed as of yet
birm May 10, 2020
8d2ca2d
section name duplication
birm May 10, 2020
3507262
wrong and ambigious @file
birm May 10, 2020
da53245
fix tex for r2
birm May 10, 2020
989e9c0
tex fix mish
birm May 10, 2020
e21aa1b
Configure doxygen to immediately stop when a warning is encountered.
zoq May 10, 2020
1c5e1c8
namespece fix
birm May 10, 2020
e82643e
Do not use '\begin{equation}' to open another math environment.
zoq May 10, 2020
5037868
another dummyclass
birm May 10, 2020
fa875aa
don't try to document ens namespace
birm May 10, 2020
a5ccd71
changing [] to {} fixed somehow
birm May 10, 2020
2f13255
use namespace
birm May 11, 2020
1d5e9b7
re-add namespace qualifier
birm May 11, 2020
c02c006
try using standard format
birm May 11, 2020
65627ef
realign
birm May 11, 2020
1b6bba5
no idea what tutorial is being referenced
birm May 11, 2020
b5515bf
command and ref cleanup
birm May 11, 2020
1565765
doc fixes
birm May 11, 2020
3986361
fix missing, unused, wrong params
birm May 11, 2020
9eb9b31
backtrace style warn fix
birm May 11, 2020
a2acefb
Re-add "Regularization" to param str
birm May 11, 2020
dca79bb
remaining errs
birm May 11, 2020
0599cd0
No more breaking changes please :)
birm May 11, 2020
7c45bdd
Merge branch 'doxy-warn-err' of https://github.com/birm/mlpack into d…
birm May 11, 2020
c03136e
Realign namespace open braces
birm May 11, 2020
799b7e2
align params for SVDWrapper
birm May 11, 2020
f3ad22b
Merge remote-tracking branch 'zoq/elish-latex-fix' into doxy-warn-err
birm May 12, 2020
8f87dbb
Incorporate easily-incorporated suggestions
birm May 13, 2020
67bc405
suggestions which required testing
birm May 13, 2020
46d94d9
fix cf tutorial refs
birm May 13, 2020
3c65293
Add missing href end quote
birm May 13, 2020
496d121
use fuller path names
birm May 13, 2020
df42a83
Merge branch 'doxy-warn-err' of https://github.com/birm/mlpack into d…
birm May 13, 2020
0679529
ens ref to c
birm May 13, 2020
8bc4d69
re-add hpt refs
birm May 13, 2020
6ff6b88
NO_DOXYGEN more descriptive
birm May 13, 2020
420c63c
Merge remote-tracking branch 'upstream/master' into doxy-warn-err
birm May 15, 2020
a52b52f
use full path for normal dist
birm May 15, 2020
1 change: 1 addition & 0 deletions Doxyfile
@@ -75,6 +75,7 @@ FILE_VERSION_FILTER =
#---------------------------------------------------------------------------
QUIET = NO
WARNINGS = YES
WARN_AS_ERROR = YES
WARN_IF_UNDOCUMENTED = YES
WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = YES
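With WARN_AS_ERROR = YES, any Doxygen warning now aborts the documentation build rather than just being printed. A minimal sketch of the kind of mismatch this catches (the function below is hypothetical, not from mlpack):

@code
//! Scale the input value.
//!
//! @param x Input value.
//! @param alhpa Scale factor.
double Scale(const double x, const double alpha);
// The misspelled `alhpa` (vs. the real `alpha`) makes Doxygen report both an
// unknown and an undocumented parameter; with WARN_AS_ERROR = YES these now
// stop the run instead of just printing warnings.
@endcode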
14 changes: 7 additions & 7 deletions doc/guide/hpt.hpp
@@ -1,7 +1,7 @@
namespace mlpack {
namespace hpt {

/*! @page hpt Hyper-Parameter Tuning
/*! @page hpt_guide Hyper-Parameter Tuning

@section hptintro Introduction

@@ -32,7 +32,7 @@ The interface of the hyper-parameter tuning module is quite similar to the
interface of the @ref cv "cross-validation module". To construct a \c
HyperParameterTuner object you need to specify as template parameters what
machine learning algorithm, cross-validation strategy, performance measure, and
optimization strategy (\ref optimization::GridSearch "GridSearch" will be used by
optimization strategy (\c ens::GridSearch will be used by
default) you are going to use. Then, you must pass the same arguments as for
the cross-validation classes: the data and labels (or responses) to use are
given to the constructor, and the possible hyperparameter values are given to
@@ -68,7 +68,7 @@ computation time.
std::tie(bestLambda) = hpt.Optimize(lambdas);
@endcode

In this example we have used \ref optimization::GridSearch "GridSearch" (the
In this example we have used \c ens::GridSearch (the
default optimizer) to find a good value for the \c lambda hyper-parameter. For
that we have specified what values should be tried.

@@ -121,7 +121,7 @@ real-valued hyperparameters, but wish to further tune those values.
In this case, we can use a gradient-based optimizer for hyperparameter search.
In the following example, we try to optimize the \c lambda1 and \c lambda2
hyper-parameters for \ref regression::LARS "LARS" with the
\ref optimization::GradientDescent "GradientDescent" optimizer.
\c ens::GradientDescent optimizer.

@code
HyperParameterTuner<LARS, MSE, SimpleCV, GradientDescent> hpt3(validationSize,
@@ -147,7 +147,7 @@ hyper-parameters for \ref regression::LARS "LARS" with the

The \c HyperParameterTuner class is very similar to the
\ref cv::KFoldCV "KFoldCV" and \ref cv::SimpleCV "SimpleCV" classes (see the
@ref "cross-validation tutorial" for more information on those two classes), but
\ref cv "cross-validation tutorial" for more information on those two classes), but
there are a few important differences.

First, the \c HyperParameterTuner accepts five different hyperparameters; only
Expand Down Expand Up @@ -190,7 +190,7 @@ HyperParameterTuner<LinearRegression, MSE, SimpleCV> hpt(0.2, dataset,
@endcode

Next, we must set up the hyperparameters to be optimized. If we are doing a
grid search with the \ref optimization::GridSearch "GridSearch" optimizer (the
grid search with the \c ens::GridSearch optimizer (the
default), then we only need to pass a `std::vector` (for non-numeric
hyperparameters) or an `arma::vec` (for numeric hyperparameters) containing all
of the possible choices that we wish to search over.
@@ -222,7 +222,7 @@ Alternately, the \c Fixed() method (detailed in the @ref hptfixed
"Fixed arguments" section) can be used to fix the values of some parameters.

For continuous optimizers like
\ref optimization::GradientDescent "GradientDescent", a range does not need to
\c ens::GradientDescent, a range does not need to
be specified but instead only a single value. See the
\ref hptgradient "Gradient-Based Optimization" section for more details.

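The hunks above all reference the grid-search workflow; assembled from the guide's own snippets, a minimal sketch of that flow (the candidate lambda values are made up):

@code
// Assumed in scope: dataset (arma::mat) and responses (arma::rowvec).
HyperParameterTuner<LinearRegression, MSE, SimpleCV> hpt(0.2, dataset,
    responses);

// With the default ens::GridSearch optimizer, numeric hyperparameters are
// passed as an arma::vec of all candidate values to try.
arma::vec lambdas{0.0, 0.001, 0.01, 0.1, 1.0};

double bestLambda;
std::tie(bestLambda) = hpt.Optimize(lambdas);
@endcode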
2 changes: 1 addition & 1 deletion doc/policies/elemtype.hpp
@@ -1,6 +1,6 @@
/*! @page elem The ElemType policy in mlpack

@section Overview
@section elem_overview Overview

\b mlpack algorithms should be as generic as possible. Often this means
allowing arbitrary metrics or kernels to be used, but this also means allowing
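As a sketch of what the ElemType policy enables (an illustrative function, not from the diff): code templatized on the element type works with any Armadillo matrix type.

@code
template<typename ElemType>
ElemType SumElements(const arma::Mat<ElemType>& data)
{
  // Works for arma::mat (double), arma::fmat (float), and so on.
  return arma::accu(data);
}
@endcode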
5 changes: 2 additions & 3 deletions doc/policies/kernels.hpp
@@ -19,9 +19,8 @@ A kernel (or `Mercer kernel') \f$\mathcal{K}(\cdot, \cdot)\f$ takes two objects
as input and returns some sort of similarity value. The specific details and
properties of kernels are outside the scope of this documentation; for a better
introduction to kernels and kernel methods, there are numerous better resources
available, including \ref
http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html "Eric Kim's
tutorial".
available, including
<a href="http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html">Eric Kim's tutorial</a>

mlpack implements a number of kernel methods and, accordingly, each of these
methods allows arbitrary kernels to be used via the \c KernelType template
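A sketch of the kind of class the \c KernelType template parameter accepts; the signature is simplified and should be read as illustrative rather than normative:

@code
class LinearKernelSketch
{
 public:
  // A kernel needs an Evaluate() method returning a similarity value
  // for a pair of points.
  template<typename VecTypeA, typename VecTypeB>
  double Evaluate(const VecTypeA& a, const VecTypeB& b) const
  {
    return arma::dot(a, b);  // the linear kernel, as a toy example
  }
};
@endcode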
4 changes: 3 additions & 1 deletion doc/policies/trees.hpp
@@ -47,11 +47,13 @@ are nearby should lie in similar nodes.
We can rigorously define what a tree is, using the definition of **space tree**
introduced in the following paper:

@code
@quote
R.R. Curtin, W.B. March, P. Ram, D.V. Anderson, A.G. Gray, and C.L. Isbell Jr.,
"Tree-independent dual-tree algorithms," in Proceedings of the 30th
International Conference on Machine Learning (ICML '13), pp. 1435--1443, 2013.
@endquote
@endcode

The definition is:

@@ -398,7 +400,7 @@ This section is divided into five parts:

@subsection treetype_rigorous_template Template parameters

\ref treetype_template_param "An earlier section" discussed the three different
\ref treetype_template_params "An earlier section" discussed the three different
template parameters that are required by the \c TreeType policy.

The \ref metrics "MetricType policy" provides one method that will be useful for
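The truncated line above refers to the MetricType policy's single required method; a simplified, illustrative sketch (the exact mlpack signature may differ):

@code
class EuclideanDistanceSketch
{
 public:
  // A metric exposes Evaluate(), returning the distance between two points.
  template<typename VecTypeA, typename VecTypeB>
  double Evaluate(const VecTypeA& a, const VecTypeB& b) const
  {
    return arma::norm(a - b, 2);
  }
};
@endcode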
25 changes: 12 additions & 13 deletions doc/tutorials/ann/ann.txt
@@ -79,9 +79,8 @@ have a number of methods in common:

@note
To be able to optimize the network, both classes implement the OptimizerFunction
API; see \ref optimizertutorial "Optimizer API" for more information. In short,
the \c FNN and \c RNN class implement two methods: \c Evaluate() and \c
Gradient(). This enables the optimization given some learner and some
API. In short, the \c FNN and \c RNN class implement two methods: \c Evaluate()
and \c Gradient(). This enables the optimization given some learner and some
performance measure.

Similar to the existing layer infrastructure, the \c FFN and \c RNN classes are
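The Evaluate()/Gradient() contract described a few lines up is what lets an optimizer drive the network; a toy sketch of that interface (abbreviated, with a made-up quadratic loss; the real mlpack methods take additional arguments):

@code
// Any class exposing these two methods can be driven by an
// ensmallen-style optimizer such as ens::GradientDescent.
class DifferentiableFunctionSketch
{
 public:
  // Return the loss at the given parameter matrix.
  double Evaluate(const arma::mat& parameters)
  {
    return arma::accu(arma::square(parameters));
  }

  // Write the gradient of the loss at `parameters` into `gradient`.
  void Gradient(const arma::mat& parameters, arma::mat& gradient)
  {
    gradient = 2 * parameters;
  }
};
@endcode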
@@ -230,13 +229,13 @@ int main()
arma::mat predictionTemp;
model.Predict(testData, predictionTemp);

/*
Since the predictionsTemp is of dimensions (3 x number_of_data_points)
with continuous values, we first need to reduce it to a dimension of
/*
Since the predictionsTemp is of dimensions (3 x number_of_data_points)
with continuous values, we first need to reduce it to a dimension of
(1 x number_of_data_points) with scalar values, to be able to compare with
testLabels.

The first step towards doing this is to create a matrix of zeros with the
The first step towards doing this is to create a matrix of zeros with the
desired dimensions (1 x number_of_data_points).

In predictionsTemp, the 3 dimensions for each data point correspond to the
@@ -252,8 +251,8 @@ int main()
arma::max(predictionTemp.col(i)) == predictionTemp.col(i), 1)) + 1;
}

/*
Compute the error between predictions and testLabels,
/*
Compute the error between predictions and testLabels,
now that we have the desired predictions.
*/
size_t correct = arma::accu(prediction == testLabels);
@@ -266,7 +265,7 @@
@endcode

Now, the matrix prediction holds the classification of each point in the
dataset. Subsequently, we find the classification error by comparing it
dataset. Subsequently, we find the classification error by comparing it
with testLabels.

In the next example, we create simple noisy sine sequences, which are trained
@@ -420,7 +419,7 @@ not work; this example exists for its API, not its implementation.
Note that layers sometimes have different properties. These properties are
known at compile-time through the mlpack::ann::LayerTraits class, and some
properties may imply the existence (or non-existence) of certain functions.
Refer to the LayerTraits @ref LayerTraits for more documentation on that.
Refer to the LayerTraits @ref layer_traits.hpp for more documentation on that.

The two template parameters below must be template parameters to the layer, in
the order given below. More template parameters are fine, but they must come
@@ -662,10 +661,10 @@ $ cat model.xml
</boost_serialization>
@endcode

As you can see, the \c <parameter> section of \c model.xml contains the trained
As you can see, the \c \<parameter\> section of \c model.xml contains the trained
network weights. We can see that this section also contains the network input
size, which is 66 rows and 1 column. Note that in this example, we used three
different layers, as can be seen by looking at the \c <network> section. Each
different layers, as can be seen by looking at the \c \<network\> section. Each
node has a unique id that is used to reconstruct the model when loading.

The models can also be saved as \c .bin or \c .txt; the \c .xml format provides
2 changes: 1 addition & 1 deletion doc/tutorials/approx_kfn/approx_kfn.txt
@@ -87,7 +87,7 @@ In order to solve this problem, \b mlpack provides a number of interfaces.
approximate furthest neighbors
- a simple \ref cpp_qdafn_akfntut "C++ class for QDAFN"
- a simple \ref cpp_ds_akfntut "C++ class for DrusillaSelect"
- a simple \ref cpp_kfn_akfntut "C++ class for tree-based and brute-force"
- a simple \ref cpp_ns_akfntut "C++ class for tree-based and brute-force"
search

@section toc_akfntut Table of Contents
8 changes: 4 additions & 4 deletions doc/tutorials/cf/cf.txt
@@ -310,7 +310,7 @@ factorization with alternating least squares update rules). These include:

The amf::AMF<> class has many other possibilities than those listed here; it is
a framework for alternating matrix factorization techniques. See the
\ref amf::AMF<> "class documentation" or \ref amftutorial "tutorial on AMF" for
\ref mlpack::amf::AMF<> "class documentation" or \ref amftutorial "tutorial on AMF" for
more information.

The use of another factorizer is straightforward; the example from the previous
@@ -458,15 +458,15 @@ items and number of columns equal to the \c rank parameter, and \c H should have
number of rows equal to the \c rank parameter, and number of columns equal to
the number of users.

The \ref mlpack::amf::AMF "amf::AMF<> class" can be used as a base for
The \ref mlpack::amf::AMF<> "amf::AMF<> class" can be used as a base for
factorizers that alternate between updating \c W and updating \c H. A useful
reference is the \ref amftutorial "AMF tutorial".

@section further_doc_cftut Further documentation

Further documentation for the \c CF class may be found in the \ref
mlpack::cf::CF "complete API documentation". In addition, more information on
the \c AMF class of factorizers may be found in its \ref mlpack::amf::AMF
mlpack::cf "complete API documentation". In addition, more information on
the \c AMF class of factorizers may be found in its \ref mlpack::amf::AMF<>
"complete API documentation".

*/
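To make the shape contract in the hunk above concrete, a short sketch with hypothetical sizes:

@code
// For a rating matrix with numItems rows and numUsers columns, and a
// chosen rank r, the factorizer's outputs have these shapes:
const size_t numItems = 100, numUsers = 50, r = 10;  // made-up sizes
arma::mat W(numItems, r);   // rows = items, cols = rank
arma::mat H(r, numUsers);   // rows = rank,  cols = users
arma::mat approx = W * H;   // approximates the (items x users) matrix
@endcode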
4 changes: 2 additions & 2 deletions doc/tutorials/det/det.txt
@@ -344,9 +344,9 @@ For further documentation on the DTree class, consult the
The usual regularized error \f$R_\alpha(t)\f$ of a node \f$t\f$ is given by:
\f$R_\alpha(t) = R(t) + \alpha |\tilde{t}|\f$ where

\f[
\f{
R(t) = -\frac{|t|^2}{N^2 V(t)}.
\f]
\f}

\f$V(t)\f$ is the volume of the node \f$t\f$ and \f$\tilde{t}\f$ is
the set of leaves in the subtree rooted at \f$t\f$.

Review thread on the \f[ to \f{ change:
birm (author): I don't know why this fixed this, or why it was an issue. Not a latex person.
Reviewer: If I remember right \f{ will not center the formula; really strange that we get a warning here.
24 changes: 12 additions & 12 deletions doc/tutorials/image/image.txt
@@ -5,32 +5,32 @@

@page imagetutorial Image Utilities tutorial

@section intro_imagetut Introduction
@section intro_imagetu Introduction

Image datasets are becoming increasingly popular in deep learning.

mlpack's image saving/loading functionality is based on [stb/](https://github.com/nothings/stb).

@section toc_imagetut Table of Contents
@section toc_imagetu Table of Contents

This tutorial is split into the following sections:

- \ref intro_imagetut
- \ref toc_imagetut
- \ref model_api_imagetut
- \ref imageinfo_api_imagetut
- \ref load_api_imagetut
- \ref save_api_imagetut
- \ref intro_imagetu
- \ref toc_imagetu
- \ref model_api_imagetu
- \ref imageinfo_api_imagetu
- \ref load_api_imagetu
- \ref save_api_imagetu

@section model_api_imagetut Model API
@section model_api_imagetu Model API

Image utilities support loading and saving of images.

Supported filetypes are "jpg", "png", "tga", "bmp", "psd", "gif", "hdr", "pic", and "pnm" for loading, and "jpg", "png", "tga", "bmp", and "hdr" for saving.

The associated datatype is unsigned char, to support RGB values in the range 0-255. To feed data into the network, a typecast of `arma::Mat` may be required. Images are stored in the matrix as (width * height * channels, NumberOfImages); therefore imageMatrix.col(0) would be the first image if images are loaded in imageMatrix.

@section imageinfo_api_imagetut ImageInfo
@section imageinfo_api_imagetu ImageInfo

ImageInfo class contains the metadata of the images.
@code
@@ -48,7 +48,7 @@ ImageInfo class contains the metadata of the images.
Other public members include:
- quality: compression of the image if saved as jpg (0-100).

@section load_api_imagetut Load
@section load_api_imagetu Load


Standalone loading of images.
@@ -115,7 +115,7 @@ Loading multiple images:
data::load(files, matrix, info, false, true);
@endcode

@section save_api_imagetut Save
@section save_api_imagetu Save

Saving images expects a matrix of type unsigned char in the form (width * height * channels, NumberOfImages).
Just like Load, it can be used to save one image or multiple images. Besides image data, it also expects the shape of the image as input (width, height, channels).
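A sketch of the single-image round trip the Load and Save sections describe (file names hypothetical; consult the tutorial for the exact overloads):

@code
mlpack::data::ImageInfo info;
arma::Mat<unsigned char> image;

// Load one image; info receives its width, height, and channel count.
mlpack::data::Load("test_image.png", image, info, false);

// Save it back out using the same shape metadata.
mlpack::data::Save("test_image_out.png", image, info, false);
@endcode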
2 changes: 1 addition & 1 deletion doc/tutorials/kmeans/kmeans.txt
@@ -140,7 +140,7 @@ last iteration when the cluster was not empty.
$ mlpack_kmeans -c 5 -i dataset.csv -v -e -o assignments.csv -C centroids.csv
@endcode

@subsection cli_ex3_kmtut Killing empty clusters
@subsection cli_ex3a_kmtut Killing empty clusters

If you would like to kill empty clusters, instead of reinitializing
them, simply specify the \c -E (\c --kill_empty_clusters) option. Note that
2 changes: 1 addition & 1 deletion doc/tutorials/linear_regression/linear_regression.txt
@@ -132,7 +132,7 @@ $ cat lr.xml
@endcode

As you can see, the function for this input is \f$f(y)=0+1x_1\f$. We can see
that the model we have trained catches this; in the \c <parameters> section of
that the model we have trained catches this; in the \c \<parameters\> section of
\c lr.xml, we can see that there are two elements, which are (approximately) 0
and 1. The first element corresponds to the intercept 0, and the second column
corresponds to the coefficient 1 for the variable \f$x_1\f$. Note that in this
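To close, a hand-rolled sketch of how the two serialized parameters map to a prediction (mlpack's LinearRegression does this internally):

@code
// parameters(0) is the intercept, parameters(1) the coefficient of x1,
// matching the two elements in the <parameters> section of lr.xml.
double PredictSketch(const arma::vec& parameters, const double x1)
{
  return parameters(0) + parameters(1) * x1;  // here: 0 + 1 * x1
}
@endcode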