Commit 6cb00d7 (1 parent: 3078342): update NEWS
File changed: inst/NEWS.Rd (101 additions, 16 deletions)

Updated contents of inst/NEWS.Rd:
\name{NEWS}
\alias{NEWS}
\title{Recent changes to the fda package}
\description{
  \itemize{
    \item{Changes in version fda_6.0.3 2022-05-02:}{
      \itemize{
        \item{Landmark registration:}{Landmark registration using function
        \code{landmarkreg} can no longer be done with function \code{smooth.basis}
        in place of function \code{smooth.morph}. The warping function must be
        strictly monotonic, and we have found that using \code{smooth.basis} too
        often violates this monotonicity constraint. Function \code{smooth.morph}
        ensures monotonicity and in most applications takes negligible computer
        time to do so.
        }
        \item{PACE in fda:}{
        Function \code{pcaPACE} carries out a functional PCA with regularization from
        the estimate of the covariance surface.

        Function \code{scoresPACE} estimates functional Principal Component
        scores through Conditional Expectation (PACE).
        }
        \item{Further changes to \code{smooth.morph} and \code{landmarkreg}:}{
        \code{smooth.morph} estimates a warping function when the target of the fit by
        registration is a functional data object. This function has been extended
        to work when the target for the fit and the fitted functions have different
        ranges or domains. The warping also maps each boundary into its target
        boundary. Similarly, \code{landmarkreg} uses a small number of discrete
        values to define the warping, and now has an extra argument, \code{x0lim},
        that defines the range of the target domain. Since \code{x0lim} defaults to the
        range of the warped domain, the function continues to work as before when the
        argument is not supplied and the domains have the same range.
        }
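        For readers who want to see where the new argument fits, here is a minimal
        sketch of a landmark registration call with an explicit target range. The data
        objects (\code{accelfd}, \code{ximarks}, \code{x0marks}) are hypothetical
        placeholders, and the argument and result names other than \code{x0lim} follow
        the historical \code{landmarkreg} interface; consult \code{help(landmarkreg)}
        for the exact 6.0.3 signature.

        \preformatted{
library(fda)

## Hypothetical inputs: accelfd is an fd object holding the curves to register,
## ximarks is an N-by-NL matrix of landmark locations (one row per curve),
## and x0marks is a vector of NL target landmark locations.
wbasis  <- create.bspline.basis(rangeval = c(1, 18), nbasis = 10, norder = 4)
WfdPar  <- fdPar(fd(matrix(0, 10, 1), wbasis), Lfdobj = 2, lambda = 1e-10)

reglist <- landmarkreg(accelfd, ximarks, x0marks,
                       x0lim  = c(1, 18),   # range of the target domain
                       WfdPar = WfdPar)     # smoothness of the warping functions

regfd  <- reglist$regfd    # registered curves
warpfd <- reglist$warpfd   # warping functions, kept monotone via smooth.morph
        }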
        \item{Surprisal smoothing:}{Function \code{smooth.surp} works with multinomial
        data that evolve over a continuum, such as the value of a latent variable in
        psychometrics. A multinomial observation consists of a set of
        probabilities that are in the open interval (0,1) and sum to one.
        The surprisal value S(P_m) corresponding to a probability P_m is
        -log_M(P_m), where M is the number of probabilities and is the base of
        the logarithm. The inverse function is P(S_m) = M^(-S_m).

        Surprisal is also known as "self-information" in the field of information
        theory. It has the characteristics of a true metric: surprisals can be
        added, multiplied by positive numbers, and the difference between two
        surprisal values means the same thing everywhere along the information
        continuum. The unit of the metric is called the "M-bit", the
        generalization of the familiar "bit" or "2-bit" for binary data.
        The metric property is not possessed by so-called latent
        variables because they can be arbitrarily monotonically transformed.

        Smoothing surprisal data is much easier and faster than smoothing
        probabilities since surprisal values are only constrained to be
        non-negative and are otherwise unbounded.

        The function \code{smooth.surp} estimates smooth curves which fit a set of
        surprisal values and which also satisfy the constraint that their
        probability versions sum to one.
        }
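        As a concrete illustration of the transformation itself, here is a minimal
        base-R sketch, independent of any package function; the probability vector
        is an arbitrary example.

        \preformatted{
## A single multinomial observation with M = 5 categories
prob <- c(0.40, 0.25, 0.20, 0.10, 0.05)
M    <- length(prob)

## Surprisal: S_m = -log_M(P_m), the logarithm taken to base M
surp <- -log(prob, base = M)

## Inverse transform: P_m = M^(-S_m) recovers the probabilities
all.equal(M^(-surp), prob)   # TRUE

## so the probability version of a surprisal vector sums to one
sum(M^(-surp))               # 1
        }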
        \item{Improvements in iterative optimisation:}{
        Many functions in the fda package optimize a fitting criterion
        iteratively. Function \code{smooth.monotone} is an example.
        The optimisation algorithm used was a rather early design,
        and many improvements have since been made. In most of our
        optimisations we have switched to the algorithm found in the
        Numerical Recipes volumes of Press, Teukolsky, Vetterling and
        Flannery. We have noticed a big improvement in speed, and we are
        in the process of upgrading all of our optimisers to use this
        approach.
        }
      }
    }
    \item{Changes in version fda_5.5.0 2021-10-28:}{
      \itemize{
        \item{Smooth and constrained curves:}{
        Many data smoothing situations require that the smooth curves satisfy some
        constraints.

        Take function \code{smooth.monotone.R} for example. Its curves are either strictly
        increasing or strictly decreasing, even though the data are not. This is the case
        in modelling human growth, where we can reasonably assume that daily or monthly
        measurements will reflect a trend that increases everywhere.

        Function \code{smooth.morph.R}, which plays an important role in curve
        registration, adds the additional constraint that the domain limits are mapped
        exactly into the range limits.

        In this version two new constrained curves are introduced. Nonsingular multinomial
        probability vectors contain nonzero probabilities that sum to one. A simple
        transformation of these probabilities, $S = -log(P)$, converts probabilities into
        what is often called surprisal. Surprisal is a measure of information where the
        unit of measurement is the M-bit, where $M$ is the length of the multinomial
        vector. Information measured in this way can be added and subtracted, and fixed
        differences mean the same thing anywhere along the surprisal continuum, which is
        positive with an origin at 0. Probability 1 corresponds to surprisal 0, and a
        very small probability produces a very large positive surprisal. Probabilities
        0.05 and 0.01 correspond to 2-bit surprisals 4.3 and 6.6, respectively.

        Probability curves result if the probabilities change over a continuous scale,
        often called a latent variable in statistics. The corresponding surprisal curves
        satisfy the constraint $log(sum(M^(-S))) = 0$ at any index value. The unbounded
        nature of surprisal curves, together with their metric property, makes them much
        easier to work with computationally.

        Function \code{smooth.surp.R} and the error-sum-of-squares fitting function
        \code{surp.fit.R} are added in this version to support the package
        \code{TestGardener}, which analyzes choice or psychometric data.
        }
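        A quick base-R check of the figures quoted above; the probability vector used
        for the constraint is an arbitrary example.

        \preformatted{
## 2-bit surprisals for the two probabilities mentioned in the text
-log2(c(0.05, 0.01))   # 4.32 and 6.64

## The constraint log(sum(M^(-S))) = 0 holds because the implied
## probabilities sum to one; here with M = 5
P <- c(0.40, 0.25, 0.20, 0.10, 0.05)
S <- -log(P, base = 5)
log(sum(5^(-S)), base = 5)   # 0
        }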
      }
    }
  }
}