Consider the observation model (model 1)

y = Ax + e,

where e is the observational error term. The least squares (LS) estimation of the main parameter x reads

x̂ = (A'PA)^(-1) A'Py,

where P is the weight matrix of the observations, and

Q_x = (A'PA)^(-1)

is the co-factor matrix of the main parameters.
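As a concrete illustration, the weighted LS estimate and its co-factor matrix can be computed directly. The design matrix, true parameters, weights, and noise level below are toy values of our own choosing, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: 6 observations, 2 main parameters (values are ours).
A = rng.standard_normal((6, 2))        # design matrix
x_true = np.array([1.0, -2.0])
y = A @ x_true + 0.01 * rng.standard_normal(6)
P = np.diag(rng.uniform(0.5, 2.0, 6))  # weight matrix of the observations

# LS estimate: x_hat = (A'PA)^(-1) A'P y
N = A.T @ P @ A                        # normal matrix
x_hat = np.linalg.solve(N, A.T @ P @ y)
Q_x = np.linalg.inv(N)                 # co-factor matrix of the main parameters

print(x_hat)
```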
The observation model with a constant nuisance parameter s (model 2) reads

y = Ax + zs + e,

where e is the observational error term and z = [1 1 ... 1]'. According to the nuisance elimination theory, the LS estimation of the main parameter x reads (solution 1)

x̂ = (A'JA)^(-1) A'Jy,

where J is the reweighting matrix,

J = P - Pz(z'Pz)^(-1) z'P.
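The nuisance elimination step can be checked numerically: estimating x through the reweighting matrix must give the same result as jointly estimating x and s from the full model (a Schur-complement identity). The toy design, weights, and noise below are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, 2))        # toy design matrix (ours)
z = np.ones((n, 1))                    # nuisance design: z = [1 1 ... 1]'
x_true, s_true = np.array([0.5, 1.5]), 3.0
y = A @ x_true + z.ravel() * s_true + 0.01 * rng.standard_normal(n)
P = np.diag(rng.uniform(0.5, 2.0, n))

# Reweighting matrix J = P - P z (z'Pz)^(-1) z'P
J = P - P @ z @ np.linalg.inv(z.T @ P @ z) @ z.T @ P
x_hat = np.linalg.solve(A.T @ J @ A, A.T @ J @ y)

# Cross-check against jointly estimating [x; s] from the full model
B = np.hstack([A, z])
full = np.linalg.solve(B.T @ P @ B, B.T @ P @ y)
print(np.allclose(x_hat, full[:2]))   # True: elimination matches the joint solve
```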
For a differencing operator K that satisfies

Kz = 0,

left-multiplying both sides of model 2 by K results in

Ky = KAx + Ke.

Taking into account the correlation between the differenced observations, i.e., the co-factor matrix of the differenced observations

Q_K = K P^(-1) K',

the differenced solution reads (solution 2)

x̂ = (A' J_bar A)^(-1) A' J_bar y,

where

J_bar = K'(K P^(-1) K')^(-1) K

is called the differencing equivalence weight (DEW) matrix, which again amounts to reweighting the observations, as in solution 1.
Let n be the number of observations. If rank(K) = n - 1 and [K' z] is of full column rank, solution 1 is equivalent to solution 2, because the equality

K'(K P^(-1) K')^(-1) K = P - Pz(z'Pz)^(-1) z'P

always holds.
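This equality is easy to verify numerically. Below, K is the successive-difference operator (one valid choice satisfying Kz = 0 and rank(K) = n - 1); the weights are arbitrary toy values of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
z = np.ones((n, 1))
P = np.diag(rng.uniform(0.5, 2.0, n))  # toy weights (ours)

# Successive-difference operator K: row i is e_{i+1} - e_i, so K z = 0
K = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# DEW matrix J_bar = K'(K P^(-1) K')^(-1) K
Q = np.linalg.inv(P)
J_bar = K.T @ np.linalg.inv(K @ Q @ K.T) @ K

# Reweighting matrix from nuisance elimination
J = P - P @ z @ np.linalg.inv(z.T @ P @ z) @ z.T @ P

print(np.allclose(J_bar, J))          # True: the two weightings coincide
```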
In the equal-weight case (P = I), we can establish the following conversion formula between solution 2 and solution 1:

J = I - (1/n) zz',

where a_ij = δ_ij - 1/n is the (i, j) entry of this matrix, i.e., 1 - 1/n on the diagonal and -1/n off the diagonal.
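A quick sketch confirming that the entry-wise form agrees with the general reweighting formula when P = I (dimensions here are arbitrary):

```python
import numpy as np

n = 5
z = np.ones((n, 1))

# Entry-wise equal-weight form: J = I - (1/n) z z', entries delta_ij - 1/n
J = np.eye(n) - np.outer(z, z) / n

# Same matrix from the general formula J = P - P z (z'Pz)^(-1) z'P with P = I
P = np.eye(n)
J_general = P - P @ z @ np.linalg.inv(z.T @ P @ z) @ z.T @ P
print(np.allclose(J, J_general))      # True
```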
The above formula can be generalized to the unequal-weight case, where the entries become

a_ij = p_i δ_ij - p_i p_j / (p_1 + p_2 + ... + p_n),

where the p_i are the weights of the observations. The conversion formula between the variance estimations can also be established.
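The unequal-weight entry formula can likewise be checked against the matrix form; the weights below are toy values of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
p = rng.uniform(0.5, 2.0, n)           # observation weights p_i (toy values)
P, z = np.diag(p), np.ones((n, 1))

# Entry-wise form: a_ij = p_i * delta_ij - p_i * p_j / sum(p)
J_entry = np.diag(p) - np.outer(p, p) / p.sum()

# General reweighting matrix for comparison
J = P - P @ z @ np.linalg.inv(z.T @ P @ z) @ z.T @ P
print(np.allclose(J_entry, J))         # True
```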
The above formula can also be generalized to the case z = [z_1 z_2 ... z_n]', which indicates that the error terms related to the nuisance parameters are not constant.
Model 2 can be generalized to the following case with multiple nuisance parameters:

y = Ax + Zs + e,

where the columns of Z play the role of z. The conversion formula can be generalized from one nuisance parameter to multiple nuisance parameters of the above form.
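For the multi-nuisance case, the reweighting matrix takes the same form with the column z replaced by the block Z, and elimination again matches the joint solve. All numeric values below are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 10, 3
A = rng.standard_normal((n, 2))        # toy design (ours)
Z = rng.standard_normal((n, q))        # q nuisance-parameter columns
P = np.diag(rng.uniform(0.5, 2.0, n))
y = (A @ np.array([1.0, -1.0]) + Z @ rng.standard_normal(q)
     + 0.01 * rng.standard_normal(n))

# Reweighting matrix with z generalized to the block Z
J = P - P @ Z @ np.linalg.inv(Z.T @ P @ Z) @ Z.T @ P
x_hat = np.linalg.solve(A.T @ J @ A, A.T @ J @ y)

# Cross-check against the jointly estimated full model [x; s]
B = np.hstack([A, Z])
full = np.linalg.solve(B.T @ P @ B, B.T @ P @ y)
print(np.allclose(x_hat, full[:2]))    # True
```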
We can develop a very efficient algorithm for solving models with nuisance parameters based on the conversion formulae. The code files are listed as follows:
- FastDiffSolEW.m
Function: Dimension-reduction algorithm for equal-weight observations
- FastDiffSol2UEW.m
Function: Dimension-reduction algorithm for unequal-weight observations
- RapidDiffSolEW.m
Function: Blocking-stacking algorithm for equal-weight observations
- UnDiffSolSim.m
Function: Blocking-stacking algorithm for unequal-weight observations
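The listed MATLAB files are not reproduced here; the following Python sketch illustrates the dimension-reduction idea they implement, under our own toy model of q observation sets that each carry a constant nuisance bias (the names and sizes are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
q, m, t = 4, 6, 2                      # sets, obs per set, main parameters
x_true = rng.standard_normal(t)
A = [rng.standard_normal((m, t)) for _ in range(q)]
s = rng.standard_normal(q)             # one constant nuisance bias per set
y = [A[k] @ x_true + s[k] + 0.01 * rng.standard_normal(m) for k in range(q)]

# Dimension-reduction: eliminate each s_k with J = I - (1/m) 1 1',
# accumulating the small t x t normal equations set by set.
J = np.eye(m) - np.ones((m, m)) / m
N = sum(A[k].T @ J @ A[k] for k in range(q))
b = sum(A[k].T @ J @ y[k] for k in range(q))
x_dr = np.linalg.solve(N, b)

# Blocking-stacking: solve the full (t + q)-parameter normal equations
B = np.zeros((q * m, t + q))
yy = np.concatenate(y)
for k in range(q):
    B[k * m:(k + 1) * m, :t] = A[k]
    B[k * m:(k + 1) * m, t + k] = 1.0
x_full = np.linalg.solve(B.T @ B, B.T @ yy)[:t]
print(np.allclose(x_dr, x_full))       # True: same estimate, smaller system
```

The dimension-reduction route only ever factors a t x t matrix, which is why its cost grows so slowly with q, while the stacked system grows with t + q.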
Coming back to observation model 2, assume that the number of observations in each set is , and let the number q of nuisance parameters increase from 1 to 200 to test the performance of the proposed algorithm. To illustrate the running speed, we define the index c = 1/t, where t is the running time of the algorithm; the speed ratio k = c(dimension-reduction algorithm) / c(blocking-stacking algorithm) is then used to show the improvement of the proposed algorithm over the blocking-stacking algorithm.
Fig. 1 Running time as the number of nuisance parameters increases: (a) equal-weight case; (b) unequal-weight case
In the equal-weight case, Fig. 1(a) shows that the running time of the blocking-stacking algorithm increases linearly with the number q of nuisance parameters, while that of the dimension-reduction algorithm increases very slowly and stays far below 0.1 s. The speed ratios show that the improvement over the blocking-stacking algorithm is substantial: for example, the proposed algorithm is more than 100 times faster once the number of nuisance parameters exceeds 30. In the unequal-weight case, shown in Fig. 1(b), the same conclusion can be drawn: the proposed algorithm is still more efficient and takes at most 0.1 s. Although the speed ratios are relatively smaller, they still increase linearly with the number of nuisance parameters, i.e., the performance of the proposed dimension-reduction algorithm remains outstanding.

Fig. 2 Running time as the number of observations increases: (a) equal-weight case; (b) unequal-weight case
To test how the running time increases with the number of observations, we fix the number of nuisance parameters at q = 50 and increase the number of observations in each set from 2 to 2000. As shown in Fig. 2, the dimension-reduction algorithm is not only still efficient, but its running time also grows only linearly with the number of observations. This is a very desirable property for an algorithm designed to process the huge number of observations in modern geodetic positioning.