-
Notifications
You must be signed in to change notification settings - Fork 1.2k
/
integrator_base.h
1691 lines (1507 loc) · 75.9 KB
/
integrator_base.h
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
#pragma once
#include <algorithm>
#include <cmath>
#include <limits>
#include <memory>
#include <utility>
#include <vector>
#include "drake/common/default_scalars.h"
#include "drake/common/drake_assert.h"
#include "drake/common/drake_bool.h"
#include "drake/common/drake_copyable.h"
#include "drake/common/text_logging.h"
#include "drake/common/trajectories/piecewise_polynomial.h"
#include "drake/systems/framework/context.h"
#include "drake/systems/framework/system.h"
#include "drake/systems/framework/vector_base.h"
namespace drake {
namespace systems {
/** @addtogroup simulation
@{
@defgroup integrators Integrators
Apart from solving initial value problems, for which the integrator is a
key component of a simulator, integrators can also be used to solve
boundary value problems (via numerical methods like the Multiple Shooting
Method) and trajectory optimization problems (via numerical methods like
direct transcription). IntegratorBase and its derivatives were developed
primarily toward the former application (through
IntegratorBase::IntegrateNoFurtherThanTime() and the Simulator class).
However, the IntegratorBase architecture was developed to support these
ancillary applications as well using the
IntegratorBase::IntegrateWithMultipleStepsToTime() and
IntegratorBase::IntegrateWithSingleFixedStepToTime() methods; the latter
permits the caller to advance time using fixed steps in applications where
variable stepping would be deleterious (e.g., direct transcription).
@section integrator-selection Integrator selection
A natural question for a user to ask of an integrator is: Which scheme
(method) should be applied to a particular problem? The answer is whichever
one most quickly computes the solution to the desired accuracy! Selecting
an integration scheme for a particular problem is presently an artform. As
examples of some selection criteria: multistep methods (none of which are
currently implemented in Drake) generally work poorly when events (that
require state reinitializations) are common, symplectic methods generally
work well at maintaining stability for large integration steps, and stiff
integrators are often best for computationally stiff systems. If ignorant
as to the characteristics of a particular problem, it is often best to start
with an explicit, Runge-Kutta type method. Statistics collected by the
integrator can help diagnose performance issues and possibly point to the
use of a different integration scheme.
Some systems are known to exhibit "computational stiffness", by which it is
meant that (excessively) small integration steps are necessary for purposes
of stability: in other words, steps must be taken smaller than that
required to achieve a desired accuracy *over a particular interval*.
Thus, the nature of computationally stiff problems is that the solution to
the ODE is *smooth* in the interval of stiffness (in contrast, some problems
possess such high frequency dynamics that very small steps are simply
necessary to capture the solution accurately). Implicit integrators are the
go-to approach for solving computationally stiff problems, but careful
consideration is warranted. Implicit integrators typically require much more
computation than non-implicit (explicit) integrators, stiffness might be an
issue on only a very small time interval, and some problems might be only
"moderately stiff". Put another way, applying an implicit integrator to a
potentially stiff problem might not yield faster computation. The first
chapter of [Hairer, 1996] illustrates the issues broached in this paragraph
using various examples.
@section settings Integrator settings
IntegratorBase provides numerous settings and flags that can leverage
problem-specific information to speed integration and/or improve integration
accuracy. As an example, IntegratorBase::set_maximum_step_size() allows the
user to prevent overly large integration steps (that integration error
control alone might be insufficient to detect). As noted previously,
IntegratorBase also collects a plethora of statistics that can be used to
diagnose poor integration performance. For example, a large number of
shrinkages due to @ref error-estimation-and-control "error control" could
indicate that a system is computationally stiff. **Note that you might need
to alter the default settings to obtain desired performance even though we
have attempted to select reasonable defaults for many problems.**
See settings for @ref integrator-accuracy,
@ref integrator-maxstep "maximum step size",
@ref integrator-minstep "minimum step size", and
@ref weighting-state-errors "weighting state errors" for
in-depth information about the various performance settings shared across
integrators.
@section dense-sampling Dense sampling (interpolation)
For applications that require a more dense sampling of the system
continuous state than what would be available through either fixed or
error-controlled step integration (for a given accuracy), dense output
support is available (through IntegratorBase::StartDenseIntegration() and
IntegratorBase::StopDenseIntegration() methods). The accuracy and performance
of these outputs may vary with each integration scheme implementation.
@section references References
- [Hairer, 1996] E. Hairer and G. Wanner. Solving Ordinary Differential
Equations II (Stiff and Differential-Algebraic Problems).
Springer, 1996.
@}
*/
/**
An abstract class for an integrator for ODEs and DAEs as represented by a
Drake System. Integrators solve initial value problems of the form:<pre>
ẋ(t) = f(t, x(t)) with f : ℝ × ℝⁿ → ℝⁿ
</pre>
(i.e., `f()` is an ordinary differential equation) given initial conditions
(t₀, x₀). Thus, integrators advance the continuous state of a dynamical
system forward in time.
Drake's subclasses of IntegratorBase<T> should follow the naming pattern
`FooIntegrator<T>` by convention.
@tparam_default_scalar
@ingroup integrators
*/
template <class T>
class IntegratorBase {
public:
DRAKE_NO_COPY_NO_MOVE_NO_ASSIGN(IntegratorBase)
/**
Status returned by IntegrateNoFurtherThanTime().
When a step is successful, it will return an indication of what caused it
to stop where it did. When unsuccessful it will throw an exception so you
won't see any return value. When return of control is due ONLY to reaching
a publish time, (status is kReachedPublishTime) the context may return an
interpolated value at an earlier time.
@note the simulation step must always end at an update time but can end
after a publish time.
*/
// TODO(edrumwri): incorporate kReachedZeroCrossing into the simulator.
enum StepResult {
/// Indicates a publish time has been reached but not an update time.
kReachedPublishTime = 1,
/// Localized an event; this is the *before* state (interpolated).
kReachedZeroCrossing = 2,
/// Indicates that integration terminated at an update time.
kReachedUpdateTime = 3,
/// User requested control whenever an internal step is successful.
kTimeHasAdvanced = 4,
/// Reached the desired integration time without reaching an update time.
kReachedBoundaryTime = 5,
/// Took maximum number of steps without finishing integrating over the
/// interval.
kReachedStepLimit = 6,
};
/**
Maintains references to the system being integrated and the context used
to specify the initial conditions for that system (if any).
@param system A reference to the system to be integrated; the integrator
will maintain a reference to the system in perpetuity, so
the integrator must not outlive the system.
@param context A pointer to a writeable context (nullptr is ok, but a
non-null pointer must be set before Initialize() is
called). The integrator will advance the system state using
the pointer to this context. The pointer to the context will
be maintained internally. The integrator must not outlive
the context.
*/
explicit IntegratorBase(const System<T>& system,
Context<T>* context = nullptr)
: system_(system), context_(context) {
initialization_done_ = false;
}
virtual ~IntegratorBase() = default;
/**
@anchor integrator-accuracy
@name Methods for getting and setting integrator accuracy
The precise meaning of *accuracy* is a complicated discussion, but it
translates roughly to the number of significant digits you want in the
results. By convention it is supplied as `10^-digits`, meaning that an
accuracy of 1e-3 provides about three significant digits. For more
discussion of accuracy, see @ref accuracy_and_tolerance and ref.
[[1]](https://dx.doi.org/10.1016/j.piutam.2011.04.023).
Integrators vary in the range of accuracy (loosest to tightest) that they
can support, and each integrator will choose a default accuracy to be used
that lies somewhere within this range and attempts to balance computation
and accuracy. If you request accuracy outside the supported range for the
chosen integrator it will be quietly adjusted to be in range. You can find
out the accuracy setting actually being used using `get_accuracy_in_use()`.
Implicit integrators additionally use the accuracy setting for determining
when the underlying Newton-Raphson root finding process has converged. For
those integrators, the accuracy setting also limits the allowable iteration
error in the Newton-Raphson process. Looser accuracy in that process
certainly implies greater error in the ODE solution and might impact the
stability of the solution negatively as well.
- [1] M. Sherman, A. Seth, S. Delp. Procedia IUTAM 2:241-261 (2011),
Section 3.3. https://dx.doi.org/10.1016/j.piutam.2011.04.023
@{
*/
/**
Request that the integrator attempt to achieve a particular accuracy for
the continuous portions of the simulation. Otherwise a default accuracy is
chosen for you. This may be ignored for fixed-step integration since
accuracy control requires variable step sizes. You should call
supports_error_estimation() to ensure that the
integrator supports this capability before calling this function; if
the integrator does not support it, this method will throw an exception.
@throws std::exception if integrator does not support error
estimation.
*/
// TODO(edrumwri): complain if integrator with error estimation wants to drop
// below the minimum step size
void set_target_accuracy(double accuracy) {
if (!supports_error_estimation())
throw std::logic_error(
"Integrator does not support accuracy estimation "
"and user has requested error control");
target_accuracy_ = accuracy;
accuracy_in_use_ = accuracy;
}
/**
Gets the target accuracy.
@sa get_accuracy_in_use()
*/
double get_target_accuracy() const { return target_accuracy_; }
/**
Gets the accuracy in use by the integrator. This number may differ from
the target accuracy if, for example, the user has requested an accuracy
not attainable or not recommended for the particular integrator.
*/
double get_accuracy_in_use() const { return accuracy_in_use_; }
// @}
/**
@anchor error-estimation-and-control
@name Methods related to error estimation and control
Established methods for integrating ordinary differential equations
invariably make provisions for estimating the "local error" (i.e., the
error over a small time interval) of a solution. Although the relationship
between local error and global error (i.e., the accumulated error over
multiple time steps) can be tenuous, such error estimates can allow
integrators to work adaptively, subdividing time intervals as necessary
(if, e.g., the system is particularly dynamic or stationary in an interval).
Even for applications that do not recommend such adaptive integration- like
direct transcription methods for trajectory optimization- error estimation
allows the user to assess the accuracy of the solution.
@{
*/
/**
Derived classes must override this function to indicate whether the
integrator supports error estimation. Without error estimation, the target
accuracy setting (see @ref integrator-accuracy "accuracy settings") will be
unused.
*/
virtual bool supports_error_estimation() const = 0;
/**
Derived classes must override this function to return the order of the
asymptotic term in the integrator's error estimate. An error estimator
approximates the truncation error in an integrator's solution. That
truncation error e(.) is approximated by a Taylor Series expansion in the
neighborhood around t:
@verbatim
e(t+h) ≈ e(t) + he(t) + he'(t) + ½h²e''(t) + ...
≈ e(t) + he(t) + he'(t) + ½h²e''(t) + O(h³)
@endverbatim
where we have replaced the "..." with the asymptotic error of all terms
truncated from the series.
Implementions should return the order of the asymptotic term in the Taylor
Series expansion around the expression for the error. For an integrator
that propagates a second-order solution and provides an estimate of the
error using an embedded first-order method, this method should return "2",
as can be seen in the derivation below, using y* as the true solution:
@verbatim
y̅ = y* + O(h³) [second order solution]
ŷ = y* + O(h²) [embedded first-order method]
e = (y̅ - ŷ) = O(h²)
@endverbatim
If the integrator does not provide an error estimate, the derived class
implementation should return 0.
*/
virtual int get_error_estimate_order() const = 0;
/**
Gets the error estimate (used only for integrators that support error
estimation). If the integrator does not support error estimation, nullptr
is returned.
*/
const ContinuousState<T>* get_error_estimate() const {
return err_est_.get();
}
/**
Return the step size the integrator would like to take next, based
primarily on the integrator's accuracy prediction. This value will not
be computed for integrators that do not support error estimation and
NaN will be returned.
*/
const T& get_ideal_next_step_size() const { return ideal_next_step_size_; }
/**
Sets an integrator with error control to fixed step mode. If the integrator
runs in fixed step mode, it will always take the maximum step size
directed (which may be that determined by get_maximum_step_size(), or may
be smaller, as directed by, e.g., Simulator for event handling purposes).
@warning The error estimation process will still be active (so
get_error_estimate() will still return a correct result), meaning
that the additional (typically, but not necessarily small)
computation required for error estimation will still be performed.
@throws std::exception if integrator does not support error
estimation and @p flag is set to `false`.
*/
void set_fixed_step_mode(bool flag) {
if (!flag && !supports_error_estimation())
throw std::logic_error("Integrator does not support accuracy estimation");
fixed_step_mode_ = flag;
}
/**
Gets whether an integrator is running in fixed step mode. If the integrator
does not support error estimation, this function will always return `true`.
@sa set_fixed_step_mode()
*/
bool get_fixed_step_mode() const {
return (!supports_error_estimation() || fixed_step_mode_);
}
// @}
/**
@name Methods for weighting state variable errors \
(in the context of error control)
@anchor weighting-state-errors
@{
This group of methods describes how errors for state variables with
heterogeneous units are weighted in the context of error-controlled
integration. This is an advanced topic and most users can simply specify
desired accuracy and accept the default state variable weights.
A collection of state variables is generally defined in heterogeneous units
(e.g. length, angles, velocities, energy). Some of the state
variables cannot even be expressed in meaningful units, like
quaternions. Certain integrators provide an estimate of the absolute error
made in each state variable during an integration step. These errors must
be properly weighted to obtain an "accuracy" _with respect to each
particular variable_. These per-variable accuracy determinations can be
compared against the user's requirements and used to select an appropriate
size for the next step [Sherman 2011]. The weights are
normally determined automatically using the system's characteristic
dimensions, so *most users can stop reading now!* Custom weighting is
primarily useful for performance improvement; an optimal weighting would
allow an error-controlled integrator to provide the desired level of
accuracy across all state variables without wasting computation
achieving superfluous accuracy for some of those variables.
Users interested in more precise control over state variable weighting may
use the methods in this group to access and modify weighting factors for
individual state variables. Changes to these weights can only be made prior
to integrator initialization or as a result of an event being triggered
and then followed by re-initialization.
_Relative versus absolute accuracy_:
%State variable integration error, as estimated by an integrator, is an
absolute quantity with the same
units as the variable. At each time step we therefore need to determine
an absolute error that would be deemed "good enough", i.e. satisfies
the user's accuracy requirement. If a variable is maintained to a
_relative_ accuracy then that "good enough" value is defined to be the
required accuracy `a` (a fraction like 0.001) times the current value of
the variable, as long as that value
is far from zero. For variables maintained to an *absolute* accuracy, or
relative variables that are at or near zero (where relative accuracy would
be undefined or too strict, respectively), we need a different way to
determine the "good enough" absolute error. The methods in this section
control how that absolute error value is calculated.
_How to choose weights_:
The weight `wᵢ` for a state variable `xᵢ` should be
chosen so that the product `wᵢ * dxᵢ` is unitless, and in particular is 1
when `dxᵢ` represents a "unit effect" of state variable `xᵢ`; that is, the
change in `xᵢ` that produces a unit change in some quantity of interest in
the system being simulated. Why unity (1)? Aside from normalizing the
values, unity "grounds" the weighted error to the user-specified accuracy.
A weighting can be applied individually to each state variable, but
typically it is done approximately by combining the known type of the
variable (e.g. length, angle) with a "characteristic scale" for that
quantity. For example, if a "characteristic length" for the system being
simulated is 0.1 meters, and `x₀` is a length variable measured in meters,
then `w₀` should be 10 so that `w₀*dx₀=1` when `dx₀=0.1`. For angles
representing pointing accuracy (say a camera direction) we typically assume
a "characteristic angle" is one radian (about 60 degrees), so if x₁ is a
pointing direction then w₁=1 is an appropriate weight. We can now scale an
error vector `e=[dx₀ dx₁]` to a unitless fractional error vector
`f=[w₀*dx₀ w₁*dx₁]`. Now to achieve a given accuracy `a`, say `a=.0001`,
we need only check that `|fᵢ|<=a` for each element `i` of `f`. Further,
this gives us a quantitative measure of "worst accuracy" that we can use
to increase or reduce size of the next attempted step, so that we will just
achieve the required accuracy but not much more. We'll be more precise
about this below.
@anchor quasi_coordinates
_Some subtleties for second-order dynamic systems_:
Systems governed by 2nd-order differential equations are typically split
into second order (configuration) variables q, and rate (velocity)
variables v, where the time derivatives qdot of q are linearly related to
v by the kinematic differential equation `qdot = dq/dt = N(q)*v`.
Velocity variables are
chosen to be physically significant, but configuration variables
may be chosen for convenience and do not necessarily have direct physical
interpretation. For examples, quaternions are chosen as a numerically
stable orientation representation. This is problematic for choosing weights
which must be done by physical reasoning
as sketched above. We resolve this by introducing
the notion of "quasi-coordinates" ꝗ (pronounced "qbar") which are defined
by the equation `ꝗdot = dꝗ/dt = v`. Other than time scaling,
quasi-coordinates have the same units as their corresponding velocity
variables. That is, for weighting we need to think
of the configuration coordinates in the same physical space as the velocity
variables; weight those by their physical significance; and then map back
to an instantaneous weighting
on the actual configuration variables q. This mapping is performed
automatically; you need only to be concerned about physical weightings.
Note that generalized quasi-coordinates `ꝗ` can only be defined locally for
a particular configuration `q`. There is in general no meaningful set of
`n` generalized
coordinates which can be differentiated with respect to time to yield `v`.
For example, the Hairy Ball Theorem implies that it is not possible for
three orientation variables to represent all 3D rotations without
singularities, yet three velocity variables can represent angular velocity
in 3D without singularities.
To summarize, separate weights can be provided for each of
- `n` generalized quasi-coordinates `ꝗ` (configuration variables in the
velocity variable space), and
- `nz` miscellaneous continuous state variables `z`.
Weights on the generalized velocity variables `v (= dꝗ/dt)` are derived
directly from the weights on `ꝗ`, weighted by a characteristic time.
Weights on the actual `nq` generalized coordinates can
be calculated efficiently from weights on the quasi-coordinates (details
below).
_How the weights are used_:
The errors in the `ꝗ` and `z` variables are weighted by the diagonal
elements
of diagonal weighting matrices Wꝗ and Wz, respectively. (The block-diagonal
weighting matrix `Wq` on the original generalized coordinates `q` is
calculated from `N` and `Wꝗ`; see below.) In the absence of
other information, the default for all weighting values is one, so `Wꝗ` and
`Wz` are `n × n` and `nz × nz` identity matrices. The weighting matrix `Wv`
for the velocity variables is just `Wv = τ*Wꝗ` where `τ` is a
"characteristic time" for the system, that is, a quantity in time units
that represents a significant evolution of the trajectory. This serves to
control the accuracy with which velocity is determined relative to
configuration. Note that larger values of `τ` are more conservative since
they increase the velocity weights. Typically we use `τ=1.0` or `0.1`
seconds for human-scale mechanical systems.
<!-- TODO(sherm1): provide more guidance for velocity weighting. -->
The weighting matrices `Wq`, `Wv`, and `Wz` are used to compute a weighted
infinity norm as follows. Although `Wv` and `Wz` are constant, the actual
weightings may be state dependent for relative-error calculations.
Define block diagonal error weighting matrix `E=diag(Eq,Ev,Ez)` as follows:
<pre>
Eq = Wq
Ev: Ev(i,i) = { min(Wv(i,i), 1/|vᵢ|) if vᵢ is relative
{ Wv(i,i) if vᵢ is absolute
Ez: Ez(i,i) = { min(Wz(i,i), 1/|zᵢ|) if zᵢ is relative
{ Wz(i,i) if zᵢ is absolute
</pre>
(`Ev` and `Ez` are diagonal.) A `v` or `z` will be maintained to relative
accuracy unless (a) it is "close" to zero (less than 1), or (b) the
variable has been defined as requiring absolute accuracy. Position
variables `q` are always maintained to absolute accuracy (see
[Sherman 2011] for rationale).
Now given an error estimate vector `e=[eq ev ez]`, the vector `f=E*e`
can be considered to provide a unitless fractional error for each of the
state variables. To achieve a given user-specified accuracy `a`, we require
that norm_inf(`f`) <= `a`. That is, no element of `f` can have absolute
value larger than `a`. We also use `f` to determine an ideal next step
size using an appropriate integrator-specific computation.
_Determining weights for q_:
The kinematic differential equations `qdot=N(q)*v` employ an `nq × n`
matrix `N`. By construction, this relationship is invertible using `N`'s
left pseudo-inverse `N⁺` so that `v=N⁺ qdot` and `N⁺ N = I` (the identity
matrix); however, `N N⁺ != I`, as `N` has more rows than columns generally.
[Nikravesh 1988] shows how such a matrix `N` can be determined and provides
more information. Given this relationship between `N` and `N⁺`, we can
relate weighted errors in configuration coordinates `q` to weighted errors
in generalized quasi-coordinates `ꝗ`, as the following derivation shows:
<pre>
v = N⁺ qdot Inverse kinematic differential equation
dꝗ/dt = N⁺ dq/dt Use synonyms for v and qdot
dꝗ = N⁺ dq Change time derivatives to differentials
Wꝗ dꝗ = Wꝗ N⁺ dq Pre-multiply both sides by Wꝗ
N Wꝗ dꝗ = N Wꝗ N⁺ dq Pre-multiply both sides by N
N Wꝗ dꝗ = Wq dq Define Wq := N Wꝗ N⁺
N Wꝗ v = Wq qdot Back to time derivatives.
</pre>
The last two equations show that `Wq` as defined above provides the
expected relationship between the weighted `ꝗ` or `v` variables in velocity
space and the weighted `q` or `qdot` (resp.) variables in configuration
space.
Finally, note that a diagonal entry of one of the weighting matrices can
be set to zero to disable error estimation for that state variable
(i.e., auxiliary variable or configuration/velocity variable pair), but
that setting an entry to a negative value will cause an exception to be
thrown when the integrator is initialized.
- [Nikravesh 1988] P. Nikravesh. Computer-Aided Analysis of Mechanical
Systems. Prentice Hall, 1988. Sec. 6.3.
- [Sherman 2011] M. Sherman, et al. Procedia IUTAM 2:241-261 (2011),
Section 3.3.
http://dx.doi.org/10.1016/j.piutam.2011.04.023
@sa CalcStateChangeNorm()
*/
/**
Gets the weighting vector (equivalent to a diagonal matrix) applied to
weighting both generalized coordinate and velocity state variable errors,
as described in the group documentation. Only used for integrators that
support error estimation.
*/
const Eigen::VectorXd& get_generalized_state_weight_vector() const {
return qbar_weight_;
}
/**
Gets a mutable weighting vector (equivalent to a diagonal matrix) applied
to weighting both generalized coordinate and velocity state variable
errors, as described in the group documentation. Only used for
integrators that support error estimation. Returns a VectorBlock
to make the values mutable without permitting changing the size of
the vector. Requires re-initializing the integrator after calling
this method; if Initialize() is not called afterward, an exception will be
thrown when attempting to call IntegrateNoFurtherThanTime(). If the caller
sets one of the entries to a negative value, an exception will be thrown
when the integrator is initialized.
*/
Eigen::VectorBlock<Eigen::VectorXd>
get_mutable_generalized_state_weight_vector() {
initialization_done_ = false;
return qbar_weight_.head(qbar_weight_.rows());
}
/**
Gets the weighting vector (equivalent to a diagonal matrix) for
weighting errors in miscellaneous continuous state variables `z`. Only used
for integrators that support error estimation.
*/
const Eigen::VectorXd& get_misc_state_weight_vector() const {
return z_weight_;
}
/**
Gets a mutable weighting vector (equivalent to a diagonal matrix) for
weighting errors in miscellaneous continuous state variables `z`. Only used
for integrators that support error estimation. Returns a VectorBlock
to make the values mutable without permitting changing the size of
the vector. Requires re-initializing the integrator after calling this
method. If Initialize() is not called afterward, an exception will be
thrown when attempting to call IntegrateNoFurtherThanTime(). If the caller
sets one of the entries to a negative value, an exception will be thrown
when the integrator is initialized.
*/
Eigen::VectorBlock<Eigen::VectorXd> get_mutable_misc_state_weight_vector() {
initialization_done_ = false;
return z_weight_.head(z_weight_.rows());
}
// @}
/**
@anchor integrator-initial-step-size
@name Methods related to initial step size
From [Watts 1983], "One of the more critical issues in solving ordinary
differential equations by a step-by-step process occurs in the starting
phase. Somehow the procedure must be supplied with an initial step size that
is on scale for the problem at hand. It must be small enough to yield a
reliable solution by the process, but it should not be so small as to
significantly affect the efficiency of solution. The more important of these
two possibilities is obviously the reliability question. The first step taken
by the code must reflect how fast the solution changes near the initial
point. For general purpose computing, an automatic step size adjustment
procedure for choosing subsequent steps is essential to produce an accurate
solution efficiently. This step size control is usually based on estimates of
the local errors incurred by the numerical method. Because most codes also
employ algorithmic devices which restrict the step size control to be
moderately varying (for reliability), subsequent steps usually tend to stay
on scale of the problem. This is not always so, as sometimes happens when
working with crude tolerances on problems having rapidly varying components.
Nevertheless, most step size adjustment procedures deal reasonably well with
all but the most abrupt changes, leaving the most serious danger confined to
the starting step size."
Users may not have a good idea of an initial step size to take, so
integration codes usually attempt to automatically select an initial step
size. Sophisticated algorithms for initial step size selection are described
in [Hindmarsh 1980], [Watts 1983], [Gladwell 1987], and [Hairer 2008]. These
algorithms can fail to produce a good initial step size as well (see
discussion in [Watts 1983]). Drake's integrators use a fraction (generally
1/10th of the maximum step size) to set the initial step size. If you have a
problem that operates at wildly varying time scales, e.g., Robertson's
canonical stiff system problem (that requires a large maximum step size to
be efficient), consider setting both the initial and maximum step sizes
(i.e., not using the defaults) to keep from missing phenomena that occur over
small time scales near the beginning of the time interval being integrated.
- [Gladwell 1987] I. Gladwell, L. F. Shampine, and R. W. Brankin. Automatic
selection of the initial step size for an ODE solver. J.
Comp. Appl. Math., Vol. 18, pp. 175-192, 1987.
- [Hairer 2008] E. Hairer, S. P. Norsett, and G. Wanner. Solving Ordinary
Differential Equations I (Nonstiff Problems), 2nd ed.
Springer, 2008.
- [Hindmarsh 1980] A. C. Hindmarsh. LSODE and LSODI, two new initial value
ordinary differential equation solvers. ACM Signum
Newletter 15, 4, 1980.
- [Robertson 1966] H.H. Robertson. The solution of a set of reaction rate
equations, pp. 178–182. Academic Press, 1966.
- [Watts 1983] H. A. Watts. Starting stepsize for an ODE solver. J. Comp.
Appl. Math., Vol. 8, pp. 177-191, 1983.
@{
*/
/**
Request that the first attempted integration step have a particular size.
If no request is made, the integrator will estimate a suitable size
for the initial step attempt. *If the integrator does not support error
control*, this method will throw a std::exception (call
supports_error_estimation() to verify before calling this method). For
variable-step integration, the initial target will be treated as a maximum
step size subject to accuracy requirements and event occurrences. You can
find out what size *actually* worked with
`get_actual_initial_step_size_taken()`.
@throws std::exception If the integrator does not support error
estimation.
*/
void request_initial_step_size_target(const T& step_size) {
using std::isnan;
if (!supports_error_estimation())
throw std::logic_error(
"Integrator does not support error estimation and "
"user has initial step size target");
req_initial_step_size_ = step_size;
}
/**
Gets the target size of the first integration step. You can find out what
step size was *actually* used for the first integration step with
`get_actual_initial_step_size_taken()`.
@see request_initial_step_size_target()
*/
const T& get_initial_step_size_target() const {
return req_initial_step_size_;
}
// @}
/**
@anchor integrator-maxstep
@name Methods related to maximum integration step size
Sets the _nominal_ maximum step size- the actual maximum step size taken
may be slightly larger (see set_maximum_step_size() and
get_stretch_factor())- that an integrator will take. Each integrator has a
default maximum step size, which might be infinite.
@{
*/
/**
Sets the maximum step size that may be taken by this integrator. This setting
should be used if you know the maximum time scale of your problem. The
integrator may stretch the maximum step size by as much as 1% to reach a
discrete event. For fixed step integrators, all steps will be taken at the
maximum step size *unless* an event would be missed.
@warning See @ref integrator-initial-step-size "Initial step size selection"
*/
// TODO(edrumwri): Update this comment when stretch size is configurable.
void set_maximum_step_size(const T& max_step_size) {
DRAKE_ASSERT(max_step_size >= 0.0);
max_step_size_ = max_step_size;
}
/**
Gets the maximum step size that may be taken by this integrator. This is
a soft maximum: the integrator may stretch it by as much as 1% to hit a
discrete event.
@sa set_requested_minimum_step_size()
*/
// TODO(edrumwri): Update this comment when stretch size is configurable.
const T& get_maximum_step_size() const { return max_step_size_; }
/**
Gets the stretch factor (> 1), which is multiplied by the maximum
(typically user-designated) integration step size to obtain the amount
that the integrator is able to stretch the maximum time step toward
hitting an upcoming publish or update event in
IntegrateNoFurtherThanTime().
@sa IntegrateNoFurtherThanTime()
*/
double get_stretch_factor() const { return 1.01; }
// @}
/**
@anchor integrator-minstep
@name Methods related to minimum integration step size selection and behavior
Variable step integrators reduce their step sizes as needed to achieve
requirements such as specified accuracy or step convergence. However, it is
not possible to take an arbitrarily small step. Normally integrators choose
an appropriate minimum step and throw an exception if the requirements can't
be achieved without going below that. Methods in this section allow you to
influence two aspects of this procedure:
- you can increase the minimum step size, and
- you can control whether an exception is thrown if a smaller step would have
been needed to achieve the aforementioned integrator requirements.
By default, integrators allow a very small minimum step which can result in
long run times. Setting a larger minimum can be helpful as a diagnostic to
figure out what aspect of your simulation is requiring small steps. You can
set the minimum to what should be a "reasonable" minimum based on what you
know about the physical system. You will then get an std::runtime_error
exception thrown at any point in time where your model behaves unexpectedly
(due to, e.g., a discontinuity in the derivative evaluation function).
If you disable the exception (via
`set_throw_on_minimum_step_size_violation(false)`), the integrator will
simply proceed with a step of the minimum size: accuracy is guaranteed only
when the minimum step size is not violated. Beware that there can be no
guarantee about the magnitude of any errors introduced by violating the
accuracy "requirements" in this manner, so disabling the exception should be
done warily.
#### Details
Because time is maintained to finite precision, the integrator uses a scalar
`h_floor` to constrain time step h ≥ `h_floor` such that `current_time + h >
current_time` will be strictly satisfied. The integrator will never
automatically decrease its step below `h_floor`. We calculate `h_floor=max(ε,
ε⋅abs(t))`, where t is the current time and ε is a small multiple of machine
precision, typically a number like 1e-14. Note that `h_floor` necessarily
grows with time; if that is a concern you should limit how long your
simulations are allowed to run without resetting time.
You may request a larger minimum step size `h_min`. Then at every time t, the
integrator determines a "working" minimum `h_work=max(h_min, h_floor)`. If
the step size selection algorithm determines that a step smaller than
`h_work` is needed to meet accuracy or other needs, then a std::runtime_error
exception will be thrown and the simulation halted. On the other hand, if you
have suppressed the exception (again, via
`set_throw_on_minimum_step_size_violation(false)`), the integration will
continue, taking a step of size `h_work`.
Under some circumstances the integrator may legitimately take a step of size
`h` smaller than your specified `h_min`, although never smaller than
`h_floor`. For example, occasionally the integrator may reach an event or
time limit that occurs a very short time after the end of a previous step,
necessitating that a tiny "sliver" of a step be taken to complete the
interval. That does not indicate an error, and required accuracy and
convergence goals are achieved. Larger steps can resume immediately
afterwards. Another circumstance is when one of the integrator's stepping
methods is called directly requesting a very small step, for example
`IntegrateWithMultipleStepsToTime(h)`. No exception will be thrown in either
of these cases.
*/
//@{
/**
Sets the requested minimum step size `h_min` that may be taken by this
integrator. No step smaller than this will be taken except under
circumstances as described @ref integrator-minstep "above". This setting will
be ignored if it is smaller than the absolute minimum `h_floor` also
described above. Default value is zero.
@param min_step_size a non-negative value. Setting this value to zero
will cause the integrator to use a reasonable value
instead (see get_working_minimum_step_size()).
@sa get_requested_minimum_step_size()
@sa get_working_minimum_step_size()
*/
void set_requested_minimum_step_size(const T& min_step_size) {
DRAKE_ASSERT(min_step_size >= 0.0);
req_min_step_size_ = min_step_size;
}
/**
Gets the requested minimum step size `h_min` for this integrator.
@sa set_requested_minimum_step_size()
@sa get_working_minimum_step_size(T)
*/
const T& get_requested_minimum_step_size() const {
return req_min_step_size_; }
/**
Sets whether the integrator should throw a std::exception
when the integrator's step size selection algorithm determines that it
must take a step smaller than the minimum step size (for, e.g., purposes
of error control). Default is `true`. If `false`, the integrator will
advance time and state using the minimum specified step size in such
situations. See @ref integrator-minstep "this section" for more detail.
*/
void set_throw_on_minimum_step_size_violation(bool throws) {
min_step_exceeded_throws_ = throws;
}
/**
Reports the current setting of the throw_on_minimum_step_size_violation
flag.
@sa set_throw_on_minimum_step_size_violation().
*/
bool get_throw_on_minimum_step_size_violation() const {
return min_step_exceeded_throws_;
}
/**
Gets the current value of the working minimum step size `h_work(t)` for
this integrator, which may vary with the current time t as stored in the
integrator's context.
See @ref integrator-minstep "this section" for more detail.
*/
T get_working_minimum_step_size() const {
using std::max;
using std::abs;
// Tolerance is just a number close to machine epsilon.
const double tol = 1e-14;
const T smart_minimum = max(tol, abs(get_context().get_time()) * tol);
return max(smart_minimum, req_min_step_size_);
}
// @}
/**
Resets the integrator to initial values, i.e., default construction
values.
*/
void Reset() {
// Kill the error estimate and weighting matrices.
err_est_.reset();
qbar_weight_.setZero(0);
z_weight_.setZero(0);
pinvN_dq_change_.reset();
unweighted_substate_change_.setZero(0);
weighted_q_change_.reset();
// Drops dense output, if any.
dense_output_.reset();
// Integrator no longer operates in fixed step mode.
fixed_step_mode_ = false;
// Statistics no longer valid.
ResetStatistics();
// Wipe out settings.
req_min_step_size_ = 0;
max_step_size_ = nan();
accuracy_in_use_ = nan();
// Indicate values used for error controlled integration no longer valid.
prev_step_size_ = nan();
ideal_next_step_size_ = nan();
// Call the derived integrator reset routine.
DoReset();
// Indicate that initialization is necessary.
initialization_done_ = false;
}
/**
An integrator must be initialized before being used. The pointer to the
context must be set before Initialize() is called (or an std::exception
will be thrown). If Initialize() is not called, an exception will be
thrown when attempting to call IntegrateNoFurtherThanTime(). To reinitialize
the integrator, Reset() should be called followed by Initialize().
@throws std::exception If the context has not been set or a user-set
parameter has been set illogically (i.e., one of the
weighting matrix coefficients is set to a negative value- this
check is only performed for integrators that support error
estimation; the maximum step size is smaller than the minimum
step size; the requested initial step size is outside of the
interval [minimum step size, maximum step size]).
@sa Reset()
*/
void Initialize() {
if (!context_) throw std::logic_error("Context has not been set.");
// Verify that user settings are reasonable.
if constexpr (scalar_predicate<T>::is_bool) {
if (max_step_size_ < req_min_step_size_) {
throw std::logic_error("Integrator maximum step size is less than the "
"minimum step size");
}
if (req_initial_step_size_ > max_step_size_) {
throw std::logic_error("Requested integrator initial step size is "
"larger than the maximum step size.");
}
if (req_initial_step_size_ < req_min_step_size_) {
throw std::logic_error("Requested integrator initial step size is "
"smaller than the minimum step size.");
}
}
// TODO(edrumwri): Compute qbar_weight_, z_weight_ automatically.
// Set error weighting vectors if not already done.
if (supports_error_estimation()) {
// Allocate space for the error estimate.
err_est_ = system_.AllocateTimeDerivatives();
const auto& xc = context_->get_state().get_continuous_state();
const int gv_size = xc.get_generalized_velocity().size();
const int misc_size = xc.get_misc_continuous_state().size();
if (qbar_weight_.size() != gv_size) qbar_weight_.setOnes(gv_size);
if (z_weight_.size() != misc_size) z_weight_.setOnes(misc_size);
// Verify that minimum values of the weighting matrices are non-negative.
if ((qbar_weight_.size() && qbar_weight_.minCoeff() < 0) ||
(z_weight_.size() && z_weight_.minCoeff() < 0))
throw std::logic_error("Scaling coefficient is less than zero.");
}
// Statistics no longer valid.
ResetStatistics();
// Call the derived integrator initialization routine (if any)
DoInitialize();
initialization_done_ = true;
}
/**
(Internal use only) Integrates the system forward in time by a single step
with step size subject to integration error tolerances (assuming that the
integrator supports error estimation). The integrator must already have
been initialized or an exception will be thrown. The context will be
integrated to a time that will never exceed the minimum of
`publish_time`, `update_time`, and the current time plus
`1.01 * get_maximum_step_size()`.
@param publish_time The present or future time (exception will be thrown
if this is not the case) at which the next publish will occur.
@param update_time The present or future time (exception will be thrown
if this is not the case) at which the next update will occur.
@param boundary_time The present or future time (exception will be thrown
if this is not the case) marking the end of the user-designated
simulated interval.
@throws std::exception If the integrator has not been initialized or one
of publish_time, update_time, or boundary_time is
in the past.
@return The reason for the integration step ending.
@post The time in the context will be no greater than
`min(publish_time, update_time, boundary_time)`.
@warning Users should generally not call this function directly; within
simulation circumstances, users will typically call
`Simulator::AdvanceTo()`. In other circumstances, users will
typically call
`IntegratorBase::IntegrateWithMultipleStepsToTime()`.
This method at a glance:
- For integrating ODEs/DAEs via Simulator
- Supports fixed step and variable step integration schemes
- Takes only a single step forward.
*/
// TODO(edrumwri): Make the stretch size configurable.
StepResult IntegrateNoFurtherThanTime(
const T& publish_time, const T& update_time, const T& boundary_time);
/**
Stepping function for integrators operating outside of Simulator that
advances the continuous state exactly to `t_final`. This method is
designed for integrator users that do not wish to consider publishing or
discontinuous, mid-interval updates. This method will step the integrator
multiple times, as necessary, to attain requested error tolerances and
to ensure the integrator converges.
@warning Users should simulate systems using `Simulator::AdvanceTo()` in
place of this function (which was created for off-simulation
purposes), generally.
@param t_final The current or future time to integrate to.
@throws std::exception If the integrator has not been initialized or
t_final is in the past.
@sa IntegrateNoFurtherThanTime(), which is designed to be operated by
Simulator and accounts for publishing and state reinitialization.
@sa IntegrateWithSingleFixedStepToTime(), which is also designed to be
operated *outside of* Simulator, but throws an exception if the
integrator cannot advance time to `t_final` in a single step.