<!doctype html>
<html>
<head>
<meta charset='utf-8'>
<script type='text/javascript' src='tryToFindMathJax.js'></script>
<title>Integrating a Vector Field in Curvilinear Coordinates</title>
</head>
<body>
<br>
<h3>Correct Finite Volume Integration of a Tensor Field on a Manifold</h3>
<br>
by Christopher Moore<br>
<br>
A lot of my sources come from my differential geometry notes at <a href='https://thenumbernine.github.io/'>https://thenumbernine.github.io/</a>, especially the one on 'parallel propagators'.<br>
<br>
Integrating a vector field in Cartesian components, vs integrating a vector field wrt a fixed frame:<br>
<br>
I hope you can already see that integrating a vector field with curvilinear components will not produce the same result as integrating a vector field with Cartesian components.
Any of many quick examples proves this: integrate the $e_r$ vector field along a curve in polar coordinates and compare the result with the Cartesian integral.<br>
<br>
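Here's a quick numerical sanity check of that claim (a throwaway sketch in Python with numpy; the curve, the field $v = e_r$, and the quarter-circle bounds are my own hypothetical choices):<br>

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoid rule (avoiding numpy version differences around trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Quarter circle r = 1, phi in [0, pi/2]; arclength element ds = r dphi = dphi.
phi = np.linspace(0.0, np.pi / 2, 20001)

# Naive recipe: treat the polar component v^r = 1 as a scalar and
# "factor out" e_r.  The magnitude is just the arclength, pi/2.
naive = trapezoid(np.ones_like(phi), phi)

# Correct recipe: express e_r(phi) = (cos phi, sin phi) in the fixed
# Cartesian basis and integrate each Cartesian component as a scalar.
vx = trapezoid(np.cos(phi), phi)   # -> sin(pi/2) - sin(0) = 1
vy = trapezoid(np.sin(phi), phi)   # -> cos(0) - cos(pi/2) = 1
correct = float(np.hypot(vx, vy))

print(naive, correct)   # ~1.5708 vs ~1.4142: not the same vector magnitude
```

The 'factor out $e_r$' recipe returns magnitude $\pi/2$, while the correct Cartesian-component integral returns magnitude $\sqrt{2}$ - they disagree, as promised.<br>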
What do we do when we integrate a vector field? We start with:<br>
<br>
$\int v(x) dV$<br>
<br>
Then we break it down into Cartesian components:<br>
<br>
$\int e_I(x) v^I(x) dV$<br>
<br>
Then we tell ourselves that the Cartesian basis is constant throughout the integral - and not dependent on any integrated variables - and therefore we can factor it out:<br>
<br>
$\int e_I(x) v^I(x) dV = \int e_I v^I(x) dV = e_I \int v^I(x) dV$<br>
<br>
And that is your typical vector field integral recipe.<br>
<br>
But what if we're dealing with a non-Cartesian coordinate basis? We break down our vector field:<br>
<br>
$\int e_\mu(x) v^\mu(x) dV$<br>
<br>
But here the typical error is to factor out the basis, just as was done in the Cartesian case.
But the basis depends on the coordinate, so you can't do that:
<br>
$\int e_\mu(x) v^\mu(x) dV \ne e_\mu(x) \int v^\mu(x) dV$<br>
<br>
This statement doesn't even make sense.
You just factored one of your integration variables out of the integral, inside the parameter of the basis.
You can only do this if the basis doesn't depend on the integration parameter, and while this is true for Cartesian, it isn't for polar or just about any other choice.
<br><br>
So what's the remedy?
For some reason everyone thinks the remedy is to multiply by the Jacobian. Go ahead and try that and see how far it gets you. No, what's the real remedy?
The concept of integrating over a tangent space that varies across a volume doesn't intuitively make sense, precisely because the tangent space depends on the integration variable. We have to remove this dependency somehow.
We have to pick a fixed tangent space to evaluate the vector integral in, then transport each vector inside the integral onto that tangent space.
Once we do this, the basis is no longer dependent on the integral.
Will the integral result be skewed if the fixed basis is not orthonormal, such as picking the polar basis at a point where $r \ne 1$? Nah, because the fixed tangent space is a function of a fixed coordinate, not of the integrated coordinates.
<br>
So, using the parallel-propagator equation for transporting vector components, we get:<br>
<br>
$\int v(x) dV$<br>
$= \int e_\mu(x) v^\mu(x) dV$<br>
$= \int e_\mu(x) {P^{-1}(x,x_0)^\mu}_\alpha {P(x,x_0)^\alpha}_\nu v^\nu(x) dV$<br>
$= \int e_\alpha(x_0) {P(x,x_0)^\alpha}_\mu v^\mu(x) dV$<br>
...and now the basis is independent of the integral, so we can factor it out.<br>
$= e_\alpha(x_0) \int {P(x,x_0)^\alpha}_\mu v^\mu(x) dV$<br>
<br>
Tada! Now we have an equation for our integral whose components are with respect to a fixed tangent space $e_\alpha(x_0)$, and the equation doesn't use any background Cartesian basis components whatsoever.<br>
<br>
So from there if we want we can convert the equation into our Cartesian basis:<br>
<br>
$= e_I \cdot {e_\alpha}^I(x_0) \int {P(x,x_0)^\alpha}_\mu v^\mu(x) dV$<br>
<br>
Hopefully the example cases below agree with this, especially in the event that our basis isn't normalized, where the integrated fixed-basis vector components might be some factor of the integrated Cartesian-basis vector components.
I guess in that case the ratio would still be constant (since it is dependent on $x_0$, which is independent of the integral), and since integration is linear then we should be fine.<br>
<br>
<br>
<hr>
<br>
Now so far I have been speaking about the integral abstractly, but let's look at specific details of the integral.
<br>
What did they teach you in vector calculus class? Integrating a scalar field along a curve equals:<br>
<br>
$\int_C f(x(t)) |\frac{\partial x}{\partial t}| dt$<br>
<br>
Let $f : \mathcal{M} \rightarrow \mathbb{R}$ be our function mapping points on the manifold to real values.<br>
$x \in \mathbb{R}^n$ is a tuple in our coordinate chart domain.<br>
<br>
Of course this can be made more aesthetic by reparameterizing the curve by its arclength: $x(s) := x(t(s))$ for $s(t) = \int_{t_L}^{t} |\frac{\partial x(\tau)}{\partial \tau}| d\tau$.<br>
<br>
If we arclength-parameterize it up front then the integral simplifies:<br>
<br>
$\int_C f(x(s)) ds$<br>
<br>
...or just shorthand...<br>
<br>
$\int_C f(x) ds$<br>
<br>
...though this shorthand hides in the notation whether our curve is parameterized by arclength or by some other arbitrary parameter - we have to infer it is arclength from the name of the differential $ds$.
I never did like how vector calculus uses the same function notation to denote evaluating a function with respect to a variable and to denote reparameterizing a function with respect to a variable - not from the very first time I learned the material.<br>
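As a sketch of how the arclength reparameterization works in practice, here's a numerical version in Python. The curve is my own hypothetical choice: the unit circle traversed non-uniformly as $x(t) = (\cos t^2, \sin t^2)$, so analytically $s(t) = t^2$.<br>

```python
import numpy as np

# Hypothetical curve: the unit circle traversed non-uniformly,
# x(t) = (cos t^2, sin t^2), t in [0, sqrt(pi/2)].
# Speed |dx/dt| = 2t, so analytically s(t) = t^2.
t = np.linspace(0.0, np.sqrt(np.pi / 2), 20001)
x = np.cos(t**2)
y = np.sin(t**2)

# Numerical speed, then cumulative arclength s(t) = int_0^t |dx/dtau| dtau.
speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
ds = 0.5 * (speed[1:] + speed[:-1]) * np.diff(t)
s = np.concatenate([[0.0], np.cumsum(ds)])

print(s[-1])   # total arclength of the quarter circle, ~ pi/2
```

The cumulative integral of the speed recovers $s(t) = t^2$ up to discretization error, and evaluating the curve at equal steps of $s$ instead of $t$ gives the arclength parameterization.<br>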
<br>
Why am I bringing up scalar fields now? Because integrating our Cartesian-component vector is equivalent to integrating the scalar values of the Cartesian components separately, so long as that Cartesian tangent space isn't moving throughout the integral (and it is fixed, because it is our global background Cartesian tangent space).<br>
<br>
So now let's look at a vector field integrated over a curve:<br>
<br>
$\int_C v(x) ds$<br>
<br>
This can be evaluated if we just integrate every component with respect to the global Cartesian tangent space:<br>
<br>
$\int_C v(x) ds$<br>
$= e_I \int_C v^I(x) ds$<br>
$= \left[\begin{matrix}
e_\hat{1} & \cdots & e_\hat{n}
\end{matrix}\right]
\int_C
\left[\begin{matrix}
v^\hat{1}(x) \\ \vdots \\ v^\hat{n}(x)
\end{matrix}\right]
|\frac{\partial x}{\partial t}|
dt$<br>
$= \left[\begin{matrix}
e_\hat{1} & \cdots & e_\hat{n}
\end{matrix}\right]
\left[\begin{matrix}
\int_C v^\hat{1}(x) |\frac{\partial x}{\partial t}| dt \\
\vdots \\
\int_C v^\hat{n}(x) |\frac{\partial x}{\partial t}| dt
\end{matrix}\right]$<br>
<br>
Alright, looks good. In terms of real-world applications: if we want to integrate the force acting on a particle as it travels along a curve, this is the equation we go to for help.<br>
<br>
But what if we don't want to evaluate the vector components in Cartesian? What if we want to keep them in polar or cylindrical or spherical coordinates?<br>
<br>
Then we just recall our useful 'parallel propagator' for help.<br>
<br>
$\int_C v(x) ds$<br>
$= e_\alpha(x_0) \int_C {P(x, x_0)^\alpha}_\mu v^\mu(x) ds$<br>
$= \left[\begin{matrix}
e_1(x_0) & \cdots & e_n(x_0)
\end{matrix}\right]
\int_C
\left[\begin{matrix}
{P(x, x_0)^1}_1 & \cdots & {P(x, x_0)^1}_n \\
\vdots & & \vdots \\
{P(x, x_0)^n}_1 & \cdots & {P(x, x_0)^n}_n \\
\end{matrix}\right]
\cdot
\left[\begin{matrix}
v^1(x) \\ \vdots \\ v^n(x)
\end{matrix}\right]
|\frac{\partial x}{\partial t}|
dt$<br>
$= \left[\begin{matrix}
e_1(x_0) & \cdots & e_n(x_0)
\end{matrix}\right]
\int_C
\left[\begin{matrix}
{P(x, x_0)^1}_\mu v^\mu(x) \\
\vdots \\
{P(x, x_0)^n}_\mu v^\mu(x)
\end{matrix}\right]
|\frac{\partial x}{\partial t}|
dt$<br>
<br>
And now that we are evaluating our components in a tangent space that is not dependent on the integral, we can assert that the integral of the vector is the vector of the integral:<br>
<br>
$= \left[\begin{matrix}
e_1(x_0) & \cdots & e_n(x_0)
\end{matrix}\right]
\left[\begin{matrix}
\int_C {P(x, x_0)^1}_\mu v^\mu(x) |\frac{\partial x}{\partial t}| dt \\
\vdots \\
\int_C {P(x, x_0)^n}_\mu v^\mu(x) |\frac{\partial x}{\partial t}| dt
\end{matrix}\right]
$<br>
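We can check this formula in a case where the propagator is known in closed form: in the flat plane with the orthonormal polar basis, parallel transport keeps Cartesian components fixed, so ${P(x, x_0)^\alpha}_\mu$ is just the rotation by $\phi - \phi_0$. A numerical sketch (my own assumed setup: $x_0$ at $\phi_0 = 0$, and $v = e_r$ on the unit quarter circle):<br>

```python
import numpy as np

# Assumed setup (flat plane, orthonormal polar basis e_r, e_phi):
# parallel transport keeps Cartesian components fixed, so the propagator
# taking components in the basis at angle phi to the basis at phi0 is
# the rotation matrix P = R(phi - phi0).
phi0 = 0.0
phi = np.linspace(0.0, np.pi / 2, 20001)   # unit quarter circle, ds = dphi

def P(p):
    """Propagator from the basis at angle p to the fixed basis at phi0."""
    d = p - phi0
    return np.array([[np.cos(d), -np.sin(d)],
                     [np.sin(d),  np.cos(d)]])

# Polar components of the field v = e_r are constant: (v^r, v^phi) = (1, 0).
v = np.array([1.0, 0.0])

# Trapezoid-rule accumulation of P(x, x_0) v(x) ds over the curve.
integrand = np.stack([P(p) @ v for p in phi])              # shape (N, 2)
comp = np.sum(0.5 * (integrand[1:] + integrand[:-1])
              * np.diff(phi)[:, None], axis=0)

# At phi0 = 0 the fixed basis coincides with Cartesian x, y, so this
# should match the Cartesian-component integral (1, 1).
print(comp)
```

Since the fixed basis at $\phi_0 = 0$ coincides with the Cartesian basis, the fixed-basis components $(1, 1)$ agree with the Cartesian-component integral, as the derivation predicts.<br>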
<br>
<br>
<hr>
<br>
Alright, what happens when you want to raise the stakes and integrate over a volume in the manifold coordinate space:<br>
$\int_{x^1 = x^1_L}^{x^1 = x^1_R}
\int_{x^2 = x^2_L}^{x^2 = x^2_R}
v \cdot \left| \det \frac{\partial (x^\hat{1}, x^\hat{2})}{\partial (x^1, x^2)} \right|
dx^2
dx^1
$<br>
<br>
Of course I'm including the chart Jacobian determinant:<br>
<br>
$J = \det\left(\frac{\partial (x^\hat{1}, x^\hat{2})}{\partial (x^1, x^2)}\right)$<br>
<br>
In the event that we are using a coordinate basis, we can also say $|J| = |\det({e_\mu}^I)|$, since ${e_\mu}^I$ is then exactly the chart Jacobian matrix.<br>
<br>
If you don't know what your chart is then you can still derive the Jacobian determinant from the metric associated with the coordinate basis.
At this point I have to start making a distinction between those who work in a coordinate basis and those who work in a non-coordinate (typically orthonormal) basis.
All of my previous indexing and tensors have been with regard to a linear transform of the coordinate basis, potentially handling any non-coordinate basis.
Now I am going to introduce a specific definition of the respective coordinate basis metric and coordinate tensor components.<br>
<br>
$e_\bar\mu = \partial_\bar{\mu} u^I (x) = $ the coordinate basis, even if you are dealing with a non-coordinate orthonormal basis.
If you are dealing with polar coordinates in a coordinate basis then your basis would be
$e_\bar{r} = \partial_\bar{r} u^I = \left[\begin{matrix} \cos\phi \\ \sin\phi \end{matrix}\right],
e_\bar{\phi} = \partial_\bar{\phi} u^I = \left[\begin{matrix} -r \sin\phi \\ r \cos\phi \end{matrix}\right]$,
while your orthonormal non-coordinate basis would be
$e_r = \left[\begin{matrix} \cos\phi \\ \sin\phi \end{matrix}\right] = \partial_\bar{r},
e_\phi = \left[\begin{matrix} -\sin\phi \\ \cos\phi \end{matrix}\right] = \frac{1}{r} \partial_\bar{\phi}$.
Equivalently in spherical, $e_r = \partial_\bar{r}, e_\theta = \frac{1}{r} \partial_\bar\theta, e_\phi = \frac{1}{r \sin\theta} \partial_\bar\phi$.
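These polar basis vectors can be spot-checked symbolically by differentiating the chart (a sympy sketch; nothing here beyond the definitions above):<br>

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)

# Chart u^I(r, phi): polar coordinates into the Cartesian background.
u = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])

# Coordinate basis vectors: the columns of the chart Jacobian.
e_r_bar = u.diff(r)      # [cos(phi), sin(phi)]
e_phi_bar = u.diff(phi)  # [-r sin(phi), r cos(phi)]

# Lengths: |e_rbar| = 1 already, |e_phibar| = r, so the orthonormal
# non-coordinate basis is e_r = e_rbar and e_phi = (1/r) e_phibar.
len_r = sp.sqrt(sp.trigsimp(e_r_bar.dot(e_r_bar)))
len_phi = sp.sqrt(sp.trigsimp(e_phi_bar.dot(e_phi_bar)))
print(len_r, len_phi)   # 1 r
```

The $1/r$ normalization of $e_\phi$ falls out of the length of the coordinate basis vector, matching $e_\phi = \frac{1}{r}\partial_\bar\phi$ above.<br>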
<br><br>
Note that in my notation, since the coordinate basis is denoted with a bar, partials only make sense written as $\partial_\bar\mu$.
If you are dealing with a non-coordinate partial then you must re-transform it to non-coordinate components: $e_\mu = {e_\mu}^\bar\mu \partial_\bar\mu$.
Then notice that I have just gone from using two-point tensors (those with curvilinear component indexing $\alpha-\omega$ vs those with global Cartesian component indexing A-Z)
to three-point tensors (adding those with coordinate-basis curvilinear components $\bar\alpha-\bar\omega$).
True to the previous definition of $e$, it represents a transformation between coordinate spaces, where the 'to' index is first and the 'from' index is second, so ${e_\mu}^\bar\mu$
denotes the transformation from your coordinate basis to your non-coordinate basis.
TODO wait that's not true, my standard before was always coord-first, noncoord-second, but that's not compatible with 3-point tensors. So think through how well to-first from-second will work.
TODO TODO then again, look at my Differential Geometry notes, I think they all might be written as to-first, from-second.
<br>
Also take note that I am making no assumptions on your non-coordinate basis throughout these notes. Most people choose a non-coordinate basis in order to orthonormalize it,
and this brings with it the benefit that the metric is constant ($g_{\mu\nu} = \eta_{\mu\nu}$): its partial derivatives are zero, raising and lowering indices are identical operations, and the connections depend purely on the commutation.
This is a nice simplification to enjoy, but I am not making this assumption.
If you want to use a non-coordinate basis that is not constant - like a conformal rescaling ${e_\mu}^{\bar\mu} = \frac{1}{(g)^{1/n}} \partial_\bar\mu$ - then knock yourself out.
<br>
So now to define our coordinate metric using coordinate components:<br>
$\bar{g}_{\bar{\mu}\bar{\nu}} = e_\bar\mu \cdot e_\bar\nu = \partial_\bar\mu \cdot \partial_\bar\nu = \partial_\bar\mu u^I(x) \cdot \partial_\bar\nu u^J(x) \eta_{IJ}$<br>
<br>
Let's go back to our arbitrary, non-coordinate metric. What is its determinant in terms of its transform from a Cartesian basis?<br>
$g_{\mu\nu} = {e_\mu}^I {e_\nu}^J \eta_{IJ}$<br>
$\det(g_{\mu\nu}) = \det({e_\mu}^I {e_\nu}^J \eta_{IJ}) = \det({e_\mu}^I)^2 \cdot \det(\eta_{IJ})$<br>
$g = e^2 \cdot \sigma$<br>
$|g| = |e|^2$<br>
Where $\sigma = \det(\eta_{IJ})$ is the determinant of the metric signature: +1 for Riemannian, -1 for Lorentzian.<br>
<br>
(Should volume forms always be positive? I know it's related to handedness of the wedge basis one-form that come with it.)<br>
<br>
Now what's the determinant of our coordinate metric?<br>
$\bar{g}_{\bar{\mu}\bar{\nu}} = {e_\bar{\mu}}^I {e_\bar{\nu}}^J \eta_{IJ}$<br>
$\det (\bar{g}_{\bar{\mu}\bar{\nu}}) = \det({e_\bar{\mu}}^I {e_\bar{\nu}}^J \eta_{IJ}) = \det({e_\bar{\mu}}^I)^2 \cdot \sigma
= \det\left(\frac{\partial (x^\hat{1}, x^\hat{2})}{\partial (x^1, x^2)}\right)^2 \cdot \sigma
$<br>
That looks like our Jacobian-of-coordinate-chart definition - which is also the volume element in our integral:
$\bar{g} = J^2 \cdot \sigma$<br>
$|\bar{g}| = |J|^2$<br>
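A quick symbolic check of $|\bar{g}| = |J|^2$ for the polar chart (sympy; Riemannian signature, so $\eta = $ identity and $\sigma = 1$):<br>

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)

# Polar chart and its Jacobian with respect to the coordinates (r, phi).
u = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])
Jmat = u.jacobian([r, phi])
J = sp.simplify(Jmat.det())        # chart Jacobian determinant

# Coordinate metric g_bar_{mu nu} = e_mubar^I e_nubar^J eta_IJ = Jmat^T Jmat.
g_bar = sp.simplify(Jmat.T * Jmat)
det_g = sp.simplify(g_bar.det())

print(J, g_bar, det_g)   # J = r, g_bar = diag(1, r^2), det = r^2 = J^2
```

So for polar coordinates the coordinate metric comes out to $\mathrm{diag}(1, r^2)$ and its determinant $r^2$ is indeed the square of the Jacobian $J = r$.<br>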
<br>
<br>
So now we have the necessary definitions to write our vector field integral as:<br>
$\int_{x^1 = x^1_L}^{x^1 = x^1_R}
\int_{x^2 = x^2_L}^{x^2 = x^2_R}
v(x) \cdot J(x)
dx^2
dx^1
$<br>
$= e_\alpha(x_0) \int_{x^1 = x^1_L}^{x^1 = x^1_R}
\int_{x^2 = x^2_L}^{x^2 = x^2_R}
{P(x, x_0)^\alpha}_\nu v^\nu(x) |J(x)|
dx^2
dx^1
$<br>
<br>
<br>
<hr>
<br>
Finite volume hyperbolic conservation laws.<br>
<br>
2004 LeVeque "Finite Volume Methods for Hyperbolic Problems" doesn't seem to deal with curvilinear vector components very often. He does in section 18.9, but only in the symmetric case, and even then only showing how to scale terms by the volume element in order to get rid of the source term that otherwise emerges when switching from partial to covariant derivatives.
In section 23 he talks about curvilinear grids, but doesn't seem to mention curvilinear vector components. I barely see mention of a Jacobian in there, which even a scalar field needs.
<br><br>
2009 Trangenstein "Numeric Simulation of Hyperbolic Conservation Laws" does go through the gory detail of integrating the Euler ideal gas equations over a cell of a curvilinear grid - calculates correct source terms - but does not consider parallel transport. So his $v^r$ in one cell won't necessarily match his $v^r$ in the next cell, and I don't think he discusses this subtlety (verify plz).
<br><br>
It looks like 2011 Bonnement et al "Finite Volume Method in Curvilinear Coordinates for Hyperbolic Conservation Laws" approaches this. They represent things as tensors, and they include covariant derivatives.
But they fall short of approaching the concept of parallel propagators.
They do approximate the transport as a linear transform, which means their method will work just fine in the absence of Riemann curvature, if your covariant derivatives commute (and therefore your order of integration of coordinates across your surface commutes).
I think they can also handle a non-orthonormal basis, since in polar coordinates the propagator's exp map of the $1/r$ connection terms just ends up being rescaling terms between the tangent basis vector magnitudes.
Here's a challenge: find a zero-Riemann-curvature manifold where simply projecting from one basis to the next doesn't work.
I suspect anything with extrinsic curvature.
Yup: you can't just reproject the normal component away - that would lead to a constant diminishing of the vector along the surface, unless you renormalized the vectors manually ... in the end, a lot of hoops to jump through while still having to consider the accuracy of your method.
<br><br>
Let's see if we can describe things in a more general form that works in all those cases: GL tangent-space basis, extrinsic curvature (connections normal to the surface), and Riemann curvature (commutation of covariant derivatives).
<br><br>
These are written in the form of:<br>
<br>
$\frac{\partial}{\partial t} u(x) + \frac{\partial}{\partial x^i} F^i(x) = S$<br>
<br>
That looks fine for a Cartesian manifold, but what if we are dealing with a curvilinear manifold? Then our divergence needs extra terms, which can be factored into the derivative if we replace our partial derivative with a covariant derivative:<br>
<br>
$\frac{\partial}{\partial t} u(x) + \nabla_i F^i(x) = S$<br>
<br>
Of course this PDE notation neglects the fact that finite-volume integration deals with a volume, and therefore the equation must be integrated over a volume.
So let's include the notation for our volume integral.
Let's assume the volume is bounded by flat past and future spacelike (constant-time) hypersurfaces, i.e., we are using a global time,
and every point at the starting time of our cell shares the same time,
and every point at the end time of our cell shares the same time.<br>
<br>
Side note: if we consider conservation over a region of spacetime $\nabla_\mu F^\mu$, and simply consider the state variable u to be the flux in the time direction (remember, $E = mc^2$) then we can start from an even-more-unified perspective.
Hold onto that thought and maybe later I'll rewrite things to deal with arbitrarily-shaped spacetime control volumes, so we can deal with moving grids, etc.<br>
<br>
Also take note that this example is only considering the covariant derivatives to be spatial.
Our spacetime basis, connection, and chart all do not vary with regards to time. Time is treated like a separate variable.
We have constructed what is the equivalent of a spacetime manifold and chart where there is no connection in the time direction (extrinsic curvature), and where the time coordinate has no variation as you vary the spatial coordinate, so that the timelike hypersurface normal points directly along the time axis.
<br>
<br>
$\int_{V \times \Delta t} \left( \frac{\partial}{\partial t} u(x) + \nabla_i F^i(x) \right) dV = \int_{V \times \Delta t} S dV$<br>
<br>
And expanding the volume element introduces the Jacobian determinant. Notice at this point we start mixing our non-coordinate tensor components with coordinate integral variables. Pay close attention. Don't cross the streams.<br>
<br>
$
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\left( \frac{\partial}{\partial t} u(x) + \nabla_i F^i(x) \right)
J(x)
dx^n ... dx^1 dt
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
S(x) \cdot J(x)
dx^n ... dx^1 dt
$<br>
<br>
Now, once again, if you are integrating a scalar - or if you are integrating the Cartesian components of your vector as if they are scalars - this is where you pat yourself on the back because you assume you are done. Not so fast. We're talking about integrating vectors here.
Let's generalize things a bit further now. What if we are dealing with a (p, q) tensor? Let's add on some indexes:<br>
<br>
$
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\left( \frac{\partial}{\partial t}
{u^{i_1 ... i_p}}_{j_1 .. j_q}(x)
+ \nabla_k {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
\right)
J(x)
dx^n ... dx^1 dt
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x)
J(x)
dx^n ... dx^1 dt
$<br>
<br>
But remember from our introduction, we can't just integrate these or our basis will be moving along with our coordinates. So let's introduce our parallel propagator, and let's propagate to our cell center $x_C$.
Mind you, if you propagate to a point with our global Cartesian basis (or if you equivalently define a single point's basis to be global) then you can simply reproduce the equation of our integral using global Cartesian basis components.<br>
<br>
$
\otimes_{k=1}^p e_{l_k}(x_C)
\otimes_{k=1}^q e^{m_k}(x_C)
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
{P(x,x_C)^{l_1}}_{i_1} \cdot ...
\cdot {P(x,x_C)^{l_p}}_{i_p}
\cdot {P(x_C,x)^{j_1}}_{m_1} \cdot ...
\cdot {P(x_C,x)^{j_q}}_{m_q}
\left( \frac{\partial}{\partial t}
{u^{i_1 ... i_p}}_{j_1 .. j_q}(x)
+ \nabla_k {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
\right)
J(x)
dx^n ... dx^1 dt
\\
=
\otimes_{k=1}^p e_{l_k}(x_C)
\otimes_{k=1}^q e^{m_k}(x_C)
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
{P(x,x_C)^{l_1}}_{i_1} \cdot ...
\cdot {P(x,x_C)^{l_p}}_{i_p}
\cdot {P(x_C,x)^{j_1}}_{m_1} \cdot ...
\cdot {P(x_C,x)^{j_q}}_{m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x)
J(x)
dx^n ... dx^1 dt
$<br>
<br>
From here on out, true to much differential geometry literature, I am going to omit our $\otimes_{k=1}^p e_{l_k}(x_C) \otimes_{k=1}^q e^{m_k}(x_C)$ basis.
But don't forget that this particular basis at $x_C$ is the basis being used.
The whole central point of this worksheet is to illustrate the importance of parallel transporting our vector to one particular basis, which for our example we have chosen to be at $x_C$.<br>
<br>
I'm also going to combine all those parallel-propagators with a giant outer product into one, using my generalized index notation at the end of the 'parallel propagators' worksheet.
Note the standard: contravariant components are propagated as $v'^\mu(y) = {P(x,y)^\mu}_\nu v^\nu(x)$ while covariant components are propagated as $w'_\nu(y) = {P(y,x)^\mu}_\nu w_\mu(x)$.<br>
<br>
<br>
$
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left( \frac{\partial}{\partial t}
{u^{i_1 ... i_p}}_{j_1 .. j_q}(x)
+ \nabla_k {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
\right)
J(x)
dx^n ... dx^1 dt
\\
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x)
J(x)
dx^n ... dx^1 dt
$<br>
<br>
I'm sure there's someone out there who has an opinion on whether the flux divergence index should go first or last, and I'm sure it has something to do with the one-form-meets-vector rule of matching the first vector into the first form slot. I'll think about this more later.
Also, this is a good example of why the original Ricci index notation came up with the idea of index groups: $I = i_1 ... i_p$. I have already made use of capital letters to denote my global Cartesian tangent space, so I have already broken with that convention (more in line with the tetrad / Einstein-Cartan folks' convention, fwiw).<br>
<br>
Now here a few people might be tempted to substitute $\frac{1}{J} e_i (J F^i)$ for $\nabla_i F^i$. This is absolutely correct - it is called the Voss-Weyl identity - but it is only true for vector fluxes, and therefore only for scalar conservation law equations. As soon as we try to integrate a vector field, we need a bi-vector to represent the flux, and then our covariant derivative starts to add extra terms that the Voss-Weyl form doesn't consider (I think, unless Wiki explained the Voss-Weyl equation incorrectly).<br>
<br>
If we do use the scalar $u$ and vector $F$ and do use the Voss-Weyl identity, then you will notice that the Jacobian determinant cancels very conveniently.<br>
<br>
Let's see those connection terms:<br>
<br>
$
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\\
\left( \frac{\partial}{\partial t}
{u^{i_1 ... i_p}}_{j_1 .. j_q}(x)
+ e_k( {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ {F^{m i_1 ... i_p}}_{j_1 ... j_q}(x) {\Gamma^k}_{km}(x)
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1 dt
\\
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x) J(x)
dx^n ... dx^1 dt
$<br>
<br>
Also notice at this point how I'm keeping the derivative index as the 2nd index of my connection coefficient - just as I did in the 'parallel propagators' worksheet.
This is no accident. A few of you might be tempted to swap it interchangeably with the 3rd index, as you can safely do when using the Levi-Civita connection in a coordinate basis.
However, most Mechanical Engineers reading this are probably used to representing vectors in a non-coordinate orthonormalized basis, and therefore cannot enjoy that benefit of the Levi-Civita connection.
Notice that, by my choice of conventions, the parallel propagator and this covariant derivative equation will work fine with whichever you choose to use.<br>
<br>
Alright, back to that Voss-Weyl statement. Let's explore that in more detail:<br>
<br>
$\nabla_i F^i = e_i (F^i) + F^k {\Gamma^i}_{ik}$<br>
<br>
I bet you would like to replace the trace of that with the metric determinant gradient, wouldn't you? Well you can't. Not yet. Back to the symmetry of the 2nd and 3rd indexes. The identity you're thinking of is this one:<br>
<br>
${\Gamma^i}_{ki} = e_k (\ln \sqrt{|g|})$<br>
<br>
Also take note that, if you're dealing with a non-coordinate basis, then that $g$ isn't your friend. It's the non-coordinate metric $g$, which chances are was chosen to be the identity - but I'm not making assumptions - so least of all should we assume it is the coordinate metric $\bar{g}$.<br>
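For a coordinate basis (zero commutation, Levi-Civita connection), the identity ${\Gamma^i}_{ki} = e_k(\ln \sqrt{|g|})$ is easy to spot-check symbolically. A sympy sketch for the polar coordinate metric $\bar{g} = \mathrm{diag}(1, r^2)$:<br>

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
coords = [r, phi]

# Coordinate-basis polar metric and its inverse.
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

def Gamma(i, j, k):
    """Levi-Civita Gamma^i_{jk} = 1/2 g^{il} (g_{lj,k} + g_{lk,j} - g_{jk,l})."""
    return sp.Rational(1, 2) * sum(
        ginv[i, l] * (g[l, j].diff(coords[k])
                      + g[l, k].diff(coords[j])
                      - g[j, k].diff(coords[l]))
        for l in range(2))

# Compare the trace Gamma^i_{ki} with e_k(ln sqrt(|g|)) for each k.
sqrtg = sp.sqrt(g.det())
for k in range(2):
    trace = sp.simplify(sum(Gamma(i, k, i) for i in range(2)))
    grad = sp.simplify(sp.log(sqrtg).diff(coords[k]))
    print(k, trace, grad)   # k=0: both 1/r; k=1: both 0
```

Both sides come out to $1/r$ in the $r$ direction and $0$ in the $\phi$ direction, as the identity predicts for this metric.<br>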
<br>
So first we have to introduce some commutation:<br>
<br>
${\Gamma^i}_{ki} = e_k (\ln \sqrt{|g|})$<br>
${\Gamma^i}_{jk} = {\Gamma^i}_{kj} + {c_{jk}}^i$<br>
so we get:<br>
${\Gamma^i}_{ik} = {\Gamma^i}_{ki} + {c_{ik}}^i$<br>
${\Gamma^i}_{ik} = e_k (\ln \sqrt{|g|}) + {c_{ik}}^i$<br>
<br>
And in the presence of commutation we get this:<br>
<br>
$\nabla_i F^i = e_i (F^i) + F^k (e_k (\ln \sqrt{|g|}) + {c_{ik}}^i)$<br>
<br>
Also note that, in the commutation realm, most often the basis is orthonormalized. In that case your metric determinant $\sqrt{|g|}$ is constant, so its gradient $e_k(\sqrt{|g|})$ goes away, and I assume the trace of the commutation takes on the role of the value that the gradient of the coordinate-basis metric determinant would give you.<br>
<br>
So if you want to take it one step further and write that covariant divergence using the coordinate metric, we can pull another trick out of our hat:<br>
<br>
${\Gamma^i}_{ki} = {e^i}_{\bar j} e_k({e_i}^{\bar j}) + e_k(\ln\sqrt{|\bar{g}|})$<br>
${\Gamma^i}_{ik} = e_k(\ln\sqrt{|\bar{g}|}) + {e^i}_{\bar j} e_k({e_i}^{\bar j}) + {c_{ik}}^i$<br>
$\nabla_i F^i = e_i (F^i) + F^k (e_k (\ln \sqrt{|\bar{g}|}) + {e^i}_{\bar j} e_k({e_i}^{\bar j}) + {c_{ik}}^i)$<br>
$\nabla_i F^i = e_i (F^i) + F^k (e_k (\ln J) + e_i({e_k}^\bar{k}) {e^i}_\bar{k})$<br>
<br>
If you want to go from non-coordinate metric determinant gradient to coordinate metric determinant gradient (the volume element), then we can make use of this.<br>
<br>
$\nabla_i F^i = e_i (F^i) + F^i \frac{1}{J} e_i (J) + F^k e_i({e_k}^\bar{k}) {e^i}_\bar{k}$<br>
$\nabla_i F^i = \frac{1}{J} e_i (J F^i) + F^k e_i({e_k}^\bar{k}) {e^i}_\bar{k}$<br>
you can keep going further and stumble upon<br>
$\nabla_i F^i = \frac{1}{J} \partial_\bar{i} (J {e_i}^\bar{i} F^i)$<br>
which is just the coordinate basis representation, but with tensor components transformed from non-coordinate to coordinate basis.<br>
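As a sanity check of that last coordinate-basis form, here's a sympy sketch for polar coordinates with orthonormal components $F^{\hat{r}}, F^{\hat\phi}$ (so ${e_r}^\bar{r} = 1$, ${e_\phi}^\bar\phi = 1/r$, $J = r$; the component names are my own); it should reproduce the textbook polar divergence:<br>

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
Fr = sp.Function('Fr')(r, phi)   # orthonormal component F^rhat
Fp = sp.Function('Fp')(r, phi)   # orthonormal component F^phihat

J = r   # polar chart Jacobian determinant

# Transform to coordinate-basis components: e_r = d/drbar has e_r^rbar = 1,
# e_phi = (1/r) d/dphibar has e_phi^phibar = 1/r.
F_rbar = Fr
F_phibar = Fp / r

# The coordinate-basis form (1/J) d_ibar (J e_i^ibar F^i):
div = sp.simplify((sp.diff(J * F_rbar, r) + sp.diff(J * F_phibar, phi)) / J)

# Textbook divergence in orthonormal polar components:
textbook = sp.diff(r * Fr, r) / r + sp.diff(Fp, phi) / r

print(sp.simplify(div - textbook))   # 0
```

The difference simplifies to zero, so the transformed coordinate-basis expression agrees with the familiar $\frac{1}{r}\partial_r(r F^{\hat{r}}) + \frac{1}{r}\partial_\phi F^{\hat\phi}$.<br>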
<br>
So we have some risks and rewards in choosing our basis. If we choose a coordinate basis then we get the volume element easily, and no commutation arises. If we choose a non-coordinate then we have to deal with commutation, but an orthonormal basis means the metric determinant gradient is zero.<br>
<br>
The equation is starting to get unruly. Time to start making assumptions and cancelling terms.<br>
<br>
Next assumption is that we have a static, unmoving grid. This means that the time derivatives of the basis, Jacobian, etc. are all zero.<br>
<br>
$
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
J(x)
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\int_{t=t_L}^{t=t_R}
\left( \frac{\partial}{\partial t}
{u^{i_1 ... i_p}}_{j_1 ... j_q}(x)
\right)
dt
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\int_{t=t_L}^{t=t_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
e_k ( {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ {F^{m i_1 ... i_p}}_{j_1 ... j_q}(x) {\Gamma^k}_{km}(x)
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1 dt
\\
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x) J(x)
dx^n ... dx^1 dt
$<br>
<br>
$
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x)
\left(
{u^{i_1 ... i_p}}_{j_1 ... j_q}(x, t_R)
- {u^{i_1 ... i_p}}_{j_1 ... j_q}(x, t_L)
\right)
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\int_{t=t_L}^{t=t_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
e_k ( {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ {F^{m i_1 ... i_p}}_{j_1 ... j_q}(x) {\Gamma^k}_{km}(x)
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1 dt
\\
=
\int_{t=t_L}^{t=t_R}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x) J(x)
dx^n ... dx^1 dt
$<br>
<br>
I'll also assume the flux, source, and connections (and therefore propagators) are all constant throughout the cell wrt time, for the duration of $(t_L, t_R)$, so we can factor out the $\int_{t_L}^{t_R} dt = \Delta t$:<br>
<br>
$
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x)
\left(
{u^{i_1 ... i_p}}_{j_1 ... j_q}(x, t_R)
- {u^{i_1 ... i_p}}_{j_1 ... j_q}(x, t_L)
\right)
dx^n ... dx^1
\\
+
\Delta t
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
e_k ( {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ {F^{m i_1 ... i_p}}_{j_1 ... j_q}(x) {\Gamma^k}_{km}(x)
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1
\\
=
\Delta t
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
{S^{i_1 ... i_p}}_{j_1 ... j_q}(x) J(x)
dx^n ... dx^1
$<br>
<br>
At this point we have two options: we can assume $u$ is constant throughout the cell and factor it out, leaving the volume form to integrate on its own, or we can assume the cell value is equal to the integral of the state throughout the cell times its volume form.
I suppose we have a third option of simply considering the cell value to be the integral of $u$ throughout the cell without its volume, but then we would have to figure out how to split it apart from the volume form integral, which we can't easily write out in a general case - we would have to calculate it uniquely for each choice of grid.<br>
<br>
Let's go down the rabbit hole of assuming $u$ and $S$ are piecewise constant throughout the cell volume, so we can factor them out and integrate the $J$ on its own.
We can't make this assumption for $F$, though; we need to assume it is piecewise constant throughout the surface of flux that it represents.
To boot, when we start to deal with the added connections for non-scalar integrated variables, we will have to also deal with the cell-centered flux values.
Let's also introduce the coordinate $x_C$ as the center of the cell, midway between $x_L$ and $x_R$.
Let's also define our cell volume as $\mathcal{V}(x_C) = \int_V J(x) dV$.
Notice that, if we don't treat our cell as piecewise constant, then we have to consider our parallel propagator's influence - in addition to the volume element - on the state and the source terms.
<br>
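For concreteness, here is what that cell volume works out to on a polar grid cell (the cell bounds here are illustrative values of my own, not anything prescribed above):<br>

```python
# Worked example of the cell volume V(x_C) = integral of J over the cell,
# on a polar grid where J = r. Bounds are made up for illustration.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
rL, rR = sp.Integer(1), sp.Integer(2)    # radial cell bounds
thL, thR = sp.Integer(0), sp.pi / 4      # angular cell bounds

# V = int_rL^rR int_thL^thR r dtheta dr = (rR^2 - rL^2)/2 * (thR - thL)
V = sp.integrate(r, (th, thL, thR), (r, rL, rR))
assert sp.simplify(V - 3*sp.pi/8) == 0
```

Note that $\mathcal{V}$ is not simply $\Pi_j \Delta x^j$ unless $J$ is constant; the Jacobian weighting is what makes it a physical volume rather than a coordinate volume.<br>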
TODO: or we can just define the cell state $u$ and source $S$ values as the integral across the region, divided by the cell volume. I think this is the same math in the end.
<br>
$
\left(
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
- {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
\right) \mathcal{V}(x_C)
\\
+ \Delta t
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
e_k ( {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ {F^{m i_1 ... i_p}}_{j_1 ... j_q}(x) {\Gamma^k}_{km}(x)
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1
\\
=
\Delta t
\cdot {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C) \cdot \mathcal{V}(x_C)
$<br>
<br>
Now let's substitute our Voss-Weyl identity:<br>
<br>
$\left(
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
- {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
\right) \mathcal{V}(x_C)
\\
+ \Delta t
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
\frac{1}{J(x)} \partial_\bar{k} ( J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ \Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
J(x)
dx^n ... dx^1
\\
=
\Delta t
\cdot {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C) \cdot \mathcal{V}(x_C)
$<br>
<br>
And now let's rearrange this to be with respect to $u(t_R)$ (and cancel those J's):<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
\partial_\bar{k} ( J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
+ J(x) \left(
\Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
\right)
dx^n ... dx^1
\\
+ {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C)
\right)$<br>
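As a sanity check of this form: for a scalar field ($p = q = 0$) the propagators are identity and both connection sums are empty, so the update collapses to the familiar finite-volume balance<br>
<br>
$
u(x_C, t_R)
= u(x_C, t_L)
+ \Delta t \left(
- \frac{1}{\mathcal{V}(x_C)}
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\partial_\bar{k} ( J(x) {e_k}^\bar{k}(x) F^k(x) )
dx^n ... dx^1
+ S(x_C)
\right)$<br>
<br>
so everything beyond the first integral below is the extra price of carrying tensor indices.<br>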
<br>
Now let's try to use the fundamental theorem of calculus:<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\left(
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\partial_\bar{k} ( J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x) )
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
J(x) \left(
\Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
\right)
dx^n ... dx^1
\right)
\\
+ {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C)
\right)$<br>
<br>
Alright, we can't make use of the fundamental theorem of calculus with those propagators in the way. Time to use the Leibniz integration rule.<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\left(
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\partial_\bar{k}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
\right)
dx^n ... dx^1
\\
-
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\partial_\bar{k}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\right)
J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\left(
J(x) \left(
\Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
\right)
dx^n ... dx^1
\right)
\\
+ {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C)
\right)$<br>
<br>
NOW we can use the fundamental theorem of calculus.<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\left(
\int_{x^1_L}^{x^1_R}
\overset{-\{x^k\}}{...}
\int_{x^n_L}^{x^n_R}
\left(
P(x^k_R,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_R) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_R) {e_k}^\bar{k}(x^k_R) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_R)
-
P(x^k_L,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_L) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_L) {e_k}^\bar{k}(x^k_L) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_L)
\right)
dx^n
\overset{-\{x^k\}}{...}
dx^1
\\
-
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\partial_\bar{k}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\right)
J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x) \left(
\Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
\right)
dx^n ... dx^1
\right)
\\
+ {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C)
\right)$<br>
<br>
Here I use the $x^k_R$ and $x^k_L$ shorthand I mentioned in the parallel propagator worksheet: sometimes it means a scalar value, and sometimes it means the point with the k'th component replaced with that value.<br>
<br>
Once the fundamental theorem of calculus is applied, and since the flux is constant throughout the surface (treat those $x^k_L$'s and $x^k_R$'s as replacing only the k'th component of a point otherwise filled with $x_C$), we now have a definite value for the weighted flux at the left and right faces, which we can factor out of the integral over the other components:<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\left(
\int_{x^1_L}^{x^1_R}
\overset{-\{x^k\}}{...}
\int_{x^n_L}^{x^n_R}
\left(
P(x^k_R,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_R) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_R) {e_k}^\bar{k}(x^k_R) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_R)
-
P(x^k_L,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_L) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_L) {e_k}^\bar{k}(x^k_L) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_L)
\right)
dx^n
\overset{-\{x^k\}}{...}
dx^1
\\
-
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\partial_\bar{k}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
\right)
J(x) {e_k}^\bar{k}(x) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x)
dx^n ... dx^1
\\
+
\int_{x^1_L}^{x^1_R}
...
\int_{x^n_L}^{x^n_R}
\left(
P(x,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x) \left(
\Sigma_{m=1}^p {F^{k i_1 ... m ... i_p}}_{j_1 ... j_q}(x) {\Gamma^{i_m}}_{k m}(x)
- \Sigma_{m=1}^q {F^{k i_1 ... i_p}}_{j_1 ... m ... j_q}(x) {\Gamma^m}_{k j_m}(x)
\right)
\right)
dx^n ... dx^1
\right)
\\
+ {S^{l_1 ... l_p}}_{m_1 ... m_q}(x_C)
\right)$<br>
<br>
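To make the scalar ($p = q = 0$) case of this update concrete, here is a minimal finite-volume sketch on a 1D radial grid with $J = r^2$ (spherical symmetry), so the face weight is $J$ at the face and $\mathcal{V}$ is the integral of $r^2$ over the cell. The grid, advection speed, upwind flux, and boundary handling are all illustrative assumptions of mine, not anything prescribed above:<br>

```python
# Scalar (p = q = 0) case of the final update: propagators are identity and
# the connection sums are empty, so per cell the scheme is
#   u(x_C, t_R) = u(x_C, t_L) - dt/V * (A_R F_R - A_L F_L) + dt * S
import numpy as np

N = 100
r_face = np.linspace(1.0, 2.0, N + 1)   # radial cell faces
rL, rR = r_face[:-1], r_face[1:]
V = (rR**3 - rL**3) / 3.0               # cell volumes: integral of J = r^2
A = r_face**2                           # face weights: J evaluated at each face
r_c = 0.5 * (rL + rR)                   # cell centers x_C

u = np.exp(-((r_c - 1.5) / 0.1)**2)     # made-up initial state
v = 1.0                                 # constant advection speed
S = 0.0                                 # no source
dt = 0.4 * (r_face[1] - r_face[0]) / v  # CFL-limited time step

for _ in range(50):
    # first-order upwind flux F^r = v*u at faces (v > 0: take the left cell)
    F = np.empty(N + 1)
    F[1:] = v * u
    F[0] = v * u[0]                     # crude inflow boundary
    u = u - dt / V * (A[1:] * F[1:] - A[:-1] * F[:-1]) + dt * S
```

This is only the degenerate corner of the formula, but it shows the geometric pieces ($A$ at faces, $\mathcal{V}$ per cell) doing the work that the $J {e_k}^\bar{k}$ surface weighting and $\mathcal{V}(x_C)$ do in the full tensor version.<br>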
<!--
Now we can define a new geometric value for the size of our surface domain in coordinate space:<br>
<br>
$\mathcal{S}_k =
\int_{x^1_L}^{x^1_R}
\overset{-\{x^k\}}{...}
\int_{x^n_L}^{x^n_R}
dx^n
\overset{-\{x^k\}}{...}
dx^1
= \Pi_{j=1 \\ j \ne k}^n (x^j_R - x^j_L)
= \Pi_{j=1 \\ j \ne k}^n (\Delta x^j)
$<br>
<br>
TODO NO WE CAN'T because $P(x^k_R, x_C)$ only has the k'th element replaced. The rest of them vary.<br>
<br>
$
{u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_R)
= {u^{l_1 ... l_p}}_{m_1 ... m_q}(x_C, t_L)
+ \Delta t \left(
\\
- \frac{1}{\mathcal{V}(x_C)}
\left(
\mathcal{S}_k
\left(
P(x^k_R,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_R) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_R) {e_k}^\bar{k}(x^k_R) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_R)
-
P(x^k_L,x_C) {{}^{l_1 ... l_p}}_{i_1 ... i_p} P(x_C,x^k_L) {{}^{j_1 ... j_q}}_{m_1 ... m_q}
J(x^k_L) {e_k}^\bar{k}(x^k_L) {F^{k i_1 ... i_p}}_{j_1 ... j_q}(x^k_L)
\right)