
General relativity #2558

Closed
wants to merge 4 commits into from

Conversation

ozooxo

@ozooxo ozooxo commented Oct 29, 2013

I added some functions for:
(1) Tensor calculations in component form, including (a) the tensor product, (b) the partial derivative, which gives a higher-rank tensor, and (c) Einstein summation for a tensor with dummy indices.
(2) General relativity related functions to calculate the Riemann curvature tensor, the Ricci tensor, and the Ricci scalar.

I include a test file which shows that the calculations of the Ricci scalar on the surface of a sphere and for the Schwarzschild metric are correct.
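The component-form tensor product and Einstein summation described in (1) can nowadays be sketched with SymPy's array utilities (this is an illustration of the same idea, not the code from this PR):

```python
from sympy import Array, tensorproduct, tensorcontraction

# A rank-2 "tensor" and a rank-1 "tensor" given by their components
A = Array([[1, 2], [3, 4]])
v = Array([1, 1])

# Tensor product: the component form of A_{ij} v_k, a rank-3 array
T = tensorproduct(A, v)

# Einstein summation over the dummy index j in A_{ij} v_j:
# contract axes 1 and 2 of the rank-3 product
w = tensorcontraction(T, (1, 2))
# w[i] = sum_j A[i, j] * v[j]
```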

@skirpichev
Contributor

There is already the galgebra module (with a Schwarzschild solution somewhere in examples/). @Krastanov could probably comment on this.

@@ -0,0 +1,25 @@
from sympy.tensor.components import *
from sympy.physics.gr.ricci import *
Contributor

What about putting this file in sympy/physics/gr/tests ?

The code has to be put inside functions starting with test_

@Upabjojr
Contributor

I believe this code should use other sympy modules that already exist. For example sympy.tensor.tensor and sympy.diffgeom. Unfortunately these modules are still unable to perform calculations for GR, and a lot of work has to be done.

In my opinion you should use the tensor objects in sympy.tensor.tensor. The current version uses abstract tensor notation (i.e. indices are abstract and cannot be repeated unless you want a contraction); this representation is basis-independent.

I am writing a PR to add support for ndarray data to the existing tensors in the tensor module. That could help to write code for GR:

#2514

Unfortunately, it depends on numpy, but it is possible to add an ndarray object to sympy to generalize matrices to any dimension.

Have a look at the tensor module documentation:

http://docs.sympy.org/dev/modules/tensor/tensor.html

There is also a diffgeom module in sympy, have a look at its documentation:

http://docs.sympy.org/dev/modules/diffgeom.html

Unfortunately tensors in the tensor module are not yet compatible with the diffgeom module; we need to devise a way to have them interact.

Let me know if you find these points interesting.

@ozooxo
Author

ozooxo commented Oct 29, 2013

@Krastanov: can you explain the design of the sympy.tensor.tensor module further? I formerly planned to build my tensor as a subclass of yours (i.e., to add a new component-form feature to your tensor); however, I found it really hard to understand your structure and to avoid messing everything up.

@Upabjojr
Contributor

sympy.tensor.tensor is complicated.

I think that Krastanov is very busy lately. I will try to explain.

The idea behind it is this:
http://en.wikipedia.org/wiki/Abstract_index_notation

Indices are not integers, they are types, so we have TensorIndexType and TensorIndex classes.

TensorIndexType defines the type of the index (e.g. Lorentz index, Dirac spinor index, color-index in strong interactions, etc...).

TensorIndex creates index symbols referring to a TensorIndexType.

When you define a tensor, you define a TensorHead object (there is a tensorhead function which takes slightly simpler arguments).

TensorHead means tensor without indices. TensorHead has a __call__ method which creates a TensMul object.

Suppose you have a TensorHead, say A. When you call A(i0, i1) you create a TensMul object, which is an object to handle multiplications of tensors, i0, i1 have to be TensorIndex instances. This represents tensor A^{i_0, i_1}.

A TensMul can contain more tensors with indices, just multiply them: A(i0, i1)*B(i2).

Indices with a minus sign are covariant, without a sign they are contravariant:
A(i0, -i1) ===> LaTeX ===> A^{i_0}{}_{i_1}

When you create a TensorHead, you need to specify the index symmetries.

Index contraction: A(i, -i)

Indices can be repeated just once, unless they are contracted: A(i, i) ===> raises an error.

The point here is that there is still no way to handle tensors as matrices (I mean, with integer indices accessing data at a certain position). That's what I'm trying to fix in the PR I just linked; I'm not sure how much time it will take before it is merged into master. The idea is to add a second notation with square brackets, so you would access a tensor as a matrix by typing: A(i0, i1)[2, 3]
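The objects described above can be created roughly like this. This is a sketch against the modern SymPy API, which differs slightly from the 2013 one (for instance, the old tensorhead helper has since been superseded by the TensorHead constructor):

```python
from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead

# An index type (e.g. a Lorentz index) and some index symbols of that type
Lorentz = TensorIndexType('Lorentz')
i0, i1 = tensor_indices('i0 i1', Lorentz)

# TensorHead: the tensor without indices; calling it attaches indices
A = TensorHead('A', [Lorentz, Lorentz])

expr = A(i0, -i1)        # A^{i0}_{i1}: the minus sign marks a covariant index
contracted = A(i0, -i0)  # index contraction A^{i0}_{i0}: no free indices left
```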

@Upabjojr
Contributor

I formerly planned to build my tensor as a subclass of yours (i.e., to add a new component-form feature to your tensor); however, I found it really hard to understand your structure and to avoid messing everything up.

Have a look at: #2514

It depends on the numpy library to store data.

Have a look at the tests to see how it works.

@ozooxo
Author

ozooxo commented Oct 29, 2013

@Upabjojr I'll think about it a little more and may reply further later. Here are just a few things I want to clarify first.

I saw the class TensorIndexType but feel extremely confused about this concept. A Lorentz index is an index with metric diag(1, -1, ..., -1), and both Dirac spinor indices and color indices are just normal indices (i.e., ones with the trivial metric diag(1, 1, ..., 1), therefore with no covariance or contravariance). The only thing is that Dirac spinors and gauge fields follow special algebraic properties (the Clifford algebra and the su(N) Lie algebra, respectively); it is possible to represent them by some matrices (i.e., the representations of the gamma matrices, etc.), but beyond the fact that the indices help us pick out an element of those matrices (which works exactly the same way as the x, y components of a force vector in Newtonian mechanics), this has nothing to do with their indices.

That's the reason we need a general metric in Riemannian geometry (the Lorentz one is just the special case diag(1, -1, ..., -1)). However, it seems more natural to think of the metric as a rank-2 tensor rather than as a property of the index. That's why I find TensorIndexType quite confusing.

@Upabjojr
Contributor

The tensor module is meant to work in an abstract manner, i.e. when you don't know anything about matrices. The idea is that you can still manipulate tensor expressions even if the metric and tensor components are unknown to you. It's an abstract layer above what tensors are really used for in physics. For example, we merged into the master branch an algorithm to compute traces of gamma matrices without specifying either the metric matrix or the values of the gamma matrices. You can find it in sympy.physics.hep.gamma_matrices of the development branch.
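For instance, that trace algorithm can be used today without ever choosing a representation. A sketch against the current sympy.physics.hep API (the index names are my own choice):

```python
from sympy.physics.hep.gamma_matrices import GammaMatrix as G, LorentzIndex, gamma_trace
from sympy.tensor.tensor import tensor_indices

mu, nu = tensor_indices('mu nu', LorentzIndex)

# Tr(gamma^mu gamma^nu) = 4 g^{mu nu}, derived purely from the Clifford
# algebra relations -- no explicit 4x4 matrices or metric values involved
t = gamma_trace(G(mu) * G(nu))
```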

I further plan to write a pattern-matching algorithm in the future which would allow complex tensor operations to be performed without knowledge of the array components or the metric matrix. In the end, that's what textbooks usually do: you handle tensor expressions by their properties; you don't calculate a matrix product by hand on every contraction, right?

TensorIndexType simply classifies the types of the indices. It's somewhat like types in C/C++: all types are basically bytes in memory storing data, but they differ in the way you want to use them. An int is not a float, although they are both 4-byte variables. In the same way, TensorIndexType classifies indices, so you avoid contracting unrelated indices (contracting a Lorentz and a spinor index raises an error, and even a pair of indices with compatible metrics raises an error if they refer to different TensorIndexTypes).

The metric values are considered only when you descend from this layer of abstraction and start using data values inside a tensor. That's what my PR does, but the rules provided by the abstract layer and the index types still have to hold (so we can raise an error if someone tries to do something that makes no sense, even if the matrix product would allow it).

Regarding normal indices: it's true, Dirac spinor indices and color indices behave the same whether used as covariant or contravariant indices. But we don't know that when using tensors abstractly (only after you define the metric). Yet the gamma matrix algorithm works well.

@ozooxo
Author

ozooxo commented Oct 29, 2013

Thanks for the explanation of TensorIndexType. From my understanding, its role here is quite like types in Haskell: they mainly help you avoid making mistakes (as you said, avoid contracting unrelated indices), but on the computer side they are basically the same thing. Do I understand it correctly?

For the other part, I still feel quite confused. Gamma matrices have two kinds of indices: one has the dimension of spacetime, the other has dimension 2^floor(d/2), where d is the spacetime dimension -- in ordinary 4d spacetime, both are 4. Normally when we write \gamma_\mu^{ab}, \mu is the spacetime index and a, b are the Clifford algebra indices. I don't know which one you are talking about. But WE KNOW THE METRICS for both of them: the spacetime index has metric diag(1, -1, -1, -1) (because quantum field theory assumes special relativity, i.e., flat spacetime; things are more complicated in string theory, but people use gamma matrices under assumptions), and the Clifford algebra indices just have diag(1, 1, 1, 1). You can do abstract calculations without knowing the detailed form of the metric, but then your metric (which looks like g_\mu\nu) is just a normal rank-2 tensor on the manifold you care about (and \mu, \nu have the dimension of that manifold -- they may be spacetime indices, Dirac spinor indices, or color indices, but those are just special cases, and much simpler ones than the general case).

@ozooxo
Author

ozooxo commented Oct 29, 2013

By the way, by "you handle tensor expressions by their properties, you don't calculate by hand a matrix product on every contraction", I think you mean the typical gamma matrix calculations in QFT textbooks (you keep using rules such as {\gamma_\mu, \gamma_\nu} = 2 g_{\mu\nu}). You can do this abstractly because you are following the definition of the Clifford algebra, and at that level you don't have Dirac spinor indices at all. Only when you define your representation (using 4x4 matrices with complex elements to represent the Clifford algebra structure -- what we call the gamma matrices) do you involve Dirac spinor indices. And you define the metric at the same time (the trivial one, diag(1, 1, 1, 1)).

@Upabjojr
Contributor

Thanks for the explanation of TensorIndexType. From my understanding, its role here is quite like types in Haskell: they mainly help you avoid making mistakes (as you said, avoid contracting unrelated indices), but on the computer side they are basically the same thing. Do I understand it correctly?

Yes, that's one of the reasons. So you can avoid contracting a Lorentz index and a spinor index in a gamma matrix tensor. Besides, TensorIndexType also specifies the metric tensor, the dimension, and the delta tensor. Maybe in the future it could be used to link indices to the representations of Lie groups or other algebraic structures.

For the other part, I still feel quite confused. Gamma matrices have two kinds of indices: one has the dimension of spacetime, the other has dimension 2^floor(d/2) -- in ordinary 4d spacetime, both are 4. Normally when we write \gamma_\mu^{ab}, \mu is the spacetime index and a, b are the Clifford algebra indices. I don't know which one you are talking about. But WE KNOW THE METRICS for both of them: the spacetime index has metric diag(1, -1, -1, -1) (because quantum field theory assumes special relativity, i.e., flat spacetime; things are more complicated in string theory, but people use gamma matrices under assumptions), and the Clifford algebra indices just have diag(1, 1, 1, 1).

Gamma matrices have been merged into sympy as rank-1 tensors representing matrices. The reason for this was that it is tedious to write out all the spinor indices, but I am correcting this problem in this PR:

#2467

Here you can define auto-matrix indices, i.e. tensor indices which get filled in automatically if missing, so you can easily handle matrix multiplication. There is another problem: the dimension is stored in Lorentz instead of DiracSpinor (as previously there were no such indices); that has to be corrected.

By the way, by "you handle tensor expressions by their properties, you don't calculate by hand a matrix product on every contraction", I think you mean the typical gamma matrix calculations in QFT textbooks (you keep using rules such as {\gamma_\mu, \gamma_\nu} = 2 g_{\mu\nu}). You can do this abstractly because you are following the definition of the Clifford algebra, and at that level you don't have Dirac spinor indices at all.

Dirac spinor indices will be added. Rule-based tensor manipulation can be extended even beyond gamma matrix algebra. In any case, before trying rule-based approaches it is better to write a simple pattern-matching function for tensors; otherwise the rules involve lots of for-loops.

I think the best that can be done now is to define GR tensors as subclasses of TensorHead. Besides that, we still need partial and covariant derivative objects. I think they should be defined as abstract tensors first; then, after ndarray support is available, proceed to perform calculations.

In the physics module, there is already a DifferentialOperator object, unfortunately it is not indexed.

@ozooxo
Author

ozooxo commented Nov 1, 2013

@Upabjojr I'll wait to see your work.


Examples
========
>>> from sympy import symbols, sin, cos
Member

insert a blank line before this line

@ozooxo
Author

ozooxo commented Nov 2, 2013

Thanks @smichr, I'll correct them when redesigning/reorganizing after seeing @Upabjojr's work.

@jrioux
Member

jrioux commented Nov 20, 2013

SymPy Bot Summary: 🔴 Failed after merging ozooxo/general_relativity (91ad9c6) into master (ced4df3).
@ozooxo: Please fix the test failures.
🔴 PyPy 2.0.0-beta-1; 2.7.3-final-42: fail
🔴 Python 2.7.2-final-0: fail
🔴 Python 3.2.1-final-0: fail
Sphinx 1.1.3: pass
Doc Coverage: decreased
36.859% of functions have doctests (compared to 36.733% in master)
41.755% of functions are imported into Sphinx (compared to 41.916% in master)

@ozooxo
Author

ozooxo commented Nov 20, 2013

@jrioux Thanks. I'll fix the simple code problems (like __init__ to __new__, docs format, etc) later today or tomorrow. Hopefully it can pass the test by then.

@Upabjojr
Contributor

Hi @ozooxo , my PRs concerning data on tensors have so far been merged into master. If you want, you can try to use the tensor module. Unfortunately it is still not possible to have partial and covariant derivatives of tensor expressions. I am currently working on a code clean up of the tensor module, after that I'll probably go on and define something like TensorOperator, but this will take a while.

If I were to approach GR, I would try to insert objects from the diffgeom module into a tensor expression, and see if there's a way to easily apply partial/covariant derivatives.

@ozooxo
Author

ozooxo commented Nov 21, 2013

I tried to fix the formatting problems. All the single/double blank lines should be right now. Every class is a subclass of Basic now. I changed __init__ to __new__ for the Index class, and it works. However, when I do the same thing for Tensor (and also its subclass Metric), something goes wrong, and the error message ends with

  File "/home/beta/Documents/C/Science/sympy/sympy/core/function.py", line 1526, in diff
    return Derivative(f, *symbols, **kwargs)
  File "/home/beta/Documents/C/Science/sympy/sympy/core/function.py", line 925, in __new__
    Can't differentiate wrt the variable: %s, %s''' % (v, count)))
ValueError: 
Can't differentiate wrt the variable: 0, 1

and I completely don't know why. Can anybody tell me what the possible problem is?
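Not knowing the actual code, a likely cause (an assumption on my part): that ValueError is what diff/Derivative raises when handed something that is not a Symbol -- e.g. an integer position 0 that ended up where a coordinate Symbol was expected, which can easily happen when __new__ sympifies or reorders the class args. A minimal illustration of the distinction:

```python
from sympy import symbols, diff

x, y = symbols('x y')
coords = [x, y]   # keep the coordinate Symbols themselves, not positions

# Correct: look the Symbol up first, then differentiate with respect to it
result = diff(x**2 * y, coords[0])

# A class whose args store the *position* 0 instead of the Symbol x would
# eventually hand the integer 0 to Derivative, producing exactly
# "Can't differentiate wrt the variable: 0, 1"
```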

@ozooxo
Author

ozooxo commented Nov 21, 2013

I also moved test_ricci.py to the right position.

@jrioux
Member

jrioux commented Nov 25, 2013

SymPy Bot Summary: 🔴 Failed after merging ozooxo/general_relativity (32d79f0) into master (518f83b).
@ozooxo: Please fix the test failures.
🔴 PyPy 2.0.0-beta-1; 2.7.3-final-42: fail
🔴 Python 2.7.2-final-0: fail
🔴 Python 3.2.1-final-0: fail
Sphinx 1.1.3: pass
Doc Coverage: decreased
36.851% of functions have doctests (compared to 36.725% in master)
41.728% of functions are imported into Sphinx (compared to 41.888% in master)

@jrioux
Member

jrioux commented Nov 26, 2013

SymPy Bot Summary: 🔴 Failed after merging ozooxo/general_relativity (19cba75) into master (5e822ef).
@ozooxo: Please fix the test failures.
🔴 PyPy 2.0.0-beta-1; 2.7.3-final-42: fail
🔴 Python 2.7.2-final-0: fail
🔴 Python 3.2.1-final-0: fail
Sphinx 1.1.3: pass
Doc Coverage: decreased
36.872% of functions have doctests (compared to 36.721% in master)
41.714% of functions are imported into Sphinx (compared to 41.902% in master)

@Upabjojr
Contributor

Take a look at this: http://krastanov.files.wordpress.com/2012/07/schwarzschild.pdf

Apparently there's a bug in the diffgeom module, and some functions are not documented. I missed that Riemann and Ricci tensor component calculations have already been included in SymPy since 2012.
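Indeed, the diffgeom machinery can reproduce this PR's sphere check. A sketch against the current API (the coordinate names and the CoordSystem call are my own choices, and the indexing assumes metric_to_Ricci_components returns an array):

```python
from sympy import symbols, sin, simplify
from sympy.diffgeom import (Manifold, Patch, CoordSystem, TensorProduct,
                            metric_to_Ricci_components)

# Unit 2-sphere in (theta, phi) coordinates
m = Manifold('S2', 2)
patch = Patch('P', m)
theta, phi = symbols('theta phi', positive=True)
sph = CoordSystem('sph', patch, [theta, phi])
th, ph = sph.coord_functions()
dth, dph = sph.base_oneforms()

# Round metric: ds^2 = dtheta^2 + sin(theta)^2 dphi^2
metric = TensorProduct(dth, dth) + sin(th)**2 * TensorProduct(dph, dph)

ricci = metric_to_Ricci_components(metric)
# Contract with the (diagonal) inverse metric to get the Ricci scalar,
# which should come out to 2 for the unit sphere
R = simplify(ricci[0, 0] + ricci[1, 1] / sin(th)**2)
```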

@czgdp1807
Member

@Upabjojr IMHO, could general relativity be a project idea for GSoC in the future? Can we include this in the ideas list?

@Upabjojr
Contributor

General relativity is a very broad subject. In hindsight, my suggestion is to avoid trying to implement general relativity directly; it's better to extend the current SymPy modules so that such computations become easier.

Relativity has a lot to do with multilinear algebra, and SymPy has much more robust support for that than it did 5 years ago.

@czgdp1807
Member

Closing this PR per the above comment. Though, if there is scope for continuing this work, please let us know in the comments below or, if you can, feel free to re-open this PR. Thanks for the contributions.

@czgdp1807 czgdp1807 closed this Aug 8, 2020