transitioned DirectSum utilities
chakravala committed Aug 25, 2020
1 parent f33861f commit c90c0ee
Showing 10 changed files with 968 additions and 183 deletions.
2 changes: 2 additions & 0 deletions appveyor.yml → .appveyor.yml
@@ -4,6 +4,8 @@ environment:
- julia_version: 1.1
- julia_version: 1.2
- julia_version: 1.3
- julia_version: 1.4
- julia_version: 1.5
- julia_version: nightly

platform:
2 changes: 2 additions & 0 deletions .travis.yml
@@ -9,6 +9,8 @@ julia:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- nightly
matrix:
allow_failures:
10 changes: 4 additions & 6 deletions Project.toml
@@ -1,19 +1,17 @@
name = "Leibniz"
uuid = "edad4870-8a01-11e9-2d75-8f02e448fc59"
authors = ["Michael Reed"]
version = "0.0.5"
version = "0.1.0"

[deps]
AbstractTensors = "a8e43f4a-99b7-5565-8bf1-0165161caaea"
DirectSum = "22fd7b30-a8c0-5bf2-aabe-97783860d07c"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"

[compat]
julia = "1"
DirectSum = "0"
AbstractTensors = "0"
StaticArrays = "0"
Combinatorics = "1"
AbstractTensors = "0.5"

[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
23 changes: 18 additions & 5 deletions README.md
@@ -1,6 +1,6 @@
# Leibniz.jl

*Operator algebras for multivariate differentiable Julia expressions*
*Bit entanglements for tensor algebra derivations and hypergraphs*

[![Build Status](https://travis-ci.org/chakravala/Leibniz.jl.svg?branch=master)](https://travis-ci.org/chakravala/Leibniz.jl)
[![Build status](https://ci.appveyor.com/api/projects/status/xb03dyfvhni6vrj5?svg=true)](https://ci.appveyor.com/project/chakravala/leibniz-jl)
@@ -9,11 +9,26 @@
[![Gitter](https://badges.gitter.im/Grassmann-jl/community.svg)](https://gitter.im/Grassmann-jl/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Liberapay patrons](https://img.shields.io/liberapay/patrons/chakravala.svg)](https://liberapay.com/chakravala)

Compatibility of [Grassmann.jl](https://github.com/chakravala/Grassmann.jl) for multivariable differential operators and tensor field operations.
Although intended for compatibility use with the [Grassmann.jl](https://github.com/chakravala/Grassmann.jl) package for multivariable differential operators and tensor field operations, `Leibniz` can be used independently.

### Extended dual index printing with full alphanumeric characters (#62)

To give users a commonly shared and readable indexing convention, some print methods are provided:
```julia
julia> Leibniz.printindices(stdout,Leibniz.indices(UInt(2^62-1)),false,"v")
v₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

julia> Leibniz.printindices(stdout,Leibniz.indices(UInt(2^62-1)),false,"w")
w¹²³⁴⁵⁶⁷⁸⁹⁰ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
```
An application of this is in `Grassmann` and `DirectSum`, where dual indexing is used.
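As a rough sketch of how such printing can work (the helper names `indices` and `printindices` here are simplified stand-ins for the package internals, and the character table is copied from the output above):

```julia
# Simplified stand-ins for Leibniz.indices / Leibniz.printindices:
# a bit mask selects which of the 62 alphanumeric index characters to print.
const subchars = collect("₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

indices(b::UInt) = findall(!iszero, digits(b, base=2))  # positions of the set bits

printindices(io::IO, inds, pre::String="v") = print(io, pre, join(subchars[i] for i in inds))

printindices(stdout, indices(UInt(2^62-1)))  # prints the v₁₂₃… line shown above
```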

# Derivation

Generates the tensor algebra of multivariable symmetric Leibniz differentials and, interfacing via `using Reduce, Grassmann`, provides the `∇,Δ` vector field operators, enabling mixed-symmetry tensors on arbitrary multivariate `Grassmann` manifolds.

```julia
julia> using Leibniz, Grassmann
Reduce (Free CSL version, revision 4980), 06-May-19 ...

julia> V = tangent(ℝ^3,4,3)
+++
@@ -37,6 +52,4 @@ julia> ∇, Δ
(∂ₖvₖ, ∂ₖ²v)
```

Generates the tensor algebra of multivariable symmetric Leibniz differentials and interfaces `using Reduce, Grassmann` to provide the `∇,Δ` vector field operators, enabling mixed-symmetry tensors with arbitrary multivariate `Grassmann` manifolds.

This is an initial undocumented pre-release registration for testing with other packages.
241 changes: 85 additions & 156 deletions src/Leibniz.jl
@@ -1,171 +1,90 @@
module Leibniz

# This file is part of Leibniz.jl. It is licensed under the GPL license
# This file is part of Leibniz.jl. It is licensed under the AGPL license
# Leibniz Copyright (C) 2019 Michael Reed

using DirectSum, StaticArrays #, Requires
using LinearAlgebra, AbstractTensors
import Base: *, ^, +, -, /, \, show, zero
import DirectSum: value, V0, mixedmode, pre, diffmode
export Manifold, Differential, Derivation, d, ∂, ∇, Δ
import Base: getindex, convert, @pure, +, *, ∪, ∩, ⊆, ⊇, ==, show, zero
import LinearAlgebra: det, rank

export Differential, Monomial, Derivation, d, ∂, ∇, Δ, @operator
## Manifold{N}

abstract type Operator{V} end #<: TensorAlgebra{V} end
import AbstractTensors: TensorAlgebra, Manifold, TensorGraded, scalar, isscalar, involute
import AbstractTensors: vector, isvector, bivector, isbivector, volume, isvolume, ⋆, mdims
import AbstractTensors: value, valuetype, interop, interform, even, odd, isnull, norm
import AbstractTensors: TupleVector, Values, Variables, FixedVector, SVector, MVector
import AbstractTensors: basis, complementleft, complementlefthodge, unit, involute, clifford
abstract type TensorTerm{V,G} <: TensorGraded{V,G} end

abstract type Polynomial{V,G} <: Operator{V} end
## utilities

+(d::O) where O<:Operator = d
+(r,d::O) where O<:Operator = d+r
include("utilities.jl")

struct Monomial{V,G,D,O,T} <: Polynomial{V,G}
v::T
end
#="""
floatprecision(s)
Monomial{V,G,D}() where {V,G,D} = Monomial{V,G,D}(true)
Monomial{V,G,D}(v::T) where {V,G,D,T} = Monomial{V,G,D,1,T}(v)
Monomial{V,G,D,O}() where {V,G,D,O,T} = Monomial{V,G,D,O}(true)
Monomial{V,G,D,O}(v::T) where {V,G,D,O,T} = Monomial{V,G,D,O,T}(v)

zero(::Monomial) = Monomial{V0,0,0,0}()

value(d::Monomial{V,G,D,T} where {V,G,D}) where T = d.v

sups(O) = O ≠ 1 ? DirectSum.sups[O] : ""

show(io::IO,d::Monomial{V,G,D,O,Bool} where G) where {V,D,O} = print(io,value(d) ? "" : "-",pre[mixedmode(V)>0 ? 4 : 3],[DirectSum.subs[k] for k ∈ DirectSum.shift_indices(V,UInt(D))]...,sups(O))
show(io::IO,d::Monomial{V,G,D,O} where G) where {V,D,O} = print(io,value(d),pre[mixedmode(V)>0 ? 4 : 3],[DirectSum.subs[k] for k ∈ DirectSum.shift_indices(V,UInt(D))]...,sups(O))
show(io::IO,d::Monomial{V,G,D,0,Bool} where {V,G,D}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,G,0,O,Bool} where {V,G,O}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,0,D,O,Bool} where {V,D,O}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,G,D,0} where {V,G,D}) = print(io,value(d))
show(io::IO,d::Monomial{V,G,0} where {V,G}) = print(io,value(d))
show(io::IO,d::Monomial{V,0} where V) = print(io,value(d))
show(io::IO,d::Monomial{V,G,D,UInt(0),Bool} where {V,G,D}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,G,UInt(0),O,Bool} where {V,G,O}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,UInt(0),D,O,Bool} where {V,D,O}) = print(io,value(d) ? 1 : -1)
show(io::IO,d::Monomial{V,G,D,UInt(0)} where {V,G,D}) = print(io,value(d))
show(io::IO,d::Monomial{V,G,UInt(0)} where {V,G}) = print(io,value(d))
show(io::IO,d::Monomial{V,UInt(0)} where V) = print(io,value(d))

indexint(D) = DirectSum.bit2int(DirectSum.indexbits(max(D...),D))

#∂(D::T...) where T<:Integer = Monomial{V0,length(D),indexint(D)}()
#∂(V::S,D::T...) where {S<:Manifold,T<:Integer} = Monomial{V,length(D),indexint(D)}()

*(r,d::Monomial) = d*r
*(d::Monomial{V,G,D,0} where {V,G,D},r) = r
*(d::Monomial{V,G,0,O} where {V,G,O},r) = r
*(d::Monomial{V,0,D,O} where {V,D,O},r) = r
function *(a::Monomial{V,1,D,O1},b::Monomial{V,1,D,O2}) where {V,D,O1,O2}
O = O1+O2
O > diffmode(V) && (return 0)
c = a.v*b.v
iszero(c) ? 0 : Monomial{V,2,D,O1+O2}(c)
end
function *(a::Monomial{V,1,D,1},b::Monomial{V,1,D,1}) where {V,D,O1,O2}
2 > diffmode(V) && (return 0)
c = a.v*b.v
iszero(c) ? 0 : Monomial{V,2,D,2}(c)
end
function *(a::Monomial{V,1,D1,1},b::Monomial{V,1,D2,1}) where {V,D1,D2}
2 > diffmode(V) && (return 0)
c = a.v*b.v
iszero(c) ? 0 : Monomial{V,2,D1|D2}(c)
end
*(a::Monomial{V,G,D,O,Bool},b::I) where {V,G,D,O,I<:Number} = isone(b) ? a : Monomial{V,G,D,O,I}(value(a) ? b : -b)
*(a::Monomial{V,G,D,O,T},b::I) where {V,G,D,O,T,I<:Number} = isone(b) ? a : Monomial{V,G,D,O}(value(a)*b)
+(a::Monomial{V,G,D,O},b::Monomial{V,G,D,O}) where {V,G,D,O} = (c=a.v+b.v; iszero(c) ? 0 : Monomial{V,G,D,O}(c))
-(a::Monomial{V,G,D,O},b::Monomial{V,G,D,O}) where {V,G,D,O} = (c=a.v-b.v; iszero(c) ? 0 : Monomial{V,G,D,O}(c))
#-(d::Monomial{V,G,D,O,Bool}) where {V,G,D,O} = Monomial{V,G,D,O,Bool}(!value(d))
-(d::Monomial{V,G,D,O}) where {V,G,D,O} = Monomial{V,G,D,O}(-value(d))
function ^(d::Monomial{V,G,D,O},o::T) where {V,G,D,O,T<:Integer}
Oo = O*o
GOo = G+Oo
GOo > diffmode(V) && (return 0)
iszero(o) ? 1 : Monomial{V,GOo,D,Oo}(value(d)^o)
end
function ^(d::Monomial{V,G,D,O,Bool},o::T) where {V,G,D,O,T<:Integer}
Oo = O*o
GOo = G+Oo
GOo > diffmode(V) && (return 0)
iszero(o) ? 1 : Monomial{V,GOo,D,Oo}(value(d) ? true : iseven(o))
end
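The products and powers above all share one truncation rule: once the combined differential order exceeds `diffmode(V)`, the result collapses to `0`. A minimal standalone sketch of that rule, with a hypothetical `Mono` type and a plain `CUTOFF` constant in place of `Monomial` and `diffmode`:

```julia
const CUTOFF = 2            # stand-in for diffmode(V)

struct Mono                 # stand-in for Monomial: differential order + coefficient
    order::Int
    coeff::Int
end

function mul(a::Mono, b::Mono)
    o = a.order + b.order
    o > CUTOFF && return 0              # too many derivatives: truncate to zero
    c = a.coeff * b.coeff
    iszero(c) ? 0 : Mono(o, c)
end

mul(Mono(1,2), Mono(1,3))   # Mono(2, 6)
mul(Mono(2,1), Mono(1,1))   # 0, since order 3 > CUTOFF
```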
Set float precision for displaying Float64 coefficients.
struct OperatorExpr{T} <: Operator{T}
expr::T
end
Float coefficients `f` are printed as `@sprintf(s,f)`.
macro operator(expr)
OperatorExpr(expr)
end
If `s == ""` (default), then `@sprintf` is not used.
"""
const floatprecision = ( () -> begin
gs::String = ""
return (tf=gs)->(gs≠tf && (gs=tf); return gs)
end)()
export floatprecision
macro fprintf()
s = floatprecision()
isempty(s) ? :(m.v) : :(Printf.@sprintf($s,m.v))
end=#
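The commented-out `floatprecision` above relies on a closure pattern: an immediately invoked function hides mutable state, and the returned function reads the state when called with no argument and writes it otherwise. A sketch of just that pattern (the name `setting` is hypothetical):

```julia
const setting = (() -> begin
    s::String = ""                           # hidden state, reachable only through the closure
    return (v = s) -> (s ≠ v && (s = v); return s)  # no argument reads, an argument writes
end)()

setting()        # read: ""
setting("%.3f")  # write
setting()        # read: "%.3f"
```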

# symbolic print types

show(io::IO,d::OperatorExpr) = print(io,'(',d.expr,')')

add(d,n) = OperatorExpr(Expr(:call,:+,d,n))

function plus(d::OperatorExpr{T},n) where T
iszero(n) && (return d)
if T == Expr
if d.expr.head == :call
if d.expr.args[1] == :+
return OperatorExpr(Expr(:call,:+,push!(copy(d.expr.args[2:end]),n)...))
elseif d.expr.args[1] == :- && length(d.expr.args) == 2 && d.expr.args[2] == n
return 0
else
return OperatorExpr(Expr(:call,:+,d.expr,n))
end
else
throw(error("Operator expression not implemented"))
end
else
OperatorExpr(d.expr+n)
parval = (Expr,Complex,Rational,TensorAlgebra)

# number fields

const Fields = (Real,Complex)
const Field = Fields[1]
const ExprField = Union{Expr,Symbol}

extend_field(Field=Field) = (global parval = (parval...,Field))

for T ∈ Fields
@eval begin
Base.:(==)(a::T,b::TensorTerm{V,G} where V) where {T<:$T,G} = G==0 ? a==value(b) : 0==a==value(b)
Base.:(==)(a::TensorTerm{V,G} where V,b::T) where {T<:$T,G} = G==0 ? value(a)==b : 0==value(a)==b
end
end

+(d::Monomial,n::T) where T<:Number = iszero(n) ? d : add(d,n)
+(d::Monomial,n::Monomial) = add(d,n)
+(d::OperatorExpr,n) = plus(d,n)
+(d::OperatorExpr,n::O) where O<:Operator = plus(d,n)
-(d::OperatorExpr) = OperatorExpr(Expr(:call,:-,d))

#add(d,n) = OperatorExpr(Expr(:call,:+,d,n))

function times(d::OperatorExpr{T},n) where T
iszero(n) && (return 0)
isone(n) && (return d)
if T == Expr
if d.expr.head == :call
if d.expr.args[1] ∈ (:+,:-)
return OperatorExpr(Expr(:call,:+,(d.expr.args[2:end] .* Ref(n))...))
elseif d.expr.args[1] == :*
(d.expr.args[2]*n)*d.expr.args[3] + d.expr.args[2]*(d.expr.args[3]*n)
elseif d.expr.args[1] == :/
(d.expr.args[2]*n)/d.expr.args[3] - (d.expr.args[2]*(d.expr.args[3]*n))/(d.expr.args[3]^2)
else
return OperatorExpr(Expr(:call,:*,d.expr,n))
end
else
throw(error("Operator expression not implemented"))
end
else
OperatorExpr(d.expr*n)
Base.:(==)(a::TensorTerm,b::TensorTerm) = 0 == value(a) == value(b)

for T ∈ (Fields...,Symbol,Expr)
@eval begin
Base.isapprox(a::S,b::T) where {S<:TensorAlgebra,T<:$T} = Base.isapprox(a,Simplex{Manifold(a)}(b))
Base.isapprox(a::S,b::T) where {S<:$T,T<:TensorAlgebra} = Base.isapprox(b,a)
end
end

*(d::OperatorExpr,n::Monomial) = times(d,n)
*(d::OperatorExpr,n::OperatorExpr) = OperatorExpr(Expr(:call,:*,d,n))
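The `:*` branch of `times` above is the Leibniz product rule in symbolic form: a factor distributes into a product as `(a*n)*b + a*(b*n)`. A reduced sketch on plain `Expr` trees (the name `leibniz_times` is hypothetical, and only `:+` and binary `:*` are handled):

```julia
function leibniz_times(ex, n::Symbol)
    ex isa Expr || return Expr(:call, :*, ex, n)             # leaf: just multiply
    if ex.head == :call && ex.args[1] == :+
        return Expr(:call, :+, map(t -> leibniz_times(t, n), ex.args[2:end])...)
    elseif ex.head == :call && ex.args[1] == :* && length(ex.args) == 3
        a, b = ex.args[2], ex.args[3]                        # product rule branch
        return Expr(:call, :+, Expr(:call, :*, leibniz_times(a, n), b),
                               Expr(:call, :*, a, leibniz_times(b, n)))
    else
        return Expr(:call, :*, ex, n)
    end
end

leibniz_times(:(f * g), :∂)   # :((f * ∂) * g + f * (g * ∂))
```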
*(n::T,d::OperatorExpr) where T<:Number = OperatorExpr(DirectSum.(n,d.expr))
## fundamentals

"""
getbasis(V::Manifold,v)
*(a::Monomial{V,1},b::Monomial{V,1,D,O}) where {V,D,O} = Monomial{V,1,D,O}(a*OperatorExpr(b.v))
*(a::Monomial,b::Monomial{V,G,D,O}) where {V,G,D,O} = Monomial{V,G,D,O}(a*OperatorExpr(b.v))
*(a::Monomial{V,G,D,O},b::OperatorExpr) where {V,G,D,O} = Monomial{V,G,D,O}(value(a)*b.expr)
Fetch a specific `SubManifold{G,V}` element from an optimal `SubAlgebra{V}` selection.
"""
@inline getbasis(V,b) = getbasis(V,UInt(b))

^(d::OperatorExpr,n::T) where T<:Integer = iszero(n) ? 1 : isone(n) ? d : OperatorExpr(Expr(:call,:^,d,n))
Base.one(V::T) where T<:TensorGraded = one(Manifold(V))
Base.zero(V::T) where T<:TensorGraded = zero(Manifold(V))

## generic
@pure g_one(::Type{T}) where T = one(T)
@pure g_zero(::Type{T}) where T = zero(T)

Base.signbit(::O) where O<:Operator = false
Base.abs(d::O) where O<:Operator = d
## Derivation

struct Derivation{T,O}
v::UniformScaling{T}
@@ -174,19 +93,19 @@ end
Derivation{T}(v::UniformScaling{T}) where T = Derivation{T,1}(v)
Derivation(v::UniformScaling{T}) where T = Derivation{T}(v)

show(io::IO,v::Derivation{Bool,O}) where O = print(io,(v.v.λ ? "" : "-"),"∂ₖ",O==1 ? "" : DirectSum.sups[O],"v",isodd(O) ? "ₖ" : "")
show(io::IO,v::Derivation{T,O}) where {T,O} = print(io,v.v.λ,"∂ₖ",O==1 ? "" : DirectSum.sups[O],"v",isodd(O) ? "ₖ" : "")
show(io::IO,v::Derivation{Bool,O}) where O = print(io,(v.v.λ ? "" : "-"),"∂ₖ",O==1 ? "" : AbstractTensors.sups[O],"v",isodd(O) ? "ₖ" : "")
show(io::IO,v::Derivation{T,O}) where {T,O} = print(io,v.v.λ,"∂ₖ",O==1 ? "" : AbstractTensors.sups[O],"v",isodd(O) ? "ₖ" : "")

-(v::Derivation{Bool,O}) where {T,O} = Derivation{Bool,O}(UniformScaling{Bool}(!v.v.λ))
-(v::Derivation{T,O}) where {T,O} = Derivation{T,O}(UniformScaling{T}(-v.v.λ))
Base.:-(v::Derivation{Bool,O}) where {T,O} = Derivation{Bool,O}(UniformScaling{Bool}(!v.v.λ))
Base.:-(v::Derivation{T,O}) where {T,O} = Derivation{T,O}(UniformScaling{T}(-v.v.λ))

function ^(v::Derivation{T,O},n::S) where {T,O,S<:Integer}
function Base.:^(v::Derivation{T,O},n::S) where {T,O,S<:Integer}
x = T<:Bool ? (isodd(n) ? v.v.λ : true ) : v.v.λ^n
t = typeof(x)
Derivation{t,O*n}(UniformScaling{t}(x))
end
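In `Derivation` the operator order lives in the type parameter `O`, so `^` multiplies orders; that is why `Δ = ∇^2` below is an order-2 derivation. A reduced sketch with a hypothetical `Deriv` type:

```julia
using LinearAlgebra

struct Deriv{T,O}                            # O tracks the differential order in the type
    v::UniformScaling{T}
end
order(::Deriv{T,O}) where {T,O} = O

function Base.:^(d::Deriv{T,O}, n::Integer) where {T,O}
    x = d.v.λ^n
    Deriv{typeof(x),O*n}(UniformScaling(x))  # order multiplies: (∂ᵏ)ⁿ has order k*n
end

grad = Deriv{Int,1}(I)   # plays the role of ∇
lap  = grad^2            # plays the role of Δ = ∇^2
order(lap)               # 2
```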

for op ∈ (:+,:-,:*)
for op ∈ (:(Base.:+),:(Base.:-),:(Base.:*))
@eval begin
$op(a::Derivation{A,O},b::Derivation{B,O}) where {A,B,O} = Derivation{promote_type(A,B),O}($op(a.v,b.v))
$op(a::Derivation{A,O},b::B) where {A,B<:Number,O} = Derivation{promote_type(A,B),O}($op(a.v,b))
@@ -196,16 +115,16 @@ end

unitype(::UniformScaling{T}) where T = T

/(a::Derivation{A,O},b::Derivation{B,O}) where {A,B,O} = (x=a.v/b.v; Derivation{unitype(x),O}(x))
/(a::Derivation{A,O},b::B) where {A,B<:Number,O} = (x=a.v/b; Derivation{unitype(x),O}(x))
#/(a::A,b::Derivation{B,O}) where {A<:Number,B,O} = (x=a/b.v; Derivation{typeof(x),O}(x))
\(a::Derivation{A,O},b::Derivation{B,O}) where {A,B,O} = (x=a.v\b.v; Derivation{unitype(x),O}(x))
\(a::A,b::Derivation{B,O}) where {A<:Number,B,O} = (x=a\b.v; Derivation{unitype(x),O}(x))
Base.:/(a::Derivation{A,O},b::Derivation{B,O}) where {A,B,O} = (x=Base.:/(a.v,b.v); Derivation{unitype(x),O}(x))
Base.:/(a::Derivation{A,O},b::B) where {A,B<:Number,O} = (x=Base.:/(a.v,b); Derivation{unitype(x),O}(x))
#Base.:/(a::A,b::Derivation{B,O}) where {A<:Number,B,O} = (x=Base.:/(a,b.v); Derivation{typeof(x),O}(x))
Base.:\(a::Derivation{A,O},b::Derivation{B,O}) where {A,B,O} = (x=a.v\b.v; Derivation{unitype(x),O}(x))
Base.:\(a::A,b::Derivation{B,O}) where {A<:Number,B,O} = (x=a\b.v; Derivation{unitype(x),O}(x))

import AbstractTensors: ∧, ∨
import LinearAlgebra: dot, cross

for op ∈ (:+,:-,:*,:/,:\,:∧,:∨,:dot,:cross)
for op ∈ (:(Base.:+),:(Base.:-),:(Base.:*),:(Base.:/),:(Base.:\),:∧,:∨,:dot,:cross)
@eval begin
$op(a::Derivation,b::B) where B<:TensorAlgebra = $op(Manifold(b)(a),b)
$op(a::A,b::Derivation) where A<:TensorAlgebra = $op(a,Manifold(a)(b))
@@ -216,6 +135,16 @@ const ∇ = Derivation(LinearAlgebra.I)
const Δ = ∇^2

function d end
function ∂ end

include("generic.jl")
include("operations.jl")
include("indices.jl")

bladeindex(cache_limit,one(UInt))
basisindex(cache_limit,one(UInt))

indexbasis(Int((sparse_limit+cache_limit)/2),1)

#=function __init__()
@require Reduce="93e0c654-6965-5f22-aba9-9c1ae6b3c259" include("symbolic.jl")

2 comments on commit c90c0ee

@chakravala (Owner, Author):

@JuliaRegistrator register()

@JuliaRegistrator:

Registration pull request updated: JuliaRegistries/General/20162

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:

git tag -a v0.1.0 -m "<description of version>" c90c0ee5a3dcfd083f90e71df7f20b136cd0b1ef
git push origin v0.1.0
