Commit
Merge pull request #49 from gyrdym/ml_linalg-5.6.0
ML prefix removed, equality operator for vector overridden, uniqueRows for matrix added
gyrdym committed Mar 1, 2019
2 parents 4402bf7 + 087eb23 commit 7b280e2
Showing 31 changed files with 492 additions and 345 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -1,6 +1,6 @@
 language: dart
 dart:
-  - "2.1.0"
+  - "2.2.0"
 dart_task:
   - test: --platform vm
   - dartanalyzer: true
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
# Changelog

+## 6.0.0
+- prefix `ML` removed from entities' names
+- Float32x4Vector: equality operator override added
+- Matrix: `uniqueRows` method added

## 5.5.1
- MLMatrix: fixed bug in `setColumn` method when rows cache was not cleared

99 changes: 50 additions & 49 deletions README.md
@@ -56,12 +56,13 @@ computation architecture. Actually, the main purpose of the library - connect su
the pure math. So, the library contains a high performance SIMD vector class, based on
[Float32x4](https://api.dartlang.org/stable/2.1.0/dart-typed_data/Float32x4-class.html) -
[Float32x4Vector](https://github.com/gyrdym/linalg/blob/master/lib/src/vector/float32x4/float32x4_vector.dart).
-Most of operations in the vector are performed in four "threads". This kind of concurrency is reached by special
+Most of operations in the vector class are performed in four "threads". This kind of concurrency is reached by special
128-bit processor registers, which are used directly by program code. For better understanding of the topic please read
the [article](https://www.dartlang.org/articles/dart-vm/simd). It is also possible to implement [Float64x2](https://api.dartlang.org/stable/2.1.0/dart-typed_data/Float64x2-class.html)-based
-version of the vector using existing codebase, but so far there is no need to do so.
-The class [Float32x4Vector] is hidden from the library's user. You can create an [Float32x4Vector] instance
-via [MLVector] factory (see examples below).
+version of vector using existing codebase, but so far there is no need to do so.
+The class [Float32x4Vector](https://github.com/gyrdym/linalg/blob/master/lib/src/vector/float32x4/float32x4_vector.dart)
+is hidden from the library's users. You can create a [Float32x4Vector](https://github.com/gyrdym/linalg/blob/master/lib/src/vector/float32x4/float32x4_vector.dart)
+instance via [Vector](https://github.com/gyrdym/ml_linalg/blob/master/lib/vector.dart) factory (see examples below).
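The four-lane idea described above can be sketched directly with `dart:typed_data` (a standalone illustration of SIMD lanes, not code from this library):

````Dart
// Standalone sketch: one Float32x4 addition adds four floats in a
// single SIMD operation, which is what gives the vector class its speed.
import 'dart:typed_data';

void main() {
  final a = Float32x4(1.0, 2.0, 3.0, 4.0);
  final b = Float32x4(10.0, 20.0, 30.0, 40.0);
  final sum = a + b; // four lane-wise additions at once
  print([sum.x, sum.y, sum.z, sum.w]); // [11.0, 22.0, 33.0, 44.0]
}
````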

#### Vector operations examples
At the present moment most common vector operations are implemented:
@@ -70,8 +71,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
-final vector2 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector2 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1 + vector2;
print(result.toList()); // [3.0, 5.0, 7.0, 9.0, 11.0]
````
@@ -80,8 +81,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([4.0, 5.0, 6.0, 7.0, 8.0]);
-final vector2 = MLVector.from([2.0, 3.0, 2.0, 3.0, 2.0]);
+final vector1 = Vector.from([4.0, 5.0, 6.0, 7.0, 8.0]);
+final vector2 = Vector.from([2.0, 3.0, 2.0, 3.0, 2.0]);
final result = vector1 - vector2;
print(result.toList()); // [2.0, 2.0, 4.0, 4.0, 6.0]
````
@@ -90,8 +91,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
-final vector2 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector2 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1 * vector2;
print(result.toList()); // [2.0, 6.0, 12.0, 20.0, 30.0]
````
@@ -100,8 +101,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([6.0, 12.0, 24.0, 48.0, 96.0]);
-final vector2 = MLVector.from([3.0, 4.0, 6.0, 8.0, 12.0]);
+final vector1 = Vector.from([6.0, 12.0, 24.0, 48.0, 96.0]);
+final vector2 = Vector.from([3.0, 4.0, 6.0, 8.0, 12.0]);
final result = vector1 / vector2;
print(result.toList()); // [2.0, 3.0, 4.0, 6.0, 8.0]
````
@@ -110,7 +111,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.norm();
print(result); // sqrt(2^2 + 3^2 + 4^2 + 5^2 + 6^2) = sqrt(90) ~~ 9.48
````
@@ -119,7 +120,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.norm(Norm.manhattan);
print(result); // 2 + 3 + 4 + 5 + 6 = 20.0
````
@@ -128,7 +129,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.mean();
print(result); // (2 + 3 + 4 + 5 + 6) / 5 = 4.0
````
@@ -137,7 +138,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.sum();
print(result); // 2 + 3 + 4 + 5 + 6 = 20.0 (equivalent to Manhattan norm)
````
@@ -146,8 +147,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
-final vector2 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector2 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.dot(vector2);
print(result); // 1.0 * 2.0 + 2.0 * 3.0 + 3.0 * 4.0 + 4.0 * 5.0 + 5.0 * 6.0 = 70.0
````
@@ -156,7 +157,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
final scalar = 5.0;
final result = vector1 + scalar;
print(result.toList()); // [6.0, 7.0, 8.0, 9.0, 10.0]
@@ -166,7 +167,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
final scalar = 5.0;
final result = vector1 - scalar;
print(result.toList()); // [-4.0, -3.0, -2.0, -1.0, 0.0]
@@ -176,7 +177,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
final scalar = 5.0;
final result = vector1 * scalar;
print(result.toList()); // [5.0, 10.0, 15.0, 20.0, 25.0]
@@ -186,7 +187,7 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([25.0, 50.0, 75.0, 100.0, 125.0]);
+final vector1 = Vector.from([25.0, 50.0, 75.0, 100.0, 125.0]);
final scalar = 5.0;
final result = vector1.scalarDiv(scalar);
print(result.toList()); // [5.0, 10.0, 15.0, 20.0, 25.0]
@@ -196,8 +197,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
-final vector2 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector2 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.distanceTo(vector2);
print(result); // ~~2.23
````
@@ -206,8 +207,8 @@ At the present moment most common vector operations are implemented:
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector1 = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
-final vector2 = MLVector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
+final vector1 = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector2 = Vector.from([2.0, 3.0, 4.0, 5.0, 6.0]);
final result = vector1.distanceTo(vector2, Norm.manhattan);
print(result); // 5.0
````
@@ -219,7 +220,7 @@ Performs mapping from one vector to another in efficient way (using simd computa
````Dart
import 'package:ml_linalg/linalg.dart';
-final vector = MLVector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
+final vector = Vector.from([1.0, 2.0, 3.0, 4.0, 5.0]);
final result = vector.fastMap<Float32x4>((Float32x4 element, int offsetStart, int offsetEnd) {
// offsetStart - start index for the current vectorized element, e.g. if `element` is second in the inner collection,
// the offsetStart will be 4 (because Float32x4 contains 4 elements)
@@ -240,12 +241,12 @@ Also, a class for matrix is available. It is based on Float32x4 and Float32x4Vec
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix1 = MLMatrix.from([
+final matrix1 = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
  [9.0, .0, -2.0, -3.0],
]);
-final matrix2 = MLMatrix.from([
+final matrix2 = Matrix.from([
[10.0, 20.0, 30.0, 40.0],
[-5.0, 16.0, 2.0, 18.0],
[2.0, -1.0, -2.0, -7.0],
@@ -262,7 +263,7 @@ print(matrix1 + matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, .0, -2.0, -3.0],
@@ -279,12 +280,12 @@ print(matrix + 7);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
  [9.0, .0, -2.0, -3.0],
]);
-final vector = MLVector.from([2.0, 3.0, 4.0, 5.0]);
+final vector = Vector.from([2.0, 3.0, 4.0, 5.0]);
final result = matrix * vector;
print(result);
// a vector-column [
@@ -298,12 +299,12 @@ print(matrix + 7);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix1 = MLMatrix.from([
+final matrix1 = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
  [9.0, .0, -2.0, -3.0],
]);
-final matrix2 = MLMatrix.from([
+final matrix2 = Matrix.from([
[1.0, 2.0],
[5.0, 6.0],
[9.0, .0],
@@ -322,7 +323,7 @@ print(matrix + 7);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, .0, -2.0, -3.0],
@@ -339,12 +340,12 @@ print(matrix * 3);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix1 = MLMatrix.from([
+final matrix1 = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
  [9.0, .0, -2.0, -3.0],
]);
-final matrix2 = MLMatrix.from([
+final matrix2 = Matrix.from([
[10.0, 20.0, 30.0, 40.0],
[-5.0, 16.0, 2.0, 18.0],
[2.0, -1.0, -2.0, -7.0],
@@ -361,7 +362,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, .0, -2.0, -3.0],
@@ -380,7 +381,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
]);
@@ -392,7 +393,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 17.0, 18.0],
[21.0, 22.0, 23.0, 24.0],
@@ -405,11 +406,11 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
]);
-final modifier = MLVector.filled(4, 2.0);
+final modifier = Vector.filled(4, 2.0);
final newMatrix = matrix.rowsMap((row) => row + modifier);
print(newMatrix);
// [
@@ -422,11 +423,11 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
  [1.0, 2.0, 3.0, 4.0],
  [5.0, 6.0, 7.0, 8.0],
]);
-final modifier = MLVector.filled(2, 2.0);
+final modifier = Vector.filled(2, 2.0);
final newMatrix = matrix.columnsMap((column) => column + modifier);
print(newMatrix);
// [
@@ -439,7 +440,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 17.0, 18.0],
[21.0, 22.0, 23.0, 24.0],
@@ -457,7 +458,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 17.0, 18.0],
[21.0, 22.0, 23.0, 24.0],
@@ -472,7 +473,7 @@ print(matrix1 - matrix2);
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 0.0, 18.0],
[21.0, 22.0, -23.0, 24.0],
@@ -490,7 +491,7 @@ Performs mapping from one matrix to another in efficient way (using simd computa
````Dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 0.0, 18.0],
[21.0, 22.0, -23.0, 24.0],
@@ -536,7 +537,7 @@ print(result);
````dart
import 'package:ml_linalg/linalg.dart';
-final matrix = MLMatrix.from([
+final matrix = Matrix.from([
[11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 0.0, 18.0],
[21.0, 22.0, -23.0, 24.0],
