From bca829a6618c80a24bc089a658a56cf4279511af Mon Sep 17 00:00:00 2001
From: seanmor5

Activation functions.
Activation functions are element-wise, (typically) non-linear
functions called on the output of another layer, such as
a dense layer:
x
|> dense(weight, bias)
|> relu()
Activation functions output the "activation" or how active
a given layer's neurons are in learning a representation
of the data-generating distribution.
Some activations are commonly used as output activations. For
example softmax
is often used as the output in multiclass
classification problems because it returns a categorical
probability distribution:
iex> Axon.Activations.softmax(Nx.tensor([[1, 2, 3]], type: {:f, 32}))
#Nx.Tensor<
  f32[1][3]
  [
    [0.09003057330846786, 0.2447284758090973, 0.6652409434318542]
  ]
>
Other activations such as tanh
or sigmoid
are used because
they have desirable properties, such as keeping the output
tensor constrained within a certain range.
Generally, the choice of activation function is arbitrary;
although some activations work better than others in certain
problems.
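Because the choice is largely interchangeable, swapping activations in an Axon
model is usually a one-line change. A minimal sketch (the layer sizes here are
arbitrary placeholders, not taken from this document):
model =
  Axon.input("features")
  |> Axon.dense(64, activation: :relu)
  |> Axon.dense(10, activation: :softmax)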
@@ -442,26 +442,26 @@
celu(x, opts \\ [])
Examples
iex> Axon.Activations.celu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]))
+#Nx.Tensor<
+ f32[7]
+ [-0.9502129554748535, -0.8646647334098816, -0.6321205496788025, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.celu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}))
+#Nx.Tensor<
+ bf16[2][3]
+ [
+ [-0.62890625, -0.86328125, -0.94921875],
+ [1.0, 2.0, 3.0]
+ ]
+>
Error cases
iex> Axon.Activations.celu(Nx.tensor([0.0, 1.0, 2.0], type: {:f, 32}), alpha: 0.0)
** (ArgumentError) :alpha must be non-zero in CELU activation
@@ -506,20 +506,20 @@
-elu(x, opts \\ [])
Examples
iex> Axon.Activations.elu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]))
+#Nx.Tensor<
+ f32[7]
+ [-0.9502129554748535, -0.8646647334098816, -0.6321205496788025, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.elu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}))
+#Nx.Tensor<
+ bf16[2][3]
+ [
+ [-0.62890625, -0.86328125, -0.94921875],
+ [1.0, 2.0, 3.0]
+ ]
+>
@@ -555,20 +555,20 @@
-exp(x)
Examples
+iex> Axon.Activations.exp(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [0.049787066876888275, 0.1353352814912796, 0.3678794503211975, 1.0, 2.7182817459106445, 7.389056205749512, 20.08553695678711]
+>
+
+iex> Axon.Activations.exp(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.3671875, 0.134765625, 0.049560546875],
+ [2.703125, 7.375, 20.0]
+ ]
+>
gelu(x)
Examples
iex> Axon.Activations.gelu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.0040496885776519775, -0.04550027847290039, -0.15865525603294373, 0.0, 0.8413447141647339, 1.9544997215270996, 2.995950222015381]
+>
+
+iex> Axon.Activations.gelu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.16015625, -0.046875, -0.005859375],
+ [0.83984375, 1.953125, 2.984375]
+ ]
+>
@@ -647,20 +647,20 @@
-hard_sigmoid(x, opts \\ [])
Examples
+iex> Axon.Activations.hard_sigmoid(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [0.0, 0.0, 0.0, 0.20000000298023224, 0.4000000059604645, 0.6000000238418579, 0.800000011920929]
+>
+
+iex> Axon.Activations.hard_sigmoid(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [7.781982421875e-4, 0.0, 0.0],
+ [0.3984375, 0.59765625, 0.796875]
+ ]
+>
hard_silu(x, opts \\ [])
Examples
+iex> Axon.Activations.hard_silu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.0, -0.0, -0.0, 0.0, 0.4000000059604645, 1.2000000476837158, 2.4000000953674316]
+>
+
+iex> Axon.Activations.hard_silu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-7.781982421875e-4, -0.0, -0.0],
+ [0.3984375, 1.1953125, 2.390625]
+ ]
+>
hard_tanh(x)
Examples
-
+iex> Axon.Activations.hard_tanh(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-1.0, -1.0, -1.0, 0.0, 1.0, 1.0, 1.0]
+>
+
+iex> Axon.Activations.hard_tanh(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-1.0, -1.0, -1.0],
+ [1.0, 1.0, 1.0]
+ ]
+>
leaky_relu(x, opts \\ [])
Examples
-
+iex> Axon.Activations.leaky_relu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]), alpha: 0.5)
+#Nx.Tensor<
+ f32[data: 7]
+ [-1.5, -1.0, -0.5, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.leaky_relu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], names: [:batch, :data]), alpha: 0.5)
+#Nx.Tensor<
+ f32[batch: 2][data: 3]
+ [
+ [-0.5, -1.0, -1.5],
+ [1.0, 2.0, 3.0]
+ ]
+>
linear(x)
Examples
-
+iex> Axon.Activations.linear(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.linear(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-1.0, -2.0, -3.0],
+ [1.0, 2.0, 3.0]
+ ]
+>
log_sigmoid(x)
Examples
-
+iex> Axon.Activations.log_sigmoid(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], type: {:f, 32}, names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-3.0485873222351074, -2.1269280910491943, -1.3132617473602295, -0.6931471824645996, -0.3132616877555847, -0.12692801654338837, -0.04858734831213951]
+>
+
+iex> Axon.Activations.log_sigmoid(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-1.3125, -2.125, -3.046875],
+ [-0.3125, -0.1259765625, -0.04833984375]
+ ]
+>
log_softmax(x, opts \\ [])
Examples
-
+iex> Axon.Activations.log_softmax(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], type: {:f, 32}, names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-6.457762718200684, -5.457762718200684, -4.457762718200684, -3.4577627182006836, -2.4577627182006836, -1.4577628374099731, -0.45776283740997314]
+>
+
+iex> Axon.Activations.log_softmax(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.404296875, -1.3984375, -2.390625],
+ [-2.390625, -1.3984375, -0.404296875]
+ ]
+>
log_sumexp(x, opts \\ [])
Examples
-
+iex> Axon.Activations.log_sumexp(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 1]
+ [3.4577627182006836]
+>
+
+iex> Axon.Activations.log_sumexp(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 1]
+ [
+ [-0.59375],
+ [3.390625]
+ ]
+>
mish(x)
Examples
-
+iex> Axon.Activations.mish(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], type: {:f, 32}, names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.14564745128154755, -0.2525014877319336, -0.30340147018432617, 0.0, 0.8650984168052673, 1.9439589977264404, 2.98653507232666]
+>
+
+iex> Axon.Activations.mish(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.30078125, -0.25, -0.1435546875],
+ [0.86328125, 1.9375, 2.96875]
+ ]
+>
relu6(x)
Examples
iex> Axon.Activations.relu6(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]))
+#Nx.Tensor<
+ f32[7]
+ [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.relu6(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.0, 0.0, 0.0],
+ [1.0, 2.0, 3.0]
+ ]
+>
@@ -1099,20 +1099,20 @@
-relu(x)
Examples
+iex> Axon.Activations.relu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0]
+>
+
+iex> Axon.Activations.relu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.0, 0.0, 0.0],
+ [1.0, 2.0, 3.0]
+ ]
+>
selu(x, opts \\ [])
Examples
iex> Axon.Activations.selu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-1.670568823814392, -1.5201665163040161, -1.1113307476043701, 0.0, 1.0507010221481323, 2.1014020442962646, 3.1521029472351074]
+>
+
+iex> Axon.Activations.selu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-1.09375, -1.5078125, -1.6640625],
+ [1.046875, 2.09375, 3.140625]
+ ]
+>
@@ -1202,20 +1202,20 @@
-sigmoid(x)
Examples
+iex> Axon.Activations.sigmoid(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [0.04742587357759476, 0.11920291930437088, 0.2689414322376251, 0.5, 0.7310585975646973, 0.8807970881462097, 0.9525741338729858]
+>
+
+iex> Axon.Activations.sigmoid(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.267578125, 0.119140625, 0.04736328125],
+ [0.73046875, 0.87890625, 0.94921875]
+ ]
+>
silu(x)
Examples
iex> Axon.Activations.silu(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.14227762818336487, -0.23840583860874176, -0.2689414322376251, 0.0, 0.7310585975646973, 1.7615941762924194, 2.857722282409668]
+>
+
+iex> Axon.Activations.silu(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.267578125, -0.23828125, -0.1416015625],
+ [0.73046875, 1.7578125, 2.84375]
+ ]
+>
@@ -1306,22 +1306,22 @@
-softmax(x, opts \\ [])
Examples
+iex> Axon.Activations.softmax(Nx.tensor([[-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]], names: [:batch, :data]))
+#Nx.Tensor<
+ f32[batch: 1][data: 7]
+ [
+ [0.0015683004166930914, 0.004263082519173622, 0.011588259600102901, 0.03150015324354172, 0.08562629669904709, 0.23275642096996307, 0.6326975226402283]
+ ]
+>
+
+iex> Axon.Activations.softmax(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.6640625, 0.2431640625, 0.08935546875],
+ [0.08935546875, 0.2431640625, 0.6640625]
+ ]
+>
softplus(x)
Examples
-
+iex> Axon.Activations.softplus(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [0.04858734831213951, 0.12692801654338837, 0.3132616877555847, 0.6931471824645996, 1.3132617473602295, 2.1269280910491943, 3.0485873222351074]
+>
+
+iex> Axon.Activations.softplus(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [0.3125, 0.1259765625, 0.04833984375],
+ [1.3125, 2.125, 3.046875]
+ ]
+>
softsign(x)
Examples
-
+iex> Axon.Activations.softsign(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.75, -0.6666666865348816, -0.5, 0.0, 0.5, 0.6666666865348816, 0.75]
+>
+
+iex> Axon.Activations.softsign(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.5, -0.6640625, -0.75],
+ [0.5, 0.6640625, 0.75]
+ ]
+>
tanh(x)
Examples
-
+iex> Axon.Activations.tanh(Nx.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], names: [:data]))
+#Nx.Tensor<
+ f32[data: 7]
+ [-0.9950547814369202, -0.9640275835990906, -0.7615941762924194, 0.0, 0.7615941762924194, 0.9640275835990906, 0.9950547814369202]
+>
+
+iex> Axon.Activations.tanh(Nx.tensor([[-1.0, -2.0, -3.0], [1.0, 2.0, 3.0]], type: {:bf, 16}, names: [:batch, :data]))
+#Nx.Tensor<
+ bf16[batch: 2][data: 3]
+ [
+ [-0.7578125, -0.9609375, -0.9921875],
+ [0.7578125, 0.9609375, 0.9921875]
+ ]
+>
as_graph(axon, input_templates, opts \\ [])
Examples
Given an Axon model:
model = Axon.input("input") |> Axon.dense(32)
You can define input templates for each input:
input = Nx.template({1, 16}, :f32)
And then display the execution flow of the model:
Axon.Display.as_graph(model, input, direction: :top_down)
as_table(axon, input_templates)
Examples
model = Axon.input("input") |> Axon.dense(32)
input = Nx.template({1, 16}, :f32)
Axon.Display.as_table(model, input)
small enough to avoid exploding values. The initializers in
this module have a default scale known to work well with
the initialization strategy.
The functions in this module return initialization functions which
take shapes and types and return tensors:
init_fn = Axon.Initializers.zeros()
init_fn.({1, 2}, {:f, 32})
You may use these functions from within defn
or outside.
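In practice these are most often passed to layers rather than called by hand.
A short sketch (the layer size and the particular initializers are illustrative
assumptions, not taken from this document):
model =
  Axon.input("input")
  |> Axon.dense(32, kernel_initializer: :glorot_uniform, bias_initializer: :zeros)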
iex> init_fn = Axon.Initializers.full(1.00)
+iex> out = init_fn.({2, 2}, {:f, 32})
iex> out
+#Nx.Tensor<
+ f32[2][2]
+ [
+ [1.0, 1.0],
+ [1.0, 1.0]
+ ]
+>
iex> init_fn = Axon.Initializers.glorot_normal()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.glorot_normal(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.glorot_uniform()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.glorot_uniform(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.he_normal()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.he_normal(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.he_uniform()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.he_uniform(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.identity()
+iex> out = init_fn.({2, 2}, {:f, 32})
iex> out
+#Nx.Tensor<
+ f32[2][2]
+ [
+ [1.0, 0.0],
+ [0.0, 1.0]
+ ]
+>
iex> init_fn = Axon.Initializers.lecun_normal()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.lecun_normal(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.lecun_uniform()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.lecun_uniform(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.normal()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.normal(mean: 1.0, scale: 1.0)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.ones()
+iex> out = init_fn.({2, 2}, {:f, 32})
iex> out
+#Nx.Tensor<
+ f32[2][2]
+ [
+ [1.0, 1.0],
+ [1.0, 1.0]
+ ]
+>
iex> init_fn = Axon.Initializers.orthogonal()
+iex> t = init_fn.({3, 3}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.type(t)
+{:f, 32}
+iex> Nx.shape(t)
+{3, 3}
+
+iex> init_fn = Axon.Initializers.orthogonal()
+iex> t = init_fn.({1, 2, 3, 4}, {:f, 64}, Nx.Random.key(1))
+iex> Nx.type(t)
+{:f, 64}
+iex> Nx.shape(t)
+{1, 2, 3, 4}
iex> init_fn = Axon.Initializers.uniform()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.uniform(scale: 1.0e-3)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
iex> init_fn = Axon.Initializers.variance_scaling()
+iex> t = init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:f, 32}
+
+iex> init_fn = Axon.Initializers.variance_scaling(mode: :fan_out, distribution: :truncated_normal)
+iex> t = init_fn.({2, 2}, {:bf, 16}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{2, 2}
+iex> Nx.type(t)
+{:bf, 16}
+
+iex> init_fn = Axon.Initializers.variance_scaling(mode: :fan_out, distribution: :normal)
+iex> t = init_fn.({64, 3, 32, 32}, {:f, 32}, Nx.Random.key(1))
+iex> Nx.shape(t)
+{64, 3, 32, 32}
+iex> Nx.type(t)
+{:f, 32}
iex> init_fn = Axon.Initializers.zeros()
+iex> out = init_fn.({2, 2}, {:f, 32})
iex> out
+#Nx.Tensor<
+ f32[2][2]
+ [
+ [0.0, 0.0],
+ [0.0, 0.0]
+ ]
+>
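The same calling convention makes custom initializers possible. A sketch,
assuming Axon accepts any function with the (shape, type, key) signature used
in the examples above:
# Hypothetical custom initializer: small uniform noise in [-0.01, 0.01]
init_fn = fn shape, type, key ->
  {t, _new_key} = Nx.Random.uniform(key, -0.01, 0.01, shape: shape, type: type)
  t
end
init_fn.({2, 2}, {:f, 32}, Nx.Random.key(1))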
Basic neural networks can be seen as a composition of functions:
input
|> dense(w1, b1)
|> relu()
|> dense(w2, b2)
|> softmax()
These kinds of models are often referred to as deep feedforward networks
or multilayer perceptrons (MLPs) because information flows forward
through the network with no feedback connections. Mathematically,
a feedforward network can be represented as:
$$ f(x) = f^{(3)}(f^{(2)}(f^{(1)}(x))) $$
You can see a similar pattern emerge if we condense the call stack
in the previous example:
softmax(dense(relu(dense(input, w1, b1)), w2, b2))
The chain structure shown here is the most common structure used
in neural networks. You can consider each function $f^{(n)}$ as a layer
in the neural network - for example $f^{(2)}$ is the 2nd layer in the
network. The number of function calls in the
@@ -158,7 +158,7 @@
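The same chain structure maps directly onto Axon's layer API. A sketch of the
pipeline above as an Axon model (the input shape and layer sizes are
placeholders):
model =
  Axon.input("input", shape: {nil, 784})
  |> Axon.dense(128)
  |> Axon.relu()
  |> Axon.dense(10)
  |> Axon.softmax()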
Neural networks are often written as the mapping:
$$ y = f(x; \theta) $$
Where $x$ is the input to the neural network and $\theta$ are the
set of learned parameters. In Elixir, you would write this:
y = model(input, params)
From the previous example, params
would represent the collection:
{w1, b1, w2, b2}
where w1
and w2
are layer kernels, and b1
and b2
are layer
biases.
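In Axon itself, model/2 above corresponds to the predict function returned by
Axon.build/2, and params to what the paired init function produces. A sketch
(the template shape is a placeholder; the empty map is the usual starting
parameter state in the library's examples):
{init_fn, predict_fn} = Axon.build(model)
params = init_fn.(Nx.template({1, 784}, :f32), %{})
y = predict_fn.(params, input)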
iex> inp1 = Nx.iota({3, 2}, type: {:f, 32})
+iex> inp2 = Nx.iota({3, 4}, type: {:f, 32})
+iex> kernel = Nx.iota({1, 2, 4}, type: {:f, 32})
+iex> bias = Nx.tensor(1.0)
+iex> Axon.Layers.bilinear(inp1, inp2, kernel, bias)
+#Nx.Tensor<
+ f32[3][1]
+ [
+ [39.0],
+ [455.0],
+ [1319.0]
+ ]
+>
A dense layer or fully connected layer transforms the input using the given kernel matrix and bias
to compute:
Nx.dot(input, kernel) + bias
Typically, both kernel
and bias
are learnable
parameters trained using gradient-based optimization.
iex> input = Nx.tensor([[1.0, 0.5, 1.0, 0.5], [0.0, 0.0, 0.0, 0.0]], type: {:f, 32})
+iex> kernel = Nx.tensor([[0.2], [0.3], [0.5], [0.8]], type: {:f, 32})
+iex> bias = Nx.tensor([1.0], type: {:f, 32})
+iex> Axon.Layers.dense(input, kernel, bias)
+#Nx.Tensor<
+ f32[2][1]
+ [
+ [2.25],
+ [1.0]
+ ]
+>
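To see that this is exactly the Nx.dot(input, kernel) + bias formula above, the
same values can be checked with plain Nx (a sketch, not part of the doctest):
input = Nx.tensor([[1.0, 0.5, 1.0, 0.5], [0.0, 0.0, 0.0, 0.0]], type: {:f, 32})
kernel = Nx.tensor([[0.2], [0.3], [0.5], [0.8]], type: {:f, 32})
bias = Nx.tensor([1.0], type: {:f, 32})
Nx.add(Nx.dot(input, kernel), bias)
# => [[2.25], [1.0]], matching Axon.Layers.dense/3 above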
iex> input = Nx.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])
+iex> kernels = Nx.tensor([
+...> [0.46299999952316284, 0.5562999844551086, 0.18170000612735748],
+...> [0.9801999926567078, 0.09780000150203705, 0.5333999991416931],
+...> [0.6980000138282776, 0.9240999817848206, 0.23479999601840973],
+...> [0.31929999589920044, 0.42250001430511475, 0.7865999937057495],
+...> [0.5519000291824341, 0.5662999749183655, 0.20559999346733093],
+...> [0.1898999959230423, 0.9311000108718872, 0.8356000185012817],
+...> [0.6383000016212463, 0.8794000148773193, 0.5282999873161316],
+...> [0.9523000121116638, 0.7597000002861023, 0.08250000327825546],
+...> [0.6622999906539917, 0.02329999953508377, 0.8205999732017517],
+...> [0.9855999946594238, 0.36419999599456787, 0.5372999906539917]
+...> ])
+iex> Axon.Layers.embedding(input, kernels)
+#Nx.Tensor<
+ f32[2][4][3]
+ [
+ [
+ [0.9801999926567078, 0.09780000150203705, 0.5333999991416931],
+ [0.6980000138282776, 0.9240999817848206, 0.23479999601840973],
+ [0.5519000291824341, 0.5662999749183655, 0.20559999346733093],
+ [0.1898999959230423, 0.9311000108718872, 0.8356000185012817]
+ ],
+ [
+ [0.5519000291824341, 0.5662999749183655, 0.20559999346733093],
+ [0.31929999589920044, 0.42250001430511475, 0.7865999937057495],
+ [0.6980000138282776, 0.9240999817848206, 0.23479999601840973],
+ [0.9855999946594238, 0.36419999599456787, 0.5372999906539917]
+ ]
+ ]
+>
iex> Axon.Layers.global_avg_pool(Nx.iota({3, 2, 3}, type: {:f, 32}), channels: :first)
+#Nx.Tensor<
+ f32[3][2]
+ [
+ [1.0, 4.0],
+ [7.0, 10.0],
+ [13.0, 16.0]
+ ]
+>
+
+iex> Axon.Layers.global_avg_pool(Nx.iota({1, 3, 2, 2}, type: {:f, 32}), channels: :first, keep_axes: true)
+#Nx.Tensor<
+ f32[1][3][1][1]
+ [
+ [
+ [
+ [1.5]
+ ],
+ [
+ [5.5]
+ ],
+ [
+ [9.5]
+ ]
+ ]
+ ]
+>
iex> Axon.Layers.global_lp_pool(Nx.iota({3, 2, 3}, type: {:f, 32}), norm: 1, channels: :first)
+#Nx.Tensor<
+ f32[3][2]
+ [
+ [3.0, 12.0],
+ [21.0, 30.0],
+ [39.0, 48.0]
+ ]
+>
+
+iex> Axon.Layers.global_lp_pool(Nx.iota({1, 3, 2, 2}, type: {:f, 16}), keep_axes: true, channels: :first)
+#Nx.Tensor<
+ f16[1][3][1][1]
+ [
+ [
+ [
+ [3.7421875]
+ ],
+ [
+ [11.2265625]
+ ],
+ [
+ [19.125]
+ ]
+ ]
+ ]
+>
iex> Axon.Layers.global_max_pool(Nx.iota({3, 2, 3}, type: {:f, 32}), channels: :first)
+#Nx.Tensor<
+ f32[3][2]
+ [
+ [2.0, 5.0],
+ [8.0, 11.0],
+ [14.0, 17.0]
+ ]
+>
+
+iex> Axon.Layers.global_max_pool(Nx.iota({1, 3, 2, 2}, type: {:f, 32}), keep_axes: true, channels: :first)
+#Nx.Tensor<
+ f32[1][3][1][1]
+ [
+ [
+ [
+ [3.0]
+ ],
+ [
+ [7.0]
+ ],
+ [
+ [11.0]
+ ]
+ ]
+ ]
+>
iex> t = Nx.tensor([[[0.9450, 0.4684, 1.8146], [1.2663, 0.4354, -0.0781], [-0.4759, 0.3251, 0.8742]]], type: {:f, 32})
+iex> Axon.Layers.lp_pool(t, kernel_size: 2, norm: 2, channels: :first)
+#Nx.Tensor<
+ f32[1][3][1]
+ [
+ [
+ [1.0547149181365967],
+ [1.3390626907348633],
+ [0.5763426423072815]
+ ]
+ ]
+>
iex> t = Nx.tensor([[
+...> [0.051500000059604645, -0.7042999863624573, -0.32899999618530273],
+...> [-0.37130001187324524, 1.6191999912261963, -0.11829999834299088],
+...> [0.7099999785423279, 0.7282999753952026, -0.18639999628067017]]], type: {:f, 32})
+iex> Axon.Layers.max_pool(t, kernel_size: 2, channels: :first)
+#Nx.Tensor<
+ f32[1][3][1]
+ [
+ [
+ [0.051500000059604645],
+ [1.6191999912261963],
+ [0.7282999753952026]
+ ]
+ ]
+>
iex> Axon.Layers.flatten(Nx.iota({1, 2, 2}, type: {:f, 32}))
+#Nx.Tensor<
+ f32[1][4]
+ [
+ [0.0, 1.0, 2.0, 3.0]
+ ]
+>
iex> img = Nx.iota({1, 1, 3, 3}, type: {:f, 32})
+iex> Axon.Layers.resize(img, size: {4, 4}, channels: :first)
+#Nx.Tensor<
+ f32[1][1][4][4]
+ [
+ [
+ [
+ [0.0, 1.0, 1.0, 2.0],
+ [3.0, 4.0, 4.0, 5.0],
+ [3.0, 4.0, 4.0, 5.0],
+ [6.0, 7.0, 7.0, 8.0]
+ ]
+ ]
+ ]
+>
iex> img = Nx.iota({1, 1, 3, 3}, type: {:f, 32})
+iex> Axon.Layers.resize(img, size: {4, 4}, method: :foo)
** (ArgumentError) expected :method to be either of :nearest, :bilinear, :bicubic, :lanczos3, :lanczos5, got: :foo
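Any of the listed methods can be selected the same way, for example (a sketch,
output omitted):
img = Nx.iota({1, 1, 3, 3}, type: {:f, 32})
Axon.Layers.resize(img, size: {4, 4}, method: :bilinear, channels: :first)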
iex> input = Nx.tensor([[[0.1294, -0.6638, 1.0251]], [[ 0.9182, 1.1512, -1.6149]]], type: {:f, 32})
+iex> kernel = Nx.tensor([[[-1.5475, 1.2425]], [[0.1871, 0.5458]], [[-0.4488, 0.8879]]], type: {:f, 32})
+iex> bias = Nx.tensor([0.7791, 0.1676, 1.5971], type: {:f, 32})
+iex> Axon.Layers.conv(input, kernel, bias, channels: :first)
+#Nx.Tensor<
+ f32[2][3][2]
+ [
+ [
+ [-0.24591797590255737, 3.08001708984375],
+ [-0.1704912781715393, 0.6029025316238403],
+ [0.9496372938156128, 2.80519962310791]
+ ],
+ [
+ [0.7885514497756958, -3.0088953971862793],
+ [0.9677201509475708, -0.4984228312969208],
+ [2.207162380218506, -0.3534282445907593]
+ ]
+ ]
+>
iex> input = Nx.tensor([[[[-1.0476, -0.5041], [-0.9336, 1.5907]]]], type: {:f, 32})
+iex> kernel = Nx.tensor([
+...> [[[0.7514, 0.7356], [1.3909, 0.6800]]],
+...> [[[-0.3450, 0.4551], [-0.6275, -0.9875]]],
+...> [[[1.8587, 0.4722], [0.6058, -1.0301]]]
+...> ], type: {:f, 32})
+iex> bias = Nx.tensor([1.9564, 0.2822, -0.5385], type: {:f, 32})
+iex> Axon.Layers.conv(input, kernel, bias, channels: :first)
+#Nx.Tensor<
+ f32[1][3][1][1]
+ [
+ [
+ [
+ [0.5815491676330566]
+ ],
+ [
+ [-0.5707762241363525]
+ ],
+ [
+ [-4.927865028381348]
+ ]
+ ]
+ ]
+>
iex> input = Nx.tensor([[[[[-0.6497], [1.0939]], [[-2.5465], [0.7801]]]]], type: {:f, 32})
+iex> kernel = Nx.tensor([
+...> [[[[ 0.7390], [-0.0927]], [[-0.8675], [-0.9209]]]],
+...> [[[[-0.6638], [0.4341]], [[0.6368], [1.1846]]]]
+...> ], type: {:f, 32})
+iex> bias = Nx.tensor([-0.4101, 0.1776], type: {:f, 32})
+iex> Axon.Layers.conv(input, kernel, bias, channels: :first)
+#Nx.Tensor<
+ f32[1][2][1][1][1]
+ [
+ [
+ [
+ [
+ [0.49906185269355774]
+ ]
+ ],
+ [
+ [
+ [0.38622811436653137]
+ ]
+ ]
+ ]
+ ]
+>
iex> input = Nx.iota({1, 3, 3}, type: {:f, 32})
+iex> kernel = Nx.iota({6, 3, 2}, type: {:f, 32})
+iex> bias = Nx.tensor(1.0, type: {:f, 32})
+iex> Axon.Layers.conv_transpose(input, kernel, bias, channels: :first)
+#Nx.Tensor<
+ f32[1][6][4]
+ [
+ [
+ [40.0, 79.0, 94.0, 43.0],
+ [94.0, 205.0, 256.0, 133.0],
+ [148.0, 331.0, 418.0, 223.0],
+ [202.0, 457.0, 580.0, 313.0],
+ [256.0, 583.0, 742.0, 403.0],
+ [310.0, 709.0, 904.0, 493.0]
+ ]
+ ]
+>
Accumulated state in an Axon.Loop.
Loop state is a struct:
%State{
+ epoch: integer(),
+ max_epoch: integer(),
+ iteration: integer(),
+ max_iteration: integer(),
+ metrics: map(string(), container()),
+ times: map(integer(), integer()),
+ step_state: container(),
+ handler_metadata: container()
+}
epoch
is the current epoch, starting at 0, of the nested loop.
Defaults to 0.
max_epoch
is the maximum number of epochs the loop should run
for. Defaults to 1.
iteration
is the current iteration of the inner loop. In supervised
settings, this will be the current batch. Defaults to 0.
max_iteration
is the maximum number of iterations the loop should
diff --git a/Axon.Loop.html b/Axon.Loop.html
index dc51a909..1fe923e7 100644
--- a/Axon.Loop.html
+++ b/Axon.Loop.html
@@ -135,66 +135,66 @@
Abstraction for modeling a reduction of a dataset with an accumulated state for a number of epochs.
Inspired heavily by PyTorch Ignite.
The main abstraction is the %Axon.Loop{}
struct, which controls a nested
+reduction of the form:
Enum.reduce(1..max_epochs, state, fn epoch, state ->
+ Enum.reduce(data, state, &batch_step/2)
+end)
data
is assumed to be an Enumerable
or Stream
of input data which is
handled by a processing function, batch_step
. The purpose of the loop
abstraction is to take away much of the boilerplate code used in solving machine
learning tasks. Tasks such as normalizing a dataset, hyperparameter optimization,
or training machine learning models boil down to writing one function:
defn batch_step(batch, state) do
  # ...do something with batch...
  updated_state
end
For tasks such as training a neural network, state
will encapsulate things
such as model and optimizer state. For supervised learning tasks, batch_step
might look something like:
defn batch_step({inputs, targets}, state) do
  %{parameters: params, optimizer_state: optim_state} = state

  gradients = grad(params, objective_fn.(&1, inputs, targets))
  {updates, new_optim_state} = optimizer.(optim_state, params, gradients)

  new_params = apply_updates(params, updates)

  %{parameters: new_params, optimizer_state: optim_state}
end
batch_step
takes a batch of {input, target}
pairs and the current state,
and updates the model parameters based on the gradients received from some arbitrary
objective function. This function will run in a nested loop, iterating over the entire
dataset for N
epochs before finally returning the trained model state. By defining
1 function, we've created a training loop that works for most machine learning models.
In actuality, the loop abstraction accumulates a struct, %Axon.Loop.State{}
, which looks
+like (assuming container
is a generic Elixir container of tensors, e.g. map, tuple, etc.):
%Axon.Loop.State{
+ epoch: integer(),
+ max_epoch: integer(),
+ iteration: integer(),
+ max_iteration: integer(),
+ metrics: map(string(), container()),
+ times: map(integer(), integer()),
+ step_state: container()
+}
batch_step
takes in the batch and the step state field and returns a step_state
,
which is a generic container of state accumulated at each iteration. The rest of the fields
in the state struct are updated automatically behind the scenes.
The loop must start from some initial step state, thus most tasks must also provide an additional initialization function to provide some starting point for the step state. For machine learning tasks, the initialization function will return things like initial model parameters and optimizer state.
Typically, the final output of the loop is the accumulated final state; however, you
may optionally apply an output transform to extract specific values at the end of the
loop. For example, Axon.Loop.trainer/4
by default extracts trained model state:
output_transform = fn state ->
  state.step_state[:model_state]
end
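Putting the convenience API together, a typical supervised training run looks
roughly like this (a sketch; the loss, optimizer, metric, dataset, and epoch
count are placeholders):
trained_state =
  model
  |> Axon.Loop.trainer(:categorical_cross_entropy, :adam)
  |> Axon.Loop.metric(:accuracy)
  |> Axon.Loop.run(train_data, %{}, epochs: 10)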
The core of the Axon loop are the init and step functions. The initialization is an
arity-0 function which provides an initial step state:
init = fn ->
  %{params: Axon.init(model)}
end
While the step function is the batch_step
function mentioned earlier:
step = fn data, state ->
  new_state = # ...do something...
  new_state
end
Note that any optimization and training anonymous functions that need to be used in the
batch_step
function can be passed as extra arguments. For example:
step_with_training_arguments = fn data, state, optimizer_update_fn, state_update_fn ->
  # ...do something...
end

step = &(step_with_training_arguments.(&1, &2, actual_optimizer_update_fn, actual_state_update_fn))
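With an init and step function in hand, the loop is assembled and executed. A
sketch reusing the hypothetical functions above:
loop = Axon.Loop.loop(step, init)
state = Axon.Loop.run(loop, data, %{}, epochs: 5)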
Often times you want to compute metrics associated with your training iterations.
To accomplish this, you can attach metrics to each Axon.Loop
. Assuming a batch_step
function which looks like:
defn batch_step({inputs, targets}, state) do
  %{parameters: params, optimizer_state: optim_state} = state

  gradients = grad(params, objective_fn.(&1, inputs, targets))
  {updates, new_optim_state} = optimizer.(optim_state, params, gradients)

  new_params = apply_updates(params, updates)

  # Shown for simplicity, you can optimize this by calculating preds
  # along with the gradient calculation
  preds = model_fn.(params, inputs)

  %{
    y_true: targets,
    y_pred: preds,
    parameters: new_params,
    optimizer_state: optim_state
  }
end
You can attach metrics to this by using Axon.Loop.metric/4
:
Axon.Loop.loop(&batch_step/2)
|> Axon.Loop.metric("Accuracy", :accuracy, fn %{y_true: y_, y_pred: y} -> [y_, y] end)
|> Axon.Loop.run(data)
Because metrics work directly on step_state
, you typically need to provide an output
transform to indicate which values should be passed to your metric function. By default,
Axon assumes a supervised training task with the fields :y_true
and :y_pred
present
in the step state. See Axon.Loop.metric/4
for more information.
Metrics will be tracked in the loop state using the user-provided key. Metrics integrate
@@ -234,24 +234,24 @@
You can instrument several points in the loop using event handlers. By default, several events
are fired when running a loop:
events = [
  :started,             # After loop state initialization
  :epoch_started,       # On epoch start
  :iteration_started,   # On iteration start
  :iteration_completed, # On iteration complete
  :epoch_completed,     # On epoch complete
  :epoch_halted,        # On epoch halt, if early halted
]
You can attach event handlers to events using Axon.Loop.handle_event/4
:
loop
|> Axon.Loop.handle_event(:iteration_completed, &log_metrics/1, every: 100)
|> Axon.Loop.run(data)
The above will trigger log_metrics/1
every 100 times the :iteration_completed
event
is fired. Event handlers must return a tuple {status, state}
, where status
is an
atom with one of the following values:
:continue # Continue epoch, continue looping
:halt_epoch # Halt the epoch, continue looping
:halt_loop # Halt looping
And state
is an updated Axon.Loop.State
struct. Handler functions take as input
the current loop state.
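For example, a handler that stops the loop once a tracked metric falls below some threshold could look like the sketch below. The "loss" metric name, the threshold, and the assumption that the state struct exposes a metrics map under :metrics are all illustrative, not guaranteed by this excerpt.

# Illustrative sketch: assumes a "loss" metric has been attached to the loop
# and that the loop state carries computed metrics under :metrics.
halt_on_small_loss = fn %Axon.Loop.State{metrics: metrics} = state ->
  if Nx.to_number(metrics["loss"]) < 1.0e-3 do
    {:halt_loop, state}  # stop looping entirely
  else
    {:continue, state}   # keep going
  end
end

loop
|> Axon.Loop.handle_event(:epoch_completed, halt_on_small_loss)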
It's important to note that event handlers are triggered in the order they are attached to the loop. If you have two handlers on the same event, they will trigger in order:

loop
|> Axon.Loop.handle_event(:epoch_completed, &normalize_state/1) # Runs first
|> Axon.Loop.handle_event(:epoch_completed, &log_state/1)       # Runs second
You may provide filters to filter when event handlers trigger. See Axon.Loop.handle_event/4
for more details on valid filters.
In order to execute a loop, you should use Axon.Loop.run/3:

Axon.Loop.run(loop, data, epochs: 10)

At times you may want to resume a loop from some previous state. You can accomplish this
with Axon.Loop.from_state/2:

loop
|> Axon.Loop.from_state(state)
|> Axon.Loop.run(data)
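As a small sketch of resuming, you can hand-construct a state in the same shape shown in the from_state/2 docs further below and feed it back into the loop. The `previous_params` name here is a hypothetical placeholder for parameters you saved earlier.

# Sketch: resume a loop from previously captured step state.
state = %Axon.Loop.State{step_state: %{params: previous_params}}

loop
|> Axon.Loop.from_state(state)
|> Axon.Loop.run(data, epochs: 5)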
Loop state is serialized using Axon.Loop.serialize_state/2. Serialization
options will be forwarded to Axon.Loop.serialize_state/2.

You can customize checkpoint events by passing :event and :filter options:

loop
|> Axon.Loop.checkpoint(event: :iteration_completed, filter: [every: 50])

Checkpoints are saved under the checkpoint/ directory with a pattern
of checkpoint_{epoch}.ckpt. You can customize the path and pattern
with the :path and :file_pattern options:

my_file_pattern =
  fn %Axon.Loop.State{epoch: epoch, iteration: iter} ->
    "checkpoint_#{epoch}_#{iter}"
  end

loop
|> Axon.Loop.checkpoint(path: "my_checkpoints", file_pattern: my_file_pattern)
If you'd like to only save checkpoints based on some metric criteria,
you can specify the :criteria option. :criteria must be a valid key
in metrics:

loop
|> Axon.Loop.checkpoint(criteria: "validation_loss")

The default criteria mode is :min, meaning the min score metric will
be considered "best" when deciding to save on a given event. Valid modes
are :min and :max:

loop
|> Axon.Loop.checkpoint(criteria: "validation_accuracy", mode: :max)

You must specify a metric to monitor and the metric must be present in the loop state. Typically, this will be a validation metric:

model
|> Axon.Loop.trainer(loss, optim)
|> Axon.Loop.metric(:accuracy)
|> Axon.Loop.validate(val_data)
|> Axon.Loop.early_stop("validation_accuracy")

It's important to remember that handlers are executed in the order they are added to the loop. For example, if you'd like to checkpoint a loop after every epoch and use early stopping, most likely you want to add the checkpoint handler before the early stopping handler:

model
|> Axon.Loop.trainer(loss, optim)
|> Axon.Loop.metric(:accuracy)
|> Axon.Loop.checkpoint()
|> Axon.Loop.early_stop("accuracy")

That will ensure checkpoint is always fired, even if the loop exited early.
Creates a supervised evaluator from a model.

An evaluator can be used for things such as testing and validation of models
after or during training. It assumes model is an Axon struct, container of
structs, or a tuple of init / apply functions. model_state must be a
container usable from within model.

The evaluator returns a step state of the form:

%{
  y_true: labels,
  y_pred: predictions
}

Such that you can attach any number of supervised metrics to the evaluation loop:

model
|> Axon.Loop.evaluator()
|> Axon.Loop.metric("Accuracy", :accuracy)

You must pass a compatible trained model state to Axon.Loop.run/4 when using
supervised evaluation loops. For example, if you've bound the result of a training
run to trained_model_state, you can run the trained model through an evaluation
run like this:

model
|> Axon.Loop.evaluator()
|> Axon.Loop.run(data, trained_model_state, compiler: EXLA)

This function applies an output transform which returns the map of metrics accumulated over the given loop.
Attaches state to the given loop in order to resume looping
from a previous state.

It's important to note that a loop's attached state takes precedence
over defined initialization functions. Given initialization function:

defn init_state(), do: %{foo: 1, bar: 2}

And an attached state:

state = %State{step_state: %{foo: 2, bar: 3}}

init_state/0 will never execute, and instead the initial step state
of %{foo: 2, bar: 3} will be used.
Adds a handler function to the loop which will be triggered on event
with an optional filter.

Events take place at different points during loop execution. The default
events are:

events = [
  :started,             # After loop state initialization
  :epoch_started,       # On epoch start
  :iteration_started,   # On iteration start
  :iteration_completed, # On iteration complete
  :epoch_completed,     # On epoch complete
  :epoch_halted,        # On epoch halt, if early halted
]

Generally, event handlers are side-effecting operations which provide some sort of inspection into the loop's progress. It's important to note that if you define multiple handlers to be triggered on the same event, they will execute in order from when they were attached to the training loop:

loop
|> Axon.Loop.handle_event(:epoch_started, &normalize_step_state/1) # executes first
|> Axon.Loop.handle_event(:epoch_started, &log_step_state/1)       # executes second

Thus, if you have separate handlers which alter or depend on loop state, you need to ensure they are ordered correctly, or combined into a single event handler for maximum control over execution.

event must be an atom representing the event to trigger handler or a
list of atoms indicating handler should be triggered on multiple events.
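Since event may also be a list of atoms, a single handler can be attached to several events at once. A small sketch, with a hypothetical log_progress handler:

# Hypothetical handler attached to more than one event at once.
log_progress = fn %Axon.Loop.State{} = state ->
  # ...inspect or log the state here...
  {:continue, state}
end

loop
|> Axon.Loop.handle_event([:epoch_started, :epoch_completed], log_progress)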
Adds a handler function which updates a Kino.VegaLite plot.

By default, this will run after every iteration.

You must specify a plot to push to and a metric to track. The :x
axis will be the iteration count, labeled "step". The metric must match
the name given to the :y axis in your VegaLite plot:

plot =
  Vl.new()
  |> Vl.mark(:line)
  |> Vl.encode_field(:x, "step", type: :quantitative)
  |> Vl.encode_field(:y, "loss", type: :quantitative)
  |> Kino.VegaLite.new()
  |> Kino.render()

model
|> Axon.Loop.trainer(loss, optim)
|> Axon.Loop.kino_vega_lite_plot(plot, "loss")
Creates a loop from step_fn, an optional init_fn, and an
optional output_transform.

step_fn is an arity-2 function which takes a batch and state
and returns an updated step state:

defn batch_step(batch, step_state) do
  step_state + 1
end

init_fn by default is an identity function which forwards its
initial arguments as the model state. You should define a custom
initialization function if you require a different behavior:

defn init_step_state(state) do
  Map.merge(%{foo: 1}, state)
end

You may use state in conjunction with initialization functions in
init_fn. For example, train_step/3 uses initial state as initial
model parameters to allow initializing models from partial parameterizations.

step_batch/2 and init_step_state/1 are typically called from
within Nx.Defn.jit/3. While JIT-compilation will work with anonymous functions,
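For illustration, a minimal custom loop that just counts batches might be wired up as in the sketch below. The data stream is an assumed placeholder, and the :iterations option is used here to bound the run.

# Sketch: a custom loop whose step state is a simple batch counter.
data = Stream.repeatedly(fn -> Nx.tensor(1) end)

count_batches = fn _batch, counter -> counter + 1 end

count_batches
|> Axon.Loop.loop()
|> Axon.Loop.run(data, 0, iterations: 100)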
Adds a metric of the given name to the loop.

A metric is a function which tracks or measures some value with respect to values in the step state. For example, when training classification models, it's common to track the model's accuracy during training:

loop
|> Axon.Loop.metric(:accuracy, "Accuracy")

By default, metrics assume a supervised learning task and extract the fields
[:y_true, :y_pred] from the step state. If you wish to work on a different
value, you can use an output transform. An output transform is a list of keys
to extract from the output state, or a function which returns a flattened list
of values to pass to the given metric function. Values received from output
transforms are passed to the given metric using:

value = output_transform.(step_state)
apply(metric, value)

Thus, even if you want your metric to work on a container, your output transform must return a list.

metric must be an atom which matches the name of a metric in Axon.Metrics, or
an arbitrary function which returns a tensor or container.

name must be a string or atom used to store the computed metric in the loop
state. If names conflict, the last attached metric will take precedence:

loop
|> Axon.Loop.metric(:mean_squared_error, "Error") # Will be overwritten
|> Axon.Loop.metric(:mean_absolute_error, "Error") # Will be used

By default, metrics keep a running average of the metric calculation. You can
override this behavior by changing accumulate:

loop
|> Axon.Loop.metric(:true_negatives, "tn", :running_sum)

The accumulation function can be one of the accumulation combinators in Axon.Metrics
or an arity-3 function of the form: accumulate(acc, obs, i) :: new_acc.
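As a sketch of the arity-3 form, a "largest value seen so far" accumulator could be written and attached like this; the metric choice and the "max_accuracy" name are illustrative only:

# Illustrative arity-3 accumulator: keeps the largest observation seen so far.
running_max = fn acc, obs, _i -> Nx.max(acc, obs) end

loop
|> Axon.Loop.metric(:accuracy, "max_accuracy", running_max)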
You must specify a metric to monitor and the metric must be present in the loop state. Typically, this will be a validation metric:

model
|> Axon.Loop.trainer(loss, optim)
|> Axon.Loop.metric(:accuracy)
|> Axon.Loop.validate(model, val_data)
|> Axon.Loop.reduce_lr_on_plateau("accuracy", mode: :max)

See Polaris.Updates for more information on building optimizers.

This function creates a step function which outputs a map consisting of the following
fields for step_state:

%{
  y_pred: tensor() | container(tensor()), # Model predictions for use in metrics
  y_true: tensor() | container(tensor()), # True labels for use in metrics
  loss: tensor(),                         # Running average of loss over epoch
  model_state: container(tensor()),       # Model parameters and state
  optimizer_state: container(tensor())    # Optimizer state associated with each parameter
}

data = Stream.zip(input, target)

model = Axon.input("input", shape: {nil, 32}) |> Axon.dense(1, activation: :sigmoid)

model
|> Axon.Loop.trainer(:binary_cross_entropy, :adam)
|> Axon.Loop.run(data)

model
|> Axon.Loop.trainer(:binary_cross_entropy, Polaris.Optimizers.adam(learning_rate: 0.05))
|> Axon.Loop.run(data)

loss_fn = fn y_true, y_pred -> Nx.cos(Nx.subtract(y_true, y_pred)) end

model
|> Axon.Loop.trainer(loss_fn, Polaris.Optimizers.rmsprop(learning_rate: 0.01))
|> Axon.Loop.run(data)

model = {Axon.input("input_0", shape: {nil, 1}), Axon.input("input_1", shape: {nil, 2})}
loss_weights = [mean_squared_error: 0.5, mean_absolute_error: 0.5]

model
|> Axon.Loop.trainer(loss_weights, :sgd)
|> Axon.Loop.run(data)
This handler assumes the loop state matches the state initialized in a supervised training loop. Typically, you'd call this immediately after creating a supervised training loop:

model
|> Axon.Loop.trainer(:mean_squared_error, :sgd)
|> Axon.Loop.validate(model, validation_data)

Please note that you must pass the same (or an equivalent) model into this method so it can be used during the validation loop. The metrics which are computed are those which are present BEFORE the validation handler was added to the loop. For the following loop:

model
|> Axon.Loop.trainer(:mean_squared_error, :sgd)
|> Axon.Loop.metric(:mean_absolute_error)
|> Axon.Loop.validate(model, validation_data)
|> Axon.Loop.metric(:binary_cross_entropy)

only :mean_absolute_error will be computed at validation time.

The returned loop state is altered to contain validation metrics for use in later handlers such as early stopping and model checkpoints. Since the order of execution of event handlers is in the same order they are declared in the training loop, you MUST call this method before any other handler which expects or may use validation metrics.

By default the validation loop runs after every epoch; however, you can customize it by overriding the default event and event filters:

model
|> Axon.Loop.trainer(:mean_squared_error, :sgd)
|> Axon.Loop.metric(:mean_absolute_error)
|> Axon.Loop.validate(model, validation_data, event: :iteration_completed, filter: [every: 10_000])
|> Axon.Loop.metric(:binary_cross_entropy)
diff --git a/Axon.LossScale.html b/Axon.LossScale.html
index f3b6f8e5..b8dfe7cd 100644
--- a/Axon.LossScale.html
+++ b/Axon.LossScale.html
Implementations of loss-scalers for use in mixed precision training.

Loss scaling is used to prevent underflow when using mixed precision
during the model training process. Each loss-scale implementation here
returns a tuple of functions:

{init_fn, scale_fn, unscale_fn, adjust_fn} = Axon.LossScale.static(Nx.pow(2, 15))

You can use these to scale/unscale loss and gradients as well as adjust the loss scale state.
Axon.Loop.trainer/3
builds loss-scaling in by default. You
can reference the Axon.Loop.train_step/3
implementation to
see how loss-scaling is applied in practice.
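As a very rough sketch of how the returned functions fit together, scale the loss before gradients are taken and unscale the gradients afterwards. The names loss and scaled_gradients are placeholders, and the call shapes below are assumptions for illustration, not the documented signatures.

# Sketch only: call shapes are assumed for illustration.
{init_fn, scale_fn, unscale_fn, _adjust_fn} = Axon.LossScale.static(Nx.pow(2, 15))

loss_scale_state = init_fn.()

# Scale the raw loss before differentiating...
scaled_loss = scale_fn.(loss, loss_scale_state)

# ...and unscale the gradients computed from the scaled loss.
gradients = unscale_fn.(scaled_gradients, loss_scale_state)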
y_true and input prediction y_pred. As an example, the mean_squared_error/2
loss function produces a tensor whose values are the mean squared
error between targets and predictions:

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
iex> Axon.Losses.mean_squared_error(y_true, y_pred)
#Nx.Tensor<
  f32[2]
  [0.5, 0.5]
>

It's common to compute the loss across an entire minibatch.
You can easily do so by specifying a :reduction mode, or
by composing one of these with an Nx reduction method:

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
iex> Axon.Losses.mean_squared_error(y_true, y_pred, reduction: :mean)
#Nx.Tensor<
  f32
  0.5
>

You can even compose loss functions:

defn my_strange_loss(y_true, y_pred) do
  y_true
  |> Axon.Losses.mean_squared_error(y_pred)
  |> Axon.Losses.binary_cross_entropy(y_pred)
  |> Nx.sum()
end

Or, more commonly, you can combine loss functions with penalties for
regularization:

defn regularized_loss(params, y_true, y_pred) do
  loss = Axon.mean_squared_error(y_true, y_pred)
  penalty = l2_penalty(params)
  Nx.sum(loss) + penalty
end

All of the functions in this module are implemented as
numerical functions and can be JIT or AOT compiled with
any supported Nx compiler.
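For instance, a loss can be wrapped with Nx.Defn.jit/2 and reused; this is a small sketch, and the choice of EXLA as the compiler is an assumption about your setup:

# Sketch: JIT-compile a loss function once and reuse it.
jit_mse = Nx.Defn.jit(&Axon.Losses.mean_squared_error/2, compiler: EXLA)

y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])

jit_mse.(y_true, y_pred)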
:reduction - reduction mode. One of :mean, :sum, or :none.
Defaults to :none.

:negative_weight - class weight for 0 class useful for scaling loss
by importance of class. Defaults to 1.0.

:positive_weight - class weight for 1 class useful for scaling loss
by importance of class. Defaults to 1.0.

:from_logits - whether y_pred is a logits tensor. Defaults to false.
iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
-iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
-iex> Axon.Losses.binary_cross_entropy(y_true, y_pred)
-#Nx.Tensor<
- f32[3]
- [0.8644826412200928, 0.5150600075721741, 0.45986634492874146]
->
-
-iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
-iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
-iex> Axon.Losses.binary_cross_entropy(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
+iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
+iex> Axon.Losses.binary_cross_entropy(y_true, y_pred)
+#Nx.Tensor<
+ f32[3]
+ [0.8644826412200928, 0.5150600075721741, 0.45986634492874146]
+>
+
+iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
+iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
+iex> Axon.Losses.binary_cross_entropy(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.613136351108551
->
+>
-iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
-iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
-iex> Axon.Losses.binary_cross_entropy(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
+iex> y_pred = Nx.tensor([[0.6811, 0.5565], [0.6551, 0.4551], [0.5422, 0.2648]])
+iex> Axon.Losses.binary_cross_entropy(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
1.8394089937210083
->
+>
Categorical cross-entropy is typically used for multi-class classification problems.
By default, it expects y_pred
to encode a probability distribution along the last
axis. You can specify from_logits: true
to indicate y_pred
is a logits tensor.
# Batch size of 3 with 3 target classes
-y_true = Nx.tensor([0, 2, 1])
-y_pred = Nx.tensor([[0.2, 0.8, 0.0], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]])
iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
-iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.051293306052684784, 2.3025851249694824]
->
-
-iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
-iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
+iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.051293306052684784, 2.3025851249694824]
+>
+
+iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
+iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
1.1769392490386963
->
+>
-iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
-iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
+iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
2.3538784980773926
->
+>
-iex> y_true = Nx.tensor([1, 2], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
-iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :sum, sparse: true)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1, 2], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05, 0.95, 0], [0.1, 0.8, 0.1]])
+iex> Axon.Losses.categorical_cross_entropy(y_true, y_pred, reduction: :sum, sparse: true)
+#Nx.Tensor<
f32
2.3538784980773926
->
+>
iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
-iex> Axon.Losses.categorical_hinge(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [1.6334158182144165, 1.2410175800323486]
->
-
-iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
-iex> Axon.Losses.categorical_hinge(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
+iex> Axon.Losses.categorical_hinge(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [1.6334158182144165, 1.2410175800323486]
+>
+
+iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
+iex> Axon.Losses.categorical_hinge(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
1.4372167587280273
->
+>
-iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
-iex> Axon.Losses.categorical_hinge(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[1, 0, 0], [0, 0, 1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.05300799, 0.21617081, 0.68642382], [0.3754382 , 0.08494169, 0.13442067]])
+iex> Axon.Losses.categorical_hinge(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
2.8744335174560547
->
+>
iex> y_pred = Nx.tensor([[1.0, 0.0], [1.0, 1.0]])
-iex> y_true = Nx.tensor([[0.0, 1.0], [1.0, 1.0]])
-iex> Axon.Losses.cosine_similarity(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.0, 1.0000001192092896]
->
+iex> y_pred = Nx.tensor([[1.0, 0.0], [1.0, 1.0]])
+iex> y_true = Nx.tensor([[0.0, 1.0], [1.0, 1.0]])
+iex> Axon.Losses.cosine_similarity(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.0, 1.0000001192092896]
+>
iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
-iex> Axon.Losses.hinge(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.9700339436531067, 0.6437881588935852]
->
-
-iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
-iex> Axon.Losses.hinge(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
+iex> Axon.Losses.hinge(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.9700339436531067, 0.6437881588935852]
+>
+
+iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
+iex> Axon.Losses.hinge(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.806911051273346
->
+>
-iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
-iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
-iex> Axon.Losses.hinge(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[ 1, 1, -1], [ 1, 1, -1]], type: {:s, 8})
+iex> y_pred = Nx.tensor([[0.45440044, 0.31470688, 0.67920924], [0.24311459, 0.93466766, 0.10914676]])
+iex> Axon.Losses.hinge(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
1.613822102546692
->
+>
iex> y_true = Nx.tensor([[1], [1.5], [2.0]])
-iex> y_pred = Nx.tensor([[0.8], [1.8], [2.1]])
-iex> Axon.Losses.huber(y_true, y_pred)
-#Nx.Tensor<
- f32[3][1]
- [
- [0.019999997690320015],
- [0.04499998688697815],
- [0.004999990575015545]
- ]
->
-
-iex> y_true = Nx.tensor([[1], [1.5], [2.0]])
-iex> y_pred = Nx.tensor([[0.8], [1.8], [2.1]])
-iex> Axon.Losses.huber(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[1], [1.5], [2.0]])
+iex> y_pred = Nx.tensor([[0.8], [1.8], [2.1]])
+iex> Axon.Losses.huber(y_true, y_pred)
+#Nx.Tensor<
+ f32[3][1]
+ [
+ [0.019999997690320015],
+ [0.04499998688697815],
+ [0.004999990575015545]
+ ]
+>
+
+iex> y_true = Nx.tensor([[1], [1.5], [2.0]])
+iex> y_pred = Nx.tensor([[0.8], [1.8], [2.1]])
+iex> Axon.Losses.huber(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.02333332598209381
->
+>
iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
-iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
-iex> Axon.Losses.kl_divergence(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.916289210319519, -3.080907390540233e-6]
->
-
-iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
-iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
-iex> Axon.Losses.kl_divergence(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
+iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
+iex> Axon.Losses.kl_divergence(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.916289210319519, -3.080907390540233e-6]
+>
+
+iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
+iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
+iex> Axon.Losses.kl_divergence(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.45814305543899536
->
+>
-iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
-iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
-iex> Axon.Losses.kl_divergence(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1], [0, 0]], type: {:u, 8})
+iex> y_pred = Nx.tensor([[0.6, 0.4], [0.4, 0.6]])
+iex> Axon.Losses.kl_divergence(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
0.9162861108779907
->
+>
iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
-iex> Axon.Losses.log_cosh(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.2168903946876526, 0.0]
->
-
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
-iex> Axon.Losses.log_cosh(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
+iex> Axon.Losses.log_cosh(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.2168903946876526, 0.0]
+>
+
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
+iex> Axon.Losses.log_cosh(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.1084451973438263
->
+>
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
-iex> Axon.Losses.log_cosh(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]])
+iex> Axon.Losses.log_cosh(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
0.2168903946876526
->
+>
iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
-iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
-iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
-iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2})
-#Nx.Tensor<
- f32[3]
- [0.0, 0.9909000396728516, 0.0]
->
-
-iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
-iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
-iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
-iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2}, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
+iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
+iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
+iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2})
+#Nx.Tensor<
+ f32[3]
+ [0.0, 0.9909000396728516, 0.0]
+>
+
+iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
+iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
+iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
+iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2}, reduction: :mean)
+#Nx.Tensor<
f32
0.3303000032901764
->
+>
-iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
-iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
-iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
-iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2}, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1.0, 1.0, 1.0], type: {:f, 32})
+iex> y_pred1 = Nx.tensor([0.6934, -0.7239, 1.1954], type: {:f, 32})
+iex> y_pred2 = Nx.tensor([-0.4691, 0.2670, -1.7452], type: {:f, 32})
+iex> Axon.Losses.margin_ranking(y_true, {y_pred1, y_pred2}, reduction: :sum)
+#Nx.Tensor<
f32
0.9909000396728516
->
+>
iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_absolute_error(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.5, 0.5]
->
-
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_absolute_error(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_absolute_error(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.5, 0.5]
+>
+
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_absolute_error(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.5
->
+>
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_absolute_error(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_absolute_error(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
1.0
->
+>
iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_squared_error(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.5, 0.5]
->
-
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_squared_error(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_squared_error(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.5, 0.5]
+>
+
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_squared_error(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.5
->
+>
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.mean_squared_error(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.mean_squared_error(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
1.0
->
+>
iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.poisson(y_true, y_pred)
-#Nx.Tensor<
- f32[2]
- [0.9999999403953552, 0.0]
->
-
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.poisson(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.poisson(y_true, y_pred)
+#Nx.Tensor<
+ f32[2]
+ [0.9999999403953552, 0.0]
+>
+
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.poisson(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.4999999701976776
->
+>
-iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> Axon.Losses.poisson(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> Axon.Losses.poisson(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
0.9999999403953552
->
+>
iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
-iex> Axon.Losses.soft_margin(y_true, y_pred)
-#Nx.Tensor<
- f32[3]
- [0.851658046245575, 0.7822436094284058, 0.3273470401763916]
->
-
-iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
-iex> Axon.Losses.soft_margin(y_true, y_pred, reduction: :mean)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
+iex> Axon.Losses.soft_margin(y_true, y_pred)
+#Nx.Tensor<
+ f32[3]
+ [0.851658046245575, 0.7822436094284058, 0.3273470401763916]
+>
+
+iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
+iex> Axon.Losses.soft_margin(y_true, y_pred, reduction: :mean)
+#Nx.Tensor<
f32
0.6537495255470276
->
+>
-iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
-iex> Axon.Losses.soft_margin(y_true, y_pred, reduction: :sum)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[-1.0, 1.0, 1.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[0.2953, -0.1709, 0.9486]], type: {:f, 32})
+iex> Axon.Losses.soft_margin(y_true, y_pred, reduction: :sum)
+#Nx.Tensor<
f32
1.9612486362457275
->
+>
iex> Axon.Metrics.accuracy(Nx.tensor([[1], [0], [0]]), Nx.tensor([[1], [1], [1]]))
-#Nx.Tensor<
+iex> Axon.Metrics.accuracy(Nx.tensor([[1], [0], [0]]), Nx.tensor([[1], [1], [1]]))
+#Nx.Tensor<
f32
0.3333333432674408
->
+>
-iex> Axon.Metrics.accuracy(Nx.tensor([[0, 1], [1, 0], [1, 0]]), Nx.tensor([[0, 1], [1, 0], [0, 1]]))
-#Nx.Tensor<
+iex> Axon.Metrics.accuracy(Nx.tensor([[0, 1], [1, 0], [1, 0]]), Nx.tensor([[0, 1], [1, 0], [0, 1]]))
+#Nx.Tensor<
f32
0.6666666865348816
->
+>
-iex> Axon.Metrics.accuracy(Nx.tensor([[0, 1, 0], [1, 0, 0]]), Nx.tensor([[0, 1, 0], [0, 1, 0]]))
-#Nx.Tensor<
+iex> Axon.Metrics.accuracy(Nx.tensor([[0, 1, 0], [1, 0, 0]]), Nx.tensor([[0, 1, 0], [0, 1, 0]]))
+#Nx.Tensor<
f32
0.5
->
+>
iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
-iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
-iex> Axon.Metrics.false_negatives(y_true, y_pred)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
+iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
+iex> Axon.Metrics.false_negatives(y_true, y_pred)
+#Nx.Tensor<
u64
3
->
+>
iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
-iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
-iex> Axon.Metrics.false_positives(y_true, y_pred)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
+iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
+iex> Axon.Metrics.false_positives(y_true, y_pred)
+#Nx.Tensor<
u64
2
->
+>
iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
-iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
-iex> Axon.Metrics.mean_absolute_error(y_true, y_pred)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]], type: {:f, 32})
+iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]], type: {:f, 32})
+iex> Axon.Metrics.mean_absolute_error(y_true, y_pred)
+#Nx.Tensor<
f32
0.5
->
+>
iex> Axon.Metrics.precision(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
-#Nx.Tensor<
+iex> Axon.Metrics.precision(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
+#Nx.Tensor<
f32
0.6666666865348816
->
+>
iex> Axon.Metrics.recall(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
-#Nx.Tensor<
+iex> Axon.Metrics.recall(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
+#Nx.Tensor<
f32
0.6666666865348816
->
+>
iex> cur_avg = 0.5
iex> iteration = 1
-iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
-iex> y_pred = Nx.tensor([[0, 1], [1, 0], [1, 0]])
-iex> avg_acc = Axon.Metrics.running_average(&Axon.Metrics.accuracy/2)
-iex> avg_acc.(cur_avg, [y_true, y_pred], iteration)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([[0, 1], [1, 0], [1, 0]])
+iex> y_pred = Nx.tensor([[0, 1], [1, 0], [1, 0]])
+iex> avg_acc = Axon.Metrics.running_average(&Axon.Metrics.accuracy/2)
+iex> avg_acc.(cur_avg, [y_true, y_pred], iteration)
+#Nx.Tensor<
f32
0.75
->
+>
iex> cur_sum = 12
iex> iteration = 2
-iex> y_true = Nx.tensor([0, 1, 0, 1])
-iex> y_pred = Nx.tensor([1, 1, 0, 1])
-iex> fps = Axon.Metrics.running_sum(&Axon.Metrics.false_positives/2)
-iex> fps.(cur_sum, [y_true, y_pred], iteration)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([0, 1, 0, 1])
+iex> y_pred = Nx.tensor([1, 1, 0, 1])
+iex> fps = Axon.Metrics.running_sum(&Axon.Metrics.false_positives/2)
+iex> fps.(cur_sum, [y_true, y_pred], iteration)
+#Nx.Tensor<
s64
13
->
+>
iex> Axon.Metrics.sensitivity(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
-#Nx.Tensor<
+iex> Axon.Metrics.sensitivity(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
+#Nx.Tensor<
f32
0.6666666865348816
->
+>
iex> Axon.Metrics.specificity(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
-#Nx.Tensor<
+iex> Axon.Metrics.specificity(Nx.tensor([0, 1, 1, 1]), Nx.tensor([1, 0, 1, 1]))
+#Nx.Tensor<
f32
0.0
->
+>
iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([0, 1, 0, 0, 0]), Nx.tensor([0.1, 0.4, 0.3, 0.7, 0.1]), k: 2)
-#Nx.Tensor<
+iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([0, 1, 0, 0, 0]), Nx.tensor([0.1, 0.4, 0.3, 0.7, 0.1]), k: 2)
+#Nx.Tensor<
f32
1.0
->
+>
-iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([[0, 1, 0], [1, 0, 0]]), Nx.tensor([[0.1, 0.4, 0.7], [0.1, 0.4, 0.7]]), k: 2)
-#Nx.Tensor<
+iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([[0, 1, 0], [1, 0, 0]]), Nx.tensor([[0.1, 0.4, 0.7], [0.1, 0.4, 0.7]]), k: 2)
+#Nx.Tensor<
f32
0.5
->
+>
-iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([[0], [2]]), Nx.tensor([[0.1, 0.4, 0.7], [0.1, 0.4, 0.7]]), k: 2, sparse: true)
-#Nx.Tensor<
+iex> Axon.Metrics.top_k_categorical_accuracy(Nx.tensor([[0], [2]]), Nx.tensor([[0.1, 0.4, 0.7], [0.1, 0.4, 0.7]]), k: 2, sparse: true)
+#Nx.Tensor<
f32
0.5
->
+>
iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
-iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
-iex> Axon.Metrics.true_negatives(y_true, y_pred)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
+iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
+iex> Axon.Metrics.true_negatives(y_true, y_pred)
+#Nx.Tensor<
u64
1
->
+>
iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
-iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
-iex> Axon.Metrics.true_positives(y_true, y_pred)
-#Nx.Tensor<
+iex> y_true = Nx.tensor([1, 0, 1, 1, 0, 1, 0])
+iex> y_pred = Nx.tensor([0.8, 0.6, 0.4, 0.2, 0.8, 0.2, 0.2])
+iex> Axon.Metrics.true_positives(y_true, y_pred)
+#Nx.Tensor<
u64
1
->
+>
The output policy dictates what type the model should output.

Here's an example of creating a mixed precision policy and applying it to a model:

model =
  Axon.input("input", shape: {nil, 784})
  |> Axon.dense(128, activation: :relu)
  |> Axon.batch_norm()
  |> Axon.dropout(rate: 0.5)
  |> Axon.dense(64, activation: :relu)
  |> Axon.batch_norm()
  |> Axon.dropout(rate: 0.5)
  |> Axon.dense(10, activation: :softmax)

policy = Axon.MixedPrecision.create_policy(
  params: {:f, 32},
  compute: {:f, 16},
  output: {:f, 32}
)

mp_model =
  model
  |> Axon.MixedPrecision.apply_policy(policy, except: [:batch_norm])

The example above applies the mixed precision policy to every layer in
the model except Batch Normalization layers. The policy will cast parameters
and inputs to {:f, 16} for intermediate computations in the model's forward
pass before casting the output back to {:f, 32}.
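The resulting mp_model can then be trained like any other Axon model. A brief sketch, where the train_data stream and the loss/optimizer choices are assumptions for illustration:

# Sketch: the policy-wrapped model drops into the usual training loop.
mp_model
|> Axon.Loop.trainer(:categorical_cross_entropy, :adam)
|> Axon.Loop.run(train_data, %{}, epochs: 5, compiler: EXLA)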
iex> policy = Axon.MixedPrecision.create_policy(params: {:f, 16})
-iex> params = %{"dense" => %{"kernel" => Nx.tensor([1.0, 2.0, 3.0])}}
-iex> params = Axon.MixedPrecision.cast(policy, params, :params)
-iex> Nx.type(params["dense"]["kernel"])
-{:f, 16}
-
-iex> policy = Axon.MixedPrecision.create_policy(compute: {:bf, 16})
-iex> value = Nx.tensor([1.0, 2.0, 3.0])
-iex> value = Axon.MixedPrecision.cast(policy, value, :compute)
-iex> Nx.type(value)
-{:bf, 16}
-
-iex> policy = Axon.MixedPrecision.create_policy(output: {:bf, 16})
-iex> value = Nx.tensor([1.0, 2.0, 3.0])
-iex> value = Axon.MixedPrecision.cast(policy, value, :output)
-iex> Nx.type(value)
-{:bf, 16}
Note that integers are never promoted to floats:
iex> policy = Axon.MixedPrecision.create_policy(output: {:f, 16})
-iex> value = Nx.tensor([1, 2, 3], type: :s64)
-iex> value = Axon.MixedPrecision.cast(policy, value, :params)
-iex> Nx.type(value)
-{:s, 64}
+iex> policy = Axon.MixedPrecision.create_policy(params: {:f, 16})
+iex> params = %{"dense" => %{"kernel" => Nx.tensor([1.0, 2.0, 3.0])}}
+iex> params = Axon.MixedPrecision.cast(policy, params, :params)
+iex> Nx.type(params["dense"]["kernel"])
+{:f, 16}
+
+iex> policy = Axon.MixedPrecision.create_policy(compute: {:bf, 16})
+iex> value = Nx.tensor([1.0, 2.0, 3.0])
+iex> value = Axon.MixedPrecision.cast(policy, value, :compute)
+iex> Nx.type(value)
+{:bf, 16}
+
+iex> policy = Axon.MixedPrecision.create_policy(output: {:bf, 16})
+iex> value = Nx.tensor([1.0, 2.0, 3.0])
+iex> value = Axon.MixedPrecision.cast(policy, value, :output)
+iex> Nx.type(value)
+{:bf, 16}
Note that integers are never promoted to floats:
iex> policy = Axon.MixedPrecision.create_policy(output: {:f, 16})
+iex> value = Nx.tensor([1, 2, 3], type: :s64)
+iex> value = Axon.MixedPrecision.cast(policy, value, :params)
+iex> Nx.type(value)
+{:s, 64}
iex> Axon.MixedPrecision.create_policy(params: {:f, 16}, output: {:f, 16})
-#Axon.MixedPrecision.Policy<p=f16 c=f32 o=f16>
+iex> Axon.MixedPrecision.create_policy(params: {:f, 16}, output: {:f, 16})
+#Axon.MixedPrecision.Policy<p=f16 c=f32 o=f16>
-iex> Axon.MixedPrecision.create_policy(compute: {:bf, 16})
-#Axon.MixedPrecision.Policy<p=f32 c=bf16 o=f32>
+iex> Axon.MixedPrecision.create_policy(compute: {:bf, 16})
+#Axon.MixedPrecision.Policy<p=f32 c=bf16 o=f32>
K;fYGU)w0lv>jn5D-iv_yvtJfjX{0Q;iF0fm|CM9yPdK@&iqCOeqUS7gIgLIt(AY zc}w@kC*_h^zLfO2{$k^uGQ?&s*Q~&fy5y?5@sX{o^PWnn0B(PZeJ=zHRQBG~i_rtL zrht*w=X)(Mb_{G}Tq9+jx5LR#YfrZd_!`BIsEvDh6=Ea>+y+2D{7~B-W>ZY!8TQJc z5CUhK>YhBCHkA@})zo$rYv+Oh`^^p6t2!;?1iKugbYROe`aZxt!*>)F1l!}fNH-=t zRomMednBp|aq>rtijV4^Czd-JM-SiXh6N;zv+V<5xYD9*mownjXlVK~u=520@e7sl zZMFpZ{_BdZY?HcLVYPU{e()zh7@#Ej?oW7lb0PKj&1Y!$mSEI5?BH_KQT^+`#n4L~ zk^h|1>G{v}OO>!vtFTdmn@!1-6A11kM#C1lsq1<*CZkQg28`~QY$ROmwQ+zo6R6Sx z*Uy(XFZa)eYBDn$VY2G58?7hWCN3@>-}ASCo`ZigZjhXO!bLk9xc&% z^DE5F>r5!>|1y O*avQ-)`J-@)vSOFeDMzO)Oh~O<$6erod}k@dmAtLd)BcvtK3QSnmjf6 z_-U@HLP=Q9qyExQWqZBa++?Jx0{)T$5QoU|79+#?J+yy@kthRl`{?ns30;;e{vN7* z>nTI1^zMPf;rKb)`y^2TYnm5A5yYx!lCOn^_cxfc t~{#)^*`|Cu7RcAkAY`xoFPv) z6I3@dn 4|(O}?LK4De-!^xN&6KMM<; z`SeZdqC}OGoj8{3Im~Z|Et2W#O7>Cu8O>cU?8j}4so52{feWefF55StR%N*@cY{<{ z p1J_UEA#?stXp{mRk}Bui&{ckv-9WAzkgl) zEFs%RIja4%#4;qsqrCy0yjOymQY|_|InHd)BR8o)R!B3 #kRxP?z(DNVlAr9C<6l$CLB9oMuC40(9Jo22f&qtKKt7axmS z$VErd*281ZIk!FfCWw%HgI71LtZhu@@)q;K^M0ej>g(FoBXIv@>UBFT<*q^O5&blG zgXG^Z7r?pQdTx7_r?{mB+q2V>4jfJ}Capa*_Ex_7sX$I(t#&wc_?T;IJ@! +gmS?uK76%O9+hB1ZIqsKR9~CP5hm z6y7gq#WuhG0__H#Mx}eI-p)@B=UL)dOu55}`~kb_+vw6@QFqxe%RjX~{9HeyTZ^`5 zoxNLQOP%!EZ#QT)x*@sOiLs9ASP5=jt3S8|9!g}QATq*`LBy3JuoC-84*Le1rJ~Km zoTyHdLs=H0teJ$ex$M?;o<`7N*ZtChLRxlUdG7E!fJxOdUy#-PrZfoFzkNFv3^0u< z$wjg5OfY|FH|Qn%=K$=~u1<0cQ%HQN3dgO#Eq6R8Z00cKTao9>A zx87L$_hKmdN-=+vgx_JBOKM`{v|SQsW%&e!&VEP-0@*nxZgU(5*1Ol9RAs-oes_Wv zqEv_}0zW`u54#!;Q#&BeZUy*?f0%Yv^3#ixywHfihOJ$AZ$s4(S?-OwF;N79V;6ur zL9sQmB~mgbvTAyx^XCO_2g=Xw-6Mg+DW{E;7(}sI_ZjDMDIzO0c`TCKy> d!E&Lj-i!k&fGI^y;>Aw$4Vml^+&eHK(J9sm 9!dzcr1z&glC!7$~J5TnxK)c2}=xd|dBW z>r^m*8hsfqANXHXRpwpl_4ZcS@L)o5Mt~)1AbRhnUfA$wUza9tPM@0yv}YS=f}w@Z zgUQ2GzAk
gI%+_7+0QHqZ9#S$eK`P=RK|I!{UcZC!Cr40usZ7OWe~dm$rv-;6 zsK*3J`+&Zf;b9afBtbdJLs3X1;wMn2?2V%n6)$xmg2QF3-G+dphCqNR%E*%r6NQ+H zpsE*9jSg~w;F)k*2C<4DoX$moagY%HN<}h;RVbEAW5m^_f`6jd1T2Gs=+n@Ir3Ipv zlL>jT5Sg?fH2DOxewz>7O3M!=$u@!`%R*bvB3T7sLPm}t*#v$Up}MS~DWryZQg`-? z$4i!uQ53!d!}K+IXUEotLG!8jouD8I9n4OG*_;Yz_@JCScYY*FVHRQFmYpCmE~K*_ zi%g>GITR@x7>JdP0L&BP$dK}ca!THwTi|{Y!3+k9AWImO6mx*d??cW$4hkIG!D69t zVBF^ovG;-SN{MIS1m^nfeCRQ63HAx2IRbcajnKhx9bpND<3fCHt&z8Z)7MzJQ`Bvp z_g(YXRNcOx+XlpMpX;xCJp$DpCrvLd-@fFPggPMB{S8}xE78vtYdmfm8JPaT9$Vga zd&5)aN!+>IZ0QngzWRt?L?((|E*(1?FN(Eo-)yTs$Dc{b X95z!4gQHU>+Y_%SC#DfC zm~Ebab&o2+ft(hnLzxatC1-*^h1(kPu679)ix6vz+HEk94uI7j`&1tkG-z@N){EcO zTd@(Rkl7@LfbI=x@?`kguuzU1I(1E9@bZM6zt-r*!?x}s{W?Q^=Jx1|?Y{na07hnN zwVK+28t!`^4j=x_Wt*p_hF5dX-f}4b qYbwpL-) zn32!;EX)K4L9&Q(2CX7V-LCEe*iCFC{FZ7q=^XYDjUVaR<$K-Ejo9ORfBX1I+^b(C zP1yjj%?VlM4ap u+&R3bI zw)SZH86I}F23Xnn?6gm+S-+65M8F{Vlk-YP4A4Z>Cx|1~$j4s6!?#bSM^$G0?S-$T z$u1yCj-rm(LMv7Np@V}?niy+1%`4WHx9E-s&W*i3H|-=Ew(^68tbTt$6x0%q1EFT} z1)s>OUvtoU%K%S}OTs2GIEBph2uf8~I>F0IrX?aW27pnaZzC5O5u-0rZQt _^AyQTnZ HP$!}5}gM{C- ZCiLLfsl81h3cZ~Y#H^~GuiLzJj;9ZH zx8kOQ_9t>nXja;nW#Sye$#(;H#>p>L3W&s~H=J_q>UkmhhQc|pIrmrIgY%;^@ooaL z*)tx)&)+D%y=m s&$VLfN(Mq0gSSzT2y9vK{*IAF5z=3&`sSz)ZatY zD}b$4LDa8ANf*#qwkmS+ pT7+Zhhel1w z0O<$^ESM(9ck8GV*GLC8yo)H8n=tkgvVrD>6XzHlP?h0Cs8AAptCSE!>@@2xlX&r_ z2q@G65#a)ok(@FZxL`6%8p`ie62TNozoA@T5#$kFzlnk!le1Bal 6)D+h%IS=l=MIet9R0@$~6wy9eLE(sOD{lZ|@fkA=bmtm6xXF-`R z2kQhaZ6^Z@ln_!IQK9n^rXUE7rVRDPF5uT3VRBtYOamd*wtI(brWJ;wiKjIS`hu(P zhSmb}sQnXk(c1Wf%@`x$Br`7e-mvq%O7doM)f^8bgA6{UUjZc*7OGzrx37@g5GcQh zzQzw8C1+vaoh)w9 jo+A<~nVyo-;n zbXWaAvh%7Xo3PHaw#s?z;&G6+8L%5eV7Pd7jxR88OEbt8)4o_}IHHc8k5NXAOB-)C z>fI17Z+}i{Ace4h(t$--R{c44*-`5Rw@#_f1&2`tn?tlB3gj9-q5{qw!EIHrhEtr= zD^>BF|3+CNo3{;GlU1L?w|e2Q-YSbZDO0OsliXX!w+=Qdx&CtXwVEsdxRHGuwEP{i zzEqRpB}X@nB#4T&f?Mdhyo=~Al^xzIG4#!`&BuP$mh&O^h?U+^rG>l|xTvh^SP-Bg zE@Qovb#@M*Sh|~y{~kKO^ewG4`nZ9kCEj9j-k2||=sM9R@&F_4Hg_oNQ0{N>KLKIq zKmfafzY~6VhK{zr=W>1vFy}FRU&mWbgHD%fczJ!5=i@$UHh%2Pt#=IO;9K#!*YvG^ zxH?cOD^)gY#zwgI;|k&5&0*A2pIpsJTc2&sv-GCNr9z2|7@W-xddV2H=TAntN`p6X zg ofdcJNVxbT1k15d8XFvW|Jz)AHw{7 z=<)xK(BA6W$`BA(5{tfj=6HZptur|Ja=aC&V13inrUo5r2 z6cCy=o0-nn%{+OU{{U$W=A1@LyG&~1OsJ{VL;MHgSKGjqzMyHME!@{ezA4$lxRkD& z)tKoiiTC_)<&U<&p@hvMk `IBWOIRyrTV*msd5 zAx)n?k@E@Zj9w~a<+uxcC_(V;nkdgSFl@O^^SnuvYx=r30K8~2+1)5693p6Gq8~mB zvI6KY*lE!y4XqaSaf>YM3dAU+B{Ta6{ikj8u<8o6e1|M4QDXS%OeLZPFW_J%7`Y6d)|@GB9v0yj#w#hGq*DI3pzyr-7sAF9uf?hkiPWIQ*JM#RiJz z^kb7KTUC%`(5(~d!YhzN)kk5%t+9xUbS_sJ@}Q$v_*%Zgtl-;;N;Y a`|qR+j@$(gX=A~~TX|$TTTO;>e&S?>ViXXb!{p#kFVU17kSyHxm5gTX zQ$*OavFMpp2f4@b?Ms5Uw`zrBumRBpqEv{J;%B6pH2w@`aT;SK2Uu)8b~j z#Z4 zzHRHKl P?Umr1_a{R-%g>3kf}omk+UIr za&^-!!q1pb!{sjaKCV~D`dVn<3R1P}+QHfyHApszV1qjk_zNhN?IEYie1JjF#Vm9D zJn$Vt+^opzZSLVN6mHQM$0$UTdtLRaIB{mJA8zaHts|I1*26mMi)fg2?YRDiTc6ty zC 4FI=W!qi?4H;ezF81_~)c @)1fs=~Y;pr{;ua$fj!(uCGsC2jYqM6EZ@%8|cjL0#uF#hMA64fR z9Y&yb?Ko*{Cyi~}R%6??HH~efv2EM7jmBykH~MqV`ELHJS!?cRUhVyC%a-v@U;pKF zJ$^ m5_C3K$cQ)fZ8?W)8Zm4oVy@dIp8IYG;0} z8|S T&5p47#QW z!IDDN+59cgey|#XIn^+=`B;$tZOGY|Igoh4ufJ;UhtU7s^l$g{Urj5FU-o}%nADn< zU`7h#ebUh>Z5rnvtL<%lEdRkFwT`uVYs^m7R019ZnOcz*EmOR984Y)P2|Z7K>n76R z!66#&dOO3Dk&?2NY rF!k3_b}x0 z7PiSb7rk28a}DOaL!E@ySs}5&1}u>24}GlHVGCY#sJ^+~mo^CU=a2mQI*PvQSr@_3 z2FCbLkdvhkOM^j`r_kSufzY%hnyBJM1;AEKTI2zjA+tG$C*>MdFDTt&q2F46Ye%{33joij z3-9^t@ZaxcDr4K8R2(_v#w*eiS$Dtqqf@&xLvk;YH#mOXRAWArY)C60*wbT%49b4X z#A|`D+Dt35p!}o!ymsbn-PgD4^R2GEy#rrUcX%L#gE?C8?d5p%qI3KGu=B Cn;s>z_)!OlGIr2Zk6E-N?Qd=mETaLL^+@;d N_b1J*z)STq~v2-hQqJpYKP^Zq oDfWD$(B&J zoF7|cIUH#DuRZNALz!L3Ynh>#-hN q2)#mI0Nex}bWe 
zS^J9^96wQ)76>I5>WmlsIL>llnc2I=sVoa@9Jxv)2KQ5vX@0M~NUDlt)l@B8$7q^} z(WD}>$e~UG)oWlZ&lTbe&X KM0ii5f(|O?fk$2q{xidd3aYo11nT4}iki6A9(R3;vSuX2#!Z)rpp4fW zuJ^ZZ+c_m#f>TyecjhaIm3KU=!)O)~h9&;==q)YBU`c$7Jz$Fn6i@a#x&UezkI?FU zYJEfY*zdFMC!&z}Z~VKeBlvt-gy17v^}Sr`>r-$VNuDMgLW|a{zxb)$pE)e!A&UiA zK+HVsRhQ8dbFJ>-n1BBO?I(WX# Z!HvT)c^ zNQo5pO$l(EfHFa@(UDYGi7YH+0xs7`MDk-o0-1kWh2WB;^C-Oi2&i_tu(P-Yu=bD0 zF$gptac6;<(NO-FCioQC1p>1sx?DqkfbmxBEwWo91-s^8x3f_TOPolyI5xCVj)nAT z-~PQ;^;C9Mv`KcTaHLB%bo|iblGVNi8> wQq0)F2&5FkRfil+}7IO6Q%r;IESD z;o}rp;gIe3* -Pb!4e^h)-5gcH37fc^dm=`L aPCLXxAbkmRFntSjqu Iog{QvvteWj=;~u^pLg-ukZ(9q@^3Y z6BWEuAWn;Bu$3mP($_Snr390id#Rv`5VQoUN7^e>rCK}BOFB7L)k@SkBM-$aMSeY3 zrd`spG-OxN*mD DlV$}tyD6w|D>+F^51YuKs3t4yK4FY}r=BH~0ivKE;R z I`C&PKPxt2a>s8#J zdo$@G{fMl}*J2xrg1j>-^MmKJ{4)a~WZh+h*vH|ksR{?c@BLR=0e{u{U+Xl4eP)-# z_B(yQ*(T3k#pT@-5Tfe($3f>+N_m-?eo4BYWlAYYH00=YN~)T$TK>AVz@3iqf~i>1 z)Hr7 @Y`#^M@)}J7Tsi^8AG^E9FN)dOXufphNFy& z2wD7SMvs&OpB*nTP3Swp27lUt^(Od5o>5SooII6UIPNGWFsQ{K3j}6T_IJr;p~>|! zo-6_RP`8kZG&A8FpP~3-+Uz*+lPhshgXd8*P-AFQ**K*ZQz-&7>1x$Kuzz!17Lfjo zyAZ44b=J~^be!oW3i$fdW)}j&i_!(Jw!+hi>H0s9`}=nvD1_POE#nVIGK`ZUS4UGn z4P@by-K0n?h$D1yDW#iEqXpA-YFHg*E~ 0_WI6EP z_o^ubxK0=#Wm4rreywzUhAJ}Ltp+LmO|?Ko<^E@)wmox^DEPmITGP&EOX5EzxW0_6 z3EANBojd}v7#)}DEbLxKC)#_XarmsMqNvhN=F`XGCfY=$I(d6c Cq^sc{>&*K{8q`Z2=UV;^c!Wi4N%y|C!7n0 zP?SGm36_zf%c#L3M~nq;GslV$BvmI~XQ3K~7Y=SBT+f^g#W%BLmrS0!h8!b(4{y## z;`nlO^oIi;7fHI|E%*5m`?V-+@V(t*aC3p9yE|`G&^QuaV3zVg)>1O%i18z~y5Elo zP!<2gF90F&<2PTfI++sK0;>E2tlJpI&nCSiZSwEgwR()c9j*sMnWr3qif0GtbbJXT zv(K;}?G%aVnK^J*94NdJqs+t)6~{mbPR#dAJ#6h(*Z$$$Lz7Cs?@uqh4(jAK31a^` zje{td-_vI^!{}P1V5f%Lp6Mx)7WJrO)V^H1=Rs=q9N#}4VPlOx1cJ|VT; F&dI zH@1GSP$a?6zVQNkKE{|joel}rm01;&b8{H-23SEM^Dw&p$-ul_#!>P1rEYtz-P8a` zq2T8S@(q%Jh9IE{;ngkj(4*k;61={)tcV}GEY@f WoN_rQ@Nwh(A WHsTykme0z|{WfV4r$heLlBpEdH(eZ$@hrhIg^;)*YbZ0z57)P-f=Xky3?Yrwt zT>t5wTIAFjS;k^HOR0*ff}lo*itI$T*A9#1pX{0KNo@7-?xcb=Dwhw<$@A<_3U7(i zfh#YPU!+Nl*?d5H02NliZoA*x3~|uiL2cTJp325SFD5B> z>fB<#JvUXcY*s_%n~a)1<*JX}2@9$aI>_sIBHPOHr`l@2-S1hD56o@?kCiP0@1ObU zU6q6E=CYXsgUiKVy{zd`j`ks3S-Q5?7yGICn#riB7##qi8OLY)_wb$2R-skXWY@FN zsduZ_Ls cLV3Ju;*=*@>Qt$zl?+r- zA`>g)@!gB?kT!Vz%HfcVj4|$Jp_!k~jz`o*!*!?lm2MJE=QFaP d~77B8??)ywf~yym-U;UrC;`{PqT!|TNQ$%osq$*g^4$4u2l<4#l7L?DqlNDK9A zKKsKvq)=G%($cct@|V|o4XnWOoH`Q;8wmA8U7WnR&i>VlsERHmDGi4h_?g0fRjL+9 zO_EWh$z%}WFi_o>>^|R3FtJjFSz@OL0^rWJ%mLfpO(EBJ51T4A{opECBb=S%{Rx`H zG=}bHTD_Vb@+w(75DlEwsK}FA?N$ -WogA`F#JxTLU&pCXmH4f^_+BX3yiGIzI%|fQMHnzseYAUtV8!R4cp*)AXmoc ze87-O`$N&*xb7eolgg;{U$Hk+N%`+PmGMpAQMD2q1E0=c8w7=GtyhbQ`!TrKC6RKe zouLvz3bxr&^nUQMLGre7KJ->Ewmd-Iq_zztGEvk$ivIJ2hB0XE&*AdnZTlFWSev1n z8)RJcR(9fU7M3g|x}|y?y46v0r$mN`<_2cL3Z;FQXFcu=^3d}S^?jXAH?IEEjDa`k z&-X!U4iU078HffFv{6SEc4u6BqbG*pzv* d%U1CPeKz%CJv{i zFH1ha=J@u&3WJgy;N@A;e&Gv&6|5|bJODF-PorY2QrvG6i@rSZnGUNn)vpsmon)Si z2*{BLe@hIcpbw^iuM1fcktq=)DtPvfu S*}n>(MYV z89o6VoB>#N$;R{IG**R5XLvD)T+c|qLG!R76%T1dGmLdnBJ=8*6%yY#t>eV2h0Dmz zdvJtQ OtkFBjMj>cLsM4Nc=racZu{`R)@t;!Co)h}z+{@Sndl0fUN+J&@+S26I ziHRZ~M<@$oROllbIYu>>Oh0b^D~;aL(zDT(g!j9tWol*n;a1=fBjx*FL5$4$hys(_ zO3iA1$ik<7jNV4tug ;ixrmvt6y-W^Y!T++-V{!fH+Q{SM_T%wx`{m(ixCru>82ll=Q6^x|^>KW8 zYtcN-W#rr}oOP`6{rgUfT}_MB;@i`$xC<>j0%zFP=1s1{_Z@xxD oB2{<3|#vh~-NYIRJ-C|D$qU^ 6ppz-tyRA!|Kk#0zm{u5^_ z&{+(ppOZRc@j$nntn{Tzdtr6dva-DeviQ{tqlo1PrWNtm6N&JY3`W@|2{ZT=Rb%8L zeS!kK+hVl-l2S`-i+K{QYjhSV!#j_ntYQ3uN>BS%0|y78r)kJ24x b6T zp3^u|Z#>R}D>_#PT%w9L>WhZfOA)eg{Ny{rLA&Tbo5|L}@*yd}JAm%|LrRv-EJrg> zR7@X8O;mnMMR44sWS9q|qAm_IEh{|75?>lj5@Qh=5>i=fk%XIzFfUY{_%Hbf>@XW1 zfIK1gD3NeQ;wpuX^9_X;A~u?#$e0q=SPD8d0|&fSK4h32h@!FJfT+z4U!vV*>Z|^^ 
z5p%(qDmgxG)}SsXzU^QcV(zRY`O?(W@(IQX@rSn=n<|9%L5T} ?26PY>ZV6w%ayojU{v}YSZ!AOPUQWA+C zzMcP@$=3b_yxi>xW8KEDrWVCpv&512M2yJ{Bjpsco9Q{q79t^NaX_f~#A9P8AW;*S z4%sZIC^0dv@4K>*u-oDV7IC9`^2DUcbSjb1`pUNiU_d14rF`8rv8|GD!O8&|3NAh` z$0+?w{N-W}R#07@U r(ga8J4f~8ztIx6 zhY}MNYlQ3bp@yKnAKkw0WN=T@Ng&DdDTF7|@GHuolzeeViKEZ{%e7w&3oDhh|50qa zQ 9*oF3o8&%xZql;j5`B2 X{CqHJzZ6rA@m$yahm?#nQTa&K$ z#Dyz;lfU8Hp_)^aKfDS2u8^fv+@~~wa)G)K?ry)2BhbgtEbZ#DM11|Nbm_i&-n(Ys zvs4{dh}pUO|H-JJG~LSkWkQmiiiLpR7RFx^s;anZp(sSn&V209pQ)BnaN-qYN247d zPkPD?NnDAz)K~GoPR6%{g!((9?GyL&Q696qz}o|Pq}xDXVfl+tR vcJVh!|B*MV63lpnO zaI |91$y!w6 q1#wBqPGJn QK3HWEar0!Yz17wv-c;n;tb0* z9{D}MCQ?SCQ~2cx(2ZiYi_x{tGZ&*J-xvEG*$6$hYS7-+*2+#GfK;67J8WCnRgb_~ zGYQAZeh5+QR8lBP415_Jkd?z98FRCQj1P@C=boaqNx*#QO +O+&o&D9>%owK= z)OU{K3a}rhJkzdzdw`Zl)THuOo5doCB#KzC4fyUhWKU%XTw8LMBn2}Ow?{nLlB(k> z89v|iAUM{G-xfMOCM}7cic~hq)7f=G(W4Ki6LKngCvyzRK>pHLboyJVm!a{i)B9Wr zCU<|knX@K%vk)%@$G+M=ZrkrRiv~W=)}IwW+2q4TbQ1E}p@3I0sX)wRv^L&4iiGMI zWhT>SQUY+opQCHGDJ3AtW`8TnY^Bn1O1(}~;{Q-8C`%~N`HG~~&v1g;!`4dKgO}Gv zyuc3jM5nuq(c`-Fgj=iRg3A9Q5;8H4NS2a9K^plj(WHo|4-qDx9BU|z0uGaYj&0}_ zJX9b=Sq@{ucmtrWaKlW*X|87bi#uH#X~Ifz{z7o05Bc+jB?**qq+-jU%Lp3%%O9gx zUO2~7Vx(*9umw@da-?V;v2FwjO3V_XEQ|ng^hgJdyw71eB>KIdCwQ+<+vH~#0aCgH z0)dcX5fOX+)c1A TLte*Ycs zh5mG7t)~s^%(~;oGlzHVpw+N-_UsYfX8r26_v>_#e!d)dNPu(wN5|l6_3Y90+XpS0 z*(R6Ug9xzv^8+~qHHv(*;QXp*;8CCNrf=o=sg*>>=K{K`XQo1^vYg%S{8Ard2i?;v zOwP$OA`My|)ssw|Q@~blF3#2_xJ3V+kqrwEsk=4CwpVVL#H7RoFFz5c*+5t5d!F{V z2|ZVU1k$?`eK~y}RSI^IffQ`W&kR(ru$lJYj0J#%dkN%}Q>5lTzdz6(NT}_uvtfL+ zxm)LcCN)q+w1mjro{2=A*k2u5iRs?p;zHF{8Ho>#&CcaXTHxzm(SfBlCEK0hl&D5` zJ|3dMLrYm0tA_WLZFUMn(X;6icWGc!C7aJ^>TsrvJ$B4jcLXy8>o`lr=8LBYL1v!u z0|PcBP+MUlM2%7=MWQ%%!AKj+&3-vmPAfs&i%z#dd!r@V)>?v+L`!>Gk*h{<+QSX% z*5dYzK)=$myLNLL<{IqYVpJfxD3+slv$Xek=Av_rJ*@JZsez>y54ymqsVR9*=>q!9 zQ!9v%R_Yr;vge~`>v^pBzkWT^jQ)E^w*};~zVaXIb2)f<@NW!Q-8r2cWj*?rN@cR_ zt`fX4u4T&5qBrvRb)lsP8b&D3cVFn| wv4E}ki1_p`V z7{Zeaz 3@%}%R z9j3iXciLIphIA1!QB1FFSYUIm%7o=%u%;f&R_ec z*s-#}z$?#dJ8C7OZyXs?UpRzbVfX3hj&!BHZtfLqSup{sIMDdc4}1VSuVAZ}4drbT zAqVm lCY8Zc)lSP@4HV z6rMo tWJ{-SXGkf_8Vkp=dt_<9{ATXpU!tI88xo?WA)~28=og_f)gv6(BPvD?rJ-7) zIr9qcM+oZ2=zkTKZG^dXW{Qjxz_u6`AifA)s+uI%-++^x_9+T*L={u1rzC8+E5kin zU~9XBE|EfvkT}CcNo9&Bo#^;rDQhV`Df`_tz*G0KDs~6g#|0)c203x?1~+PDHr{K| z V&70OOJyLsdM+>|V-;W!R)k2TUdav;FB-i1CB>wtdJgMirEEy#`5 BbB3;`9#pD17bG`B)ftof}sy^CD$_g1R=9TCJW6I(|~ypbkOj{k}~7XJ^lxs7(uk z;ZOZ*QRC?KH3(u@f8m+Q2HL;dc^+Ch>McpN&ZkH}4xJ`I4>rbr8YuoUq)8LCxUHYYjF*U^GDBT4{^N~qOf56Ul*Q#K+(lF z$dCU?y0m2f!-D)W6&8rA5+%6qB5i1bMc|zSpiZJ+NS#dyc{>wc5}4 +!Q5|ri@5Wn>>t- zgCE@Oh4Zwz^B#%(S-L&Gz3ODI%^n*BM5+2&&cY3aSC=OG43%(0XHzy(4Gq=|f8E#s zt{BrIJL#32Be(sh^zMj$jOf1j%hK6XV{w+Kst+FXhaD6=w3Qp?b+Pil@FuY6H Mb(xQep_?g{x(JjELL&98Hs^*QW&I+`e9S!hoT727 _CCGkQ=|;N7XrG!&@yN&Eu> rZ(@L!6&)f+CL89_+ LskW^CZ?GV{$eGPQ`V_1p-&2`SpOgu zl%ucS(G?m9vD70zcUnkwY)d2XLou=fr}6g=ZU}N4e2~j>gb*29T@Km0MHPKQWqw^N z<83~)9u1k$X*u;ydo=Di#%d(Xcr;*w#%nM@t4w3q<}y>ZH{&FuhU$i|xui@TQ!hhJ zpN?t8a94B7gx~{jSI)8k^Yerd1W9Fr$^Mu71Q%r@zB_d468uP)O12X~MVLhMO~c+- zMY!52(~2~Dj@pmap3XHqRn=M6$cDM3IdGfD+LQQ>6U12)aVDySZsms%@;#OC+$Sl_ zzW~U72#EbyjkizlJ7qdc( QH1T`&Rl*og#T`Vx&2ZzN{M2 zN?zPUwuyzJ*l=Um&DqXCe-YCL1 fJYyI9tjuHB{OHA8(7>tu=yPuiieb8}|-jC*z}8%kDn&+ivf}XUl_G zemCbVlxdY=%j?VUSF4?k7n4!3Vv#5`V>rgEy>GPz*Td2uS3a#*{}veyzIIc-ZTfb( z{_>skTk49dz-Rpr>$}_dwcO8~ULA*))+hO}oE_C_PuCs)C%cXhw8eYy_KVLA%7 x5pW2V|i+%h-9I>EgCe05O;ps~1a4%(qS-IslQlY@)m~4o@ z<2&d9^JOjjU5VACE8i*NW-nf+{~dGBg|*CweW3cC0t@#_5~qcrmc?LAPSjaxgO@P) z$0@WQOX~&f)}ewFJ#Ea$=(lqsD5Isg#{-C7#)S0Njf&UL5s*t!)Ku7&n&cZUIaJ5j zhFYr0{1CVxm}<#!tOjUOFdGx4&{%HB@bX`)mm4iLI4Cx`K3r=pqEunasS$FpZL1kV 
zVLlr>K>B-$J#DOffDSSR*95tokUEuc?u>G|YNXaCrg=S#4BhWpR4!Tjo4y!kh*4My zGxTssOVJ3_Z!?klAVpx;;Go*bh-`?^9*85qL&!u{wZ@FF+|f)Min5s=`eb&R%G7a` z`MeO|!_Ad^VFpkWFL5ZWV#J5#p$F;==WGV|fVPaHee96PAQNgh)ks%s%y(PXL<1)Z zjt!$B+5Y?x;)xB!#-+SpX(BQThU0ml;{}VGP!-c(8wC-&^BpY{v3LY53Q-G;W=NQ& zfOLYp1kT?$vdwL<7FF|&PgM0ncCyfvjpQNbD`eQXk-~N)f{2(+5m*oy^g=9g*~jH5 z0Q443vWtQ;7!88T?Cp=Cf$%-8EohTTFn01R#{FR6@p!xgf%Fx53Nl|s2N@rmtiJ$0 z2>N0Ua!Q1;GZP 8mAqmuufHi>bV-i;^PZeb(SGxPwQ$mYivxZ8gcb%8U{@w${fZhT z&=G<WG!%pT`6jFBlDEFCdYL4Iul>q4hX`mYFCafq>#^oU ^I!p?E15y1wuc3 z+s~1vmNuaj(Z`7!8R+dNhuUzR==9L sF&?M^DuyzKh@m;wFMVxPtXT0OG43{7!Vftg2HY z4&PFlNXOI~RjT$-GWj~e5v4?mJ$@%;3CzjzZcF}hWUuhoX_TQo6MRI9LY!P2SjjAJ+Xci?|4>9P!gCz0lZDf zOTTnZCgfuVqQYOE2&sJTTLHe$Ck10@%3-buIO?JOv*XGPk0Jps>fdARaoF(oyo0 z{BWSt)JfUQ9QIl-t5*9l+nZN$3v4sCj&EPX+!dWWfWu!Ns8H!CHX26bF)IWi+mNPj zfF38--K+lzS`1_7uK@jhSq1Mke=9&^c3mZI>w4ob9~=XlA;-Jr0mLq2Fo}2Uj8sw% z{YSY1q M;W*KToa{U$a egd#N@t^xJhHa0 ztv#_BYB+ynP5y{`G9YAdsI7&f5xE?gO5=;J73A;c^|yc9Uwd-#xLVlyZg8Eu^X`Aw z*4@tgrpK)8 NSFQZww6 +a^ z Tp aUzJq1+nw!OQ7DEiV6T#3omkReYu4XaR2 z#^i!&jR$X`s%|!PcQCJ*T4T~xCjo%b0bdlt8&z
gRguhJ9Cc4*O_G!j;t&Qg5u1KM8_t}VKgBvnvaQ584>Beb<0sI!XP&JL zr%?uFPQGhnJa9@HpK}D8r `V>#Ilhr;+ zYAq~8kr=;A3E0^9qI8{De=I;>PHsi3qa@svCnM>;{nIM+pk&WlYpb+YwB)Ty6M_A< z6}dP(NCg$*%jXgTsSf)cV_HT!b?@x+#UU9VmD`t%MG>~lw4s |5?@MyX&8|Te7A~g|?xQKaciQ%#K9C)dBaW%%;5QjSoM9R} zl#GZ5#ZGQH>|8eS3?3y{5T>=78BZxkz?=9tMpM~J$r<%p`irOXTqK+{%k6RgP`sU! zvi3 Q`4eQ _fP!Nh@iUU*{_9u5bATp4K>-LDDSTh9`b2_Xaf?y7?Wc5sqmNN~|h3 zEmO_b@ZrCv2MZ9J29q$pY+$&ZkTd@DUME#% |Se6ZcH8}NO(;PV?Vjn&P0!tVoo;@N#w}uPymI)$&tZR|~Ac<0nxa>GXaE_^?ati`b*jhgbxgMT z+*?L)XM5)T!Q4|LqbuWNDxPRwh{*PKy}libI5SPl))1xt8xx|Lkcb2eztIis%;=CX zSo?`X+ZZ^htw;Yttr>|~M1zuQcAi$C^cmd`>)wB?mR~bgS{Ws#Mb}TD0uuvr7F`N( zri?AOiiE>1A=tDL3Ne5}b)lYB`48FKAg+P`YoE|Ft65xCU11-!_tXK^{~e|4otIEm z1s77@!f)2m`_$haNoRYkVf5B?MWBim-=!Ds{HDar_A?d%e*FX!ZIhQ8clze^Igj z_Bvnr^_lOC7I(RW4uu!TzH(*<(xEmmzn%8Auj9c>yvPz=*{heRF5f4_QD tc|(rKs2VPGssP&nSx`BT^^2?K=Q1*r74n zP4dee?|986Px`YRwB}Iyk4Yb~?AHwiGYd3gS0%9O2z |XVobQW~p){-qxFLYIDy-XVoedX6#|>?>ZXo zjgP-~L#cw 46ekk%zRNF1KBIuoP1Y$&va#z@wk)O8Ft)7z>lsV` z{|hCR{edru-1DlD7jcl}bgsADkdeEmVxM4EjnJYvCZbs-*6?>%1wpFEcGfb~+!Uw( zTE OyR<~B$ z)3I(C{L|CwMJFeqzje)6Lny$=-Isk|)tvX39)YmI^x