Commit 294

ascoders committed Apr 15, 2024
1 parent ab94f12 commit 50398c6
Showing 3 changed files with 277 additions and 40 deletions.
readme.md: 3 changes (2 additions & 1 deletion)

@@ -6,7 +6,7 @@

Close readings of great front-end articles, updated weekly!

- Latest close reading: <a href="./机器学习/293.%E5%AE%9E%E7%8E%B0%E4%B8%87%E8%83%BD%E8%BF%91%E4%BC%BC%E5%87%BD%E6%95%B0%3A%20%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.md">293. Implementing a Universal Approximation Function: Neural Network Architecture Design</a>
+ Latest close reading: <a href="./机器学习/294.%E5%8F%8D%E5%90%91%E4%BC%A0%E6%92%AD%3A%20%E6%8F%AD%E7%A7%98%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E5%AD%A6%E4%B9%A0%E6%9C%BA%E5%88%B6.md">294. Backpropagation: Demystifying the Learning Mechanism of Neural Networks</a>

Material sources: [Weekly reference pool](https://github.com/ascoders/weekly/issues/2)

@@ -338,6 +338,7 @@
- <a href="./机器学习/291.%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E7%AE%80%E4%BB%8B%3A%20%E5%AF%BB%E6%89%BE%E5%87%BD%E6%95%B0%E7%9A%84%E8%89%BA%E6%9C%AF.md">291. Introduction to Machine Learning: The Art of Finding Functions</a>
- <a href="./机器学习/292.%E4%B8%87%E8%83%BD%E8%BF%91%E4%BC%BC%E5%AE%9A%E7%90%86%3A%20%E9%80%BC%E8%BF%91%E4%BB%BB%E4%BD%95%E5%87%BD%E6%95%B0%E7%9A%84%E7%90%86%E8%AE%BA.md">292. The Universal Approximation Theorem: A Theory for Approximating Any Function</a>
- <a href="./机器学习/293.%E5%AE%9E%E7%8E%B0%E4%B8%87%E8%83%BD%E8%BF%91%E4%BC%BC%E5%87%BD%E6%95%B0%3A%20%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.md">293. Implementing a Universal Approximation Function: Neural Network Architecture Design</a>
+ - <a href="./机器学习/294.%E5%8F%8D%E5%90%91%E4%BC%A0%E6%92%AD%3A%20%E6%8F%AD%E7%A7%98%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E5%AD%A6%E4%B9%A0%E6%9C%BA%E5%88%B6.md">294. Backpropagation: Demystifying the Learning Mechanism of Neural Networks</a>

### Life

@@ -86,31 +86,32 @@ neuralNetwork.fit(); // { loss: .. }

- w: `number[]`, the coefficients w applied to each previous-layer node connected to this node.
- b: `number`, the constant coefficient b of this node.
- - c: `number`, the constant coefficient c of this node.
+ - c: `number`, the constant coefficient c of this node (this parameter can also be omitted).

We can define the neural network's data structure as follows:

```ts
/** Neural network structure data */
type NetworkStructor = Array<{
  // Activation function type
-  activation: "sigmoid" | "relu";
+  activation: ActivationType;
  // Nodes
-  neurals: Array<{
-    /** Current value of this node */
-    value: number | undefined;
-    /** Coefficient w applied to each previous-layer node connected to this node */
-    w: Array<number>;
-    /** Constant coefficient b of this node */
-    b: number;
-    /** Constant coefficient c of this node */
-    c: number;
-  }>;
+  neurals: Neural[];
}>;

+interface Neural {
+  /** Current value of this node */
+  value: number;
+  /** Coefficient w applied to each previous-layer node connected to this node */
+  w: Array<number>;
+  /** Constant coefficient b of this node */
+  b: number;
+}
```
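The types above reference a few names this hunk does not show (`ActivationType`, `Layer`, `getRandomNumber`, and the `TraningData` used by the constructor below). As a hedge, here is a minimal sketch of plausible definitions inferred purely from how they are used; the real file may define them differently:

```ts
// Assumed: the two activation names from the old union type.
type ActivationType = "sigmoid" | "relu";

// Assumed layer description, inferred from `layers[0].inputCount`,
// `layers[index - 1].count`, and the `{ activation, count }` destructuring.
interface Layer {
  activation: ActivationType;
  /** Number of neurons in this layer */
  count: number;
  /** Input vector length; only meaningful on the first layer */
  inputCount?: number;
}

// The article defines `type TrainingItem = [number[], number[]]`;
// the class code spells it `TraningItem`, so both are aliased here.
type TrainingItem = [number[], number[]];
type TraningItem = TrainingItem;
type TraningData = TraningItem[];

// Assumed initializer: a small random starting value for each w and b.
function getRandomNumber(): number {
  return Math.random() * 2 - 1;
}
```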

We then initialize the neural network object from the user-supplied `layers`, giving every parameter an initial value:

-```ts
+```js
class NeuralNetwork {
  // Input length
  private inputCount = 0;
@@ -122,23 +128,28 @@
  constructor({
    trainingData,
    layers,
+    trainingCount,
  }: {
    trainingData: TraningData;
    layers: Layer[];
+    trainingCount: number;
  }) {
    this.trainingData = trainingData;
    this.inputCount = layers[0].inputCount!;
-    this.networkStructor = layers.map(({ activation, count }, index) => ({
-      activation,
-      neurals: Array.from({ length: count }).map(() => ({
-        value: undefined,
-        w: Array.from({
-          length: index === 0 ? this.inputCount : layers[index - 1].count,
-        }).map(() => getRandomNumber()),
-        b: getRandomNumber(),
-        c: getRandomNumber(),
-      })),
-    }));
+    this.trainingCount = trainingCount;
+    this.networkStructor = layers.map(({ activation, count }, index) => {
+      const previousNeuralCount = index === 0 ? this.inputCount : layers[index - 1].count;
+      return {
+        activation,
+        neurals: Array.from({ length: count }).map(() => ({
+          value: 0,
+          w: Array.from({
+            length: previousNeuralCount,
+          }).map(() => getRandomNumber()),
+          b: getRandomNumber(),
+        })),
+      };
+    });
  }
}
```
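Putting the constructor to work, a hypothetical instantiation might look like the following; the XOR data, layer sizes, and `trainingCount` value are illustrative, not from the commit. Only the `fit()` call and its `// { loss: .. }` result come from the hunk context above.

```ts
// Hypothetical setup: a 2-input network with one hidden layer,
// trained on XOR-style samples.
const neuralNetwork = new NeuralNetwork({
  trainingData: [
    [[0, 0], [0]],
    [[0, 1], [1]],
    [[1, 0], [1]],
    [[1, 1], [0]],
  ],
  layers: [
    { activation: "relu", count: 4, inputCount: 2 },
    { activation: "sigmoid", count: 1 },
  ],
  trainingCount: 1000,
});

neuralNetwork.fit(); // { loss: .. }
```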
@@ -159,7 +165,7 @@ type TrainingItem = [number[], number[]]
So index 0 of a `traningItem` is the input, and `modelFunction` computes the predicted output from that input.
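For instance, a hypothetical sample whose input is `[0, 1]` and whose expected output is `[1]`:

```ts
// Index 0 is the input vector, index 1 the expected output vector.
const trainingItem: TrainingItem = [[0, 1], [1]];
const input = trainingItem[0]; // [0, 1], fed into modelFunction
const expected = trainingItem[1]; // [1], compared against the prediction
```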
-```ts
+```js
class NeuralNetwork {
  /** Get the value of each node in the previous layer of the network */
  private getPreviousLayerValues(layerIndex: number, trainingItem: TraningItem) {
@@ -173,20 +179,22 @@
    this.networkStructor.forEach((layer, layerIndex) => {
      layer.neurals.forEach((neural) => {
        // Sum of each previous-layer node's value multiplied by its w
-        let weightCount = 0;
+        let previousValueCountWithWeight = 0;
        this.getPreviousLayerValues(layerIndex - 1, trainingItem).forEach(
          (value, index) => {
-            weightCount += value * neural.w[index];
+            previousValueCountWithWeight += value * neural.w[index];
          },
        );
-        const activateResult = activate(layer.activation)(weightCount + neural.b);
-        neural.value = neural.c * activateResult;
+        const activateResult = activate(layer.activation)(
+          previousValueCountWithWeight + neural.b,
+        );
+        neural.value = activateResult;
      });
    });

    // Output the values of the last layer of the network
    return this.networkStructor[this.networkStructor.length - 1].neurals.map(
-      (neural) => neural.value
+      (neural) => neural.value,
    );
  }
}
```

@@ -220,22 +228,31 @@ The loss function's input is also a Training item, and its output is also a number; this…
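The forward pass calls an `activate` helper that sits outside this hunk's context. A minimal sketch, assuming it simply maps an `ActivationType` to the corresponding function, could look like this:

```ts
// Assumed helper: pick the activation function by name.
function activate(type: ActivationType): (x: number) => number {
  if (type === "sigmoid") {
    // Squashes any input into the open interval (0, 1).
    return (x) => 1 / (1 + Math.exp(-x));
  }
  // "relu": passes positive inputs through, clamps negatives to 0.
  return (x) => Math.max(0, x);
}
```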

There are many ways to compute the loss; we choose the simplest one, mean squared error:

-```ts
+```js
class NeuralNetwork {
  private lossFunction(trainingItem: TraningItem) {
    // Predicted values
-    const y = this.modelFunction(trainingItem);
+    const xList = this.modelFunction(trainingItem);
    // Actual values
-    const t = trainingItem[1];
-    // Final loss value
-    let loss = 0;
+    const tList = trainingItem[1];

+    const lastLayer = this.networkStructor[this.networkStructor.length - 1];
+    const lastLayerNeuralCount = lastLayer.neurals.length;
+    // The loss of each last-layer neuron for this sample
+    const lossList: number[] = Array.from({ length: lastLayerNeuralCount }).map(() => 0);
+    // The derivative of each last-layer neuron's loss for this sample
+    const dlossByDxList: number[] = Array.from({ length: lastLayerNeuralCount }).map(
+      () => 0,
+    );

-    for (let i = 0; i < y.length; i++) {
-      // l(t,y) = (t-y)²
-      loss += Math.pow(t[i] - y[i]!, 2);
+    for (let i = 0; i < xList.length; i++) {
+      // loss(x) = (x-t)²
+      lossList[i] = Math.pow(tList[i] - xList[i]!, 2);
+      // ∂loss/∂x = 2 * (x-t)
+      dlossByDxList[i] += 2 * (xList[i]! - tList[i]);
    }

-    return loss / y.length;
+    return { lossList, dlossByDxList };
  }
}
```
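As a quick sanity check of the derivative the new code records (hypothetical numbers, not from the article):

```ts
// One output neuron predicting x = 0.8 against target t = 1.
const x = 0.8;
const t = 1;
const loss = Math.pow(t - x, 2); // (1 - 0.8)² = 0.04
const dlossByDx = 2 * (x - t); // 2 * (0.8 - 1) = -0.4
// The negative derivative says increasing x lowers the loss,
// which is the signal backpropagation sends to earlier layers.
```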