Commit

lrp eq
Xmaster6y committed Dec 28, 2023
1 parent ba93ac1 commit 7a16552
Showing 3 changed files with 58 additions and 31 deletions.
28 changes: 0 additions & 28 deletions pages/_drafts/layer-relevance-propagation.md

This file was deleted.

55 changes: 55 additions & 0 deletions pages/_drafts/layer-wise-relevance-propagation.md
@@ -0,0 +1,55 @@
---
title: Layer-Wise Relevance Propagation
tldr:
tags:
references:
aliases:
crossposts:
publishedOn:
editedOn:
authors:
- "[[Yoann Poupart]]"
readingTime:
---
> [!caution] WIP
>
> This article is a work in progress.

> [!tldr] TL;DR
>
> LRP is a method that produces pixel-wise relevances for a given output, which doesn't need to be terminal. Technically, the computation happens in a single back-propagation pass.

> [!example] Table of content
>
> - [LRP Framework](#lrp-framework)
> - [Formulations](#formulations)
> - [Different Rules](#different-rules)
> - [Technical Details](#technical-details)
> - [Classification Example](#classification-example)
> - [Network Decomposition](#network-decomposition)
> - [Interpretation](#interpretation)

## LRP Framework

### Formulations

With $R_j^{[l]}$ denoting the relevance of the $j$-th neuron in layer $l$, the propagation mechanism is given by equation $\ref{eq:aggregate}$.

$$
\begin{equation}
\label{eq:aggregate}
R_{j}^{[l]}=\sum_{k}\dfrac{w_{jk}}{\sum_{j'} w_{j'k}}R_k^{[l+1]}
\end{equation}
$$
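As a rough sketch (not code from the article), the aggregation rule above can be implemented in NumPy for a single linear layer. Here the contributions are weighted by the standard $z_{jk}=a_j w_{jk}$ terms rather than raw weights, and `lrp_layer` and `eps` are hypothetical names introduced for illustration:

```python
import numpy as np

def lrp_layer(w, a, relevance_next, eps=1e-9):
    """Propagate relevance from layer l+1 back to layer l.

    w: weight matrix of shape (n_l, n_lp1); a: activations of layer l.
    eps stabilizes the denominator against near-zero sums.
    """
    z = a[:, None] * w                    # contributions z_jk = a_j * w_jk
    z_sum = z.sum(axis=0) + eps           # sum over j of z_jk, per neuron k
    return (z / z_sum) @ relevance_next   # R_j = sum_k (z_jk / z_sum_k) R_k

# Tiny example: 3 input neurons, 2 output neurons.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2))
a = np.abs(rng.normal(size=3))
r_out = np.array([1.0, 0.0])              # explain the first output only

r_in = lrp_layer(w, a, r_out)
print(r_in, r_in.sum())                   # total relevance is (approximately) conserved
```

Note the conservation property: up to the `eps` stabilizer, the relevance summed over a layer equals the relevance summed over the layer above, which is the defining constraint of LRP.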
### Different Rules

### Technical Details

## Classification Example

### Network Decomposition

### Interpretation

> [!quote] References
>
> [1] [[On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation]]
> [2] [[A Rigorous Study Of The Deep Taylor Decomposition]]
6 changes: 3 additions & 3 deletions pages/_stories/my-approach-to-ai-safety.md
@@ -1,6 +1,6 @@
---
title: My Approach to AI Safety
-tldr: Shortcoming issues should not be undermined because, if not tackled immediately, they will make [Alignment](https://en.wikipedia.org/wiki/The_Alignment_Problem) a lot more complex. I am convinced that interpretability will be the best tool for monitoring and control, and for that, I will pursue this agenda through research and entrepreneurship.
+tldr: Shortcoming issues should not be undermined because, if not tackled immediately, they will make Alignment a lot more complex. I am convinced that interpretability will be the best tool for monitoring and control, and for that, I will pursue this agenda through research and entrepreneurship.
tags:
- AIS
- Agenda
@@ -20,7 +20,7 @@ readingTime: 11
> [!example] Table of content
>
-> - [My Claims](#my-claims)
+> - [Hot Takes](#hot-takes)
> - [Short-term vs Long-term](#short-term-vs-long-term)
> - [The Industry Pressure](#the-industry-pressure)
> - [The Curse of AI Doomerism](#the-curse-of-ai-doomerism)
@@ -34,7 +34,7 @@ readingTime: 11
> - [Brief Agenda](#brief-agenda)
> - [What's next?](#whats-next)
-## My Claims
+## Hot Takes

- It is necessary to pursue research for short-term and long-term AIS in the meantime
- Entrepreneurship can be highly valuable to contribute to AIS
