TooNoisy is a web app that allows you to reduce noise in images in one of three ways:
- Gaussian Blur
- Median Filter
- Bilateral Blur
Most of the work was done during Summer 2022; I was interested in both image processing and improving my Python skills, and ended up building this.
I'd encourage anyone seeing this to visit the website (linked above), but here is a quick overview.
Let's use Barbara as an example image; here she is before noise reduction:
Gaussian Blur was implemented first. The general idea behind this filter is to replace each pixel in the image with a weighted sum of its neighbours, where the weights are determined by the Gaussian function:

$$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$
Initial naive for-loop-based attempts were successful but extremely slow. A short attention span and limited free time led me to read up on faster methods, and so instead tooNoisy implements the Gaussian Blur as a convolution with two 1D Gaussian kernels (separating the filter into two passes reduces the time complexity by a factor of the kernel width).
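To make the two-pass idea concrete, here is a minimal sketch of a separable Gaussian blur for a greyscale image, using NumPy and SciPy (the function names and the kernel-radius choice are mine, not necessarily tooNoisy's):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1D Gaussian, normalised so the weights sum to 1."""
    if radius is None:
        radius = int(3 * sigma)  # +/- 3 sigma covers ~99.7% of the Gaussian
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    return kernel / kernel.sum()

def gaussian_blur(image, sigma):
    """Separable Gaussian blur: one horizontal pass, then one vertical pass.

    Two 1D convolutions cost O(k) per pixel rather than the O(k^2) of a
    full 2D kernel, where k is the kernel width.
    """
    kernel = gaussian_kernel_1d(sigma)
    blurred = convolve2d(image, kernel[np.newaxis, :], mode="same", boundary="symm")
    return convolve2d(blurred, kernel[:, np.newaxis], mode="same", boundary="symm")
```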
Here is Barbara with Gaussian Blurs applied:
*(Two images of Barbara after Gaussian blur, at two different strengths.)*
Whilst noise is definitely reduced, we also lose a significant amount of detail. Applying Gaussian blur with a higher $\sigma$ removes more noise, but blurs away even more detail.
Next, I implemented the Median filter. The Median filter is slightly more complicated than the Gaussian Blur, and it is non-linear so cannot be separated. It works by replacing each pixel with the median value of the pixels in the surrounding window (of chosen width).
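As a sketch of the idea, here is a straightforward (loop-based, hence slow) median filter for a greyscale image; tooNoisy's actual implementation may differ:

```python
import numpy as np

def median_filter(image, width=3):
    """Replace each pixel with the median of the width x width window around it."""
    r = width // 2
    # Reflect-pad so pixels near the border still get a full window.
    padded = np.pad(image, r, mode="reflect")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + width, j:j + width])
    return out
```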
Here is Barbara with Median filters applied:
| width = 3px | width = 5px |
|---|---|
| *(image)* | *(image)* |
Clearly, Barbara is both less noisy than in the original image, and more detailed than with Gaussian blur applied. The Median filter is especially good at preserving edges.
Side note: how we calculate the median pixel value of the window is interesting. The obvious approach would be to sort the pixels by value, in $O(n\log{n})$ time, and take the middle element. A faster approach is the median-of-medians algorithm, which finds the median in worst-case $O(n)$ time (see the sketch after this list):
- Split the array into many short subarrays, five elements each.
- Sort each subarray (yes, sorting is $O(n\log{n})$, but here $n=5$ is small, so this can be treated as a constant-time operation).
- Take the median of each subarray, forming an array of medians.
- Find the median of this array of medians: if it holds five or fewer elements, find the median directly as above; otherwise, apply the median-of-medians function recursively.
- This median is the 'pivot'. Build two new arrays: one with the elements smaller than the pivot, and one with the elements larger.
- Finally, if the left and right arrays are equal in length, the pivot is returned as the median; otherwise, the selection recurses into whichever side contains the median (the larger of the two), with the target rank adjusted accordingly.
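Here is a minimal Python sketch of the steps above (illustrative only, not tooNoisy's exact code):

```python
def select(arr, k):
    """Return the k-th smallest element of arr (0-indexed), worst-case O(n)."""
    if len(arr) <= 5:
        return sorted(arr)[k]
    # Median of each 5-element chunk, then recurse for the median of those.
    chunks = [arr[i:i + 5] for i in range(0, len(arr), 5)]
    medians = [sorted(c)[len(c) // 2] for c in chunks]
    pivot = select(medians, (len(medians) - 1) // 2)
    # Partition around the pivot.
    lows = [v for v in arr if v < pivot]
    highs = [v for v in arr if v > pivot]
    n_equal = len(arr) - len(lows) - len(highs)
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + n_equal:
        return pivot
    return select(highs, k - len(lows) - n_equal)

def median_of_medians(arr):
    """Lower median of a list of pixel values."""
    return select(list(arr), (len(arr) - 1) // 2)
```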
We can still do better, though. The Median filter seems to produce some weird effects in places, and is still losing a lot of detail.
The final filter implemented last summer, Bilateral Blur, can be thought of as a normalised Gaussian blur, but with pixel values weighted by both their spatial and their range (intensity) distance from the subject pixel. The Bilateral filter is defined as:

$$I_f(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} I(x_i)\, f_r\!\left(\lVert I(x_i) - I(x) \rVert\right) g_s\!\left(\lVert x_i - x \rVert\right)$$
Where:
- $I_f$ is the filtered intensity of the pixel
- $I$ is the original intensity
- $x$ is the coordinates of the pixel being filtered
- $\Omega$ is the window centred on $x$
- $f_r$ is the range kernel for smoothing differences in intensities
- $g_s$ is the spatial kernel for smoothing differences in coordinates
Both of the kernels are implemented here as Gaussian functions. The normalisation term $W_p = \sum_{x_i \in \Omega} f_r(\lVert I(x_i) - I(x) \rVert)\, g_s(\lVert x_i - x \rVert)$ ensures that the weights sum to one, preserving the overall brightness of the image.
We have two parameters here: $\sigma_s$, the width of the spatial kernel, and $\sigma_r$, the width of the range kernel. Increasing $\sigma_s$ smooths over a larger neighbourhood, while increasing $\sigma_r$ smooths across larger intensity differences, bringing the result closer to a plain Gaussian blur.
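To make the definition concrete, here is a minimal, unoptimised Python sketch of the Bilateral filter for a greyscale image (function and parameter names are mine, not necessarily tooNoisy's):

```python
import numpy as np

def bilateral_filter(image, sigma_s, sigma_r, radius=None):
    """Bilateral filter for a greyscale image (2D array)."""
    if radius is None:
        radius = int(3 * sigma_s)  # window wide enough to cover the spatial kernel
    # The spatial kernel g_s depends only on the offset, so precompute it once.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    padded = np.pad(image.astype(float), radius, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # The range kernel f_r down-weights pixels whose intensity differs
            # a lot from the centre pixel, which is what preserves edges.
            f_r = np.exp(-(window - image[i, j]) ** 2 / (2 * sigma_r**2))
            weights = f_r * g_s
            out[i, j] = (weights * window).sum() / weights.sum()  # divide by W_p
    return out
```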
Here is Barbara with Bilateral Blur applied:
*(Three images of Barbara after Bilateral Blur, at three different parameter settings.)*
The issue with the Bilateral Blur is that it is much more computationally intensive than the other two filters. One way to speed it up is to evaluate only a subset of the pixels in each window rather than all of them. There are different methods for selecting this subset; Banterle et al. apply Poisson-disk subsampling.
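As a rough sketch of the subsampling idea, using plain uniform random sampling rather than the Poisson-disk scheme Banterle et al. describe (the helper below is hypothetical, not from their paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def subsampled_offsets(radius, n_samples):
    """Pick a random subset of window offsets.

    Plain uniform sampling; Poisson-disk sampling would instead enforce a
    minimum distance between samples so they cover the window more evenly.
    """
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)]
    chosen = rng.choice(len(offsets), size=n_samples, replace=False)
    return [offsets[i] for i in chosen]
```

Each pixel is then filtered over only these offsets instead of the full window, cutting the per-pixel cost from $O(k^2)$ to $O(\text{samples})$.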
I plan to revisit this project soon; it would be interesting to investigate some new filters, and possibly to implement them in a compiled language such as C++ for faster execution times.
I would be particularly interested to look at deep learning models applicable to image denoising.







