Please provide a step-by-step explanation of how to find the likelihood function for a normal distribution with mean $\mu$ and variance $\sigma^2$, and subsequently find the maximum likelihood estimators (MLEs) for the parameters $\mu$ and $\sigma^2$.
To find the likelihood function for a normal distribution and subsequently determine the maximum likelihood estimator (MLE) for the parameters $\mu$ and $\sigma^2$, follow these steps:

Step 1: Define the normal distribution probability density function (PDF)
The PDF of a normal distribution with mean $\mu$ and variance $\sigma^2$ is given by:

```js
$f(x_i | \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$
```

Step 2: Define the likelihood function
Given a set of independent, identically distributed observations $x_1, x_2, \dots, x_n$, the likelihood function, $L(\mu, \sigma^2 | x_1, x_2, \dots, x_n)$, is the product of the individual PDFs (the product form relies on independence):

```js
$L(\mu, \sigma^2 | x_1, x_2, \dots, x_n) = \prod_{i=1}^n f(x_i | \mu, \sigma^2)$
```
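As a quick numerical sketch of this step, the likelihood can be computed directly as a product of normal densities. The data values below are hypothetical, chosen only for illustration:

```python
import math

def normal_pdf(x, mu, sigma2):
    """Density of a normal distribution with mean mu and variance sigma2."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def likelihood(data, mu, sigma2):
    """L(mu, sigma2 | data): the product of the individual densities."""
    L = 1.0
    for x in data:
        L *= normal_pdf(x, mu, sigma2)
    return L

data = [1.2, 0.7, 1.9, 1.1]  # hypothetical observations
print(likelihood(data, mu=1.0, sigma2=0.5))
```

Parameters near the data (here $\mu = 1$) yield a larger likelihood than distant ones, which is exactly the intuition the MLE formalizes.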

Step 3: Compute the log-likelihood function
Instead of working with the likelihood function directly, it's common to use the log-likelihood function: the logarithm turns the product into a sum, which simplifies differentiation and avoids numerical underflow. The log-likelihood function, $l(\mu, \sigma^2)$, is the natural logarithm of the likelihood function:

```js
$l(\mu, \sigma^2) = \ln{L(\mu, \sigma^2)} = \sum_{i=1}^n \ln{f(x_i | \mu, \sigma^2)}$
```

Substitute the normal distribution PDF into the log-likelihood function:

```js
$l(\mu, \sigma^2) = \sum_{i=1}^n \ln{\left(\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}\right)}$
```

Simplify the log-likelihood function:


```js
$l(\mu, \sigma^2) = -\frac{n}{2} \ln{(2\pi\sigma^2)} - \frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \mu)^2$
```
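To check the simplification, the closed form above can be compared against a direct term-by-term sum of log-densities; the two should agree to floating-point precision. The data values are hypothetical:

```python
import math

def log_likelihood(data, mu, sigma2):
    """Simplified closed form: -(n/2) ln(2*pi*sigma2) - sum((x-mu)^2) / (2*sigma2)."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - ss / (2 * sigma2)

def log_likelihood_direct(data, mu, sigma2):
    """Sum of the logs of the individual normal densities, term by term."""
    return sum(
        math.log(1.0 / math.sqrt(2 * math.pi * sigma2))
        - (x - mu) ** 2 / (2 * sigma2)
        for x in data
    )

data = [1.2, 0.7, 1.9, 1.1]  # hypothetical observations
print(abs(log_likelihood(data, 1.0, 0.5) - log_likelihood_direct(data, 1.0, 0.5)))
```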

Step 4: Find the MLE for the parameters $\mu$ and $\sigma^2$
To find the MLE for $\mu$ and $\sigma^2$, we need to maximize the log-likelihood function with respect to both parameters. We'll do this by taking the partial derivatives with respect to $\mu$ and $\sigma^2$ and setting them equal to zero.

a) MLE for $\mu$:

```js
$\frac{\partial l}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^n (x_i - \mu) = 0$
```

Multiplying through by $\sigma^2$ and distributing the sum, we get:

```js
$\sum_{i=1}^n x_i - n\mu = 0$
```

Solving for $\mu$:

```js
$\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i$
```

The MLE for $\mu$ is the sample mean.
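A quick numerical sanity check of this result: since the log-likelihood is a concave quadratic in $\mu$, its value at the sample mean should strictly exceed its value at any nearby $\mu$. The data and the fixed $\sigma^2 = 0.5$ below are hypothetical:

```python
import math

def log_likelihood(data, mu, sigma2):
    """Simplified log-likelihood of a normal sample."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - ss / (2 * sigma2)

data = [1.2, 0.7, 1.9, 1.1]      # hypothetical observations
mu_hat = sum(data) / len(data)   # the MLE: the sample mean

# Perturbing mu in either direction should only lower the log-likelihood.
for eps in (0.1, 0.01):
    assert log_likelihood(data, mu_hat, 0.5) > log_likelihood(data, mu_hat + eps, 0.5)
    assert log_likelihood(data, mu_hat, 0.5) > log_likelihood(data, mu_hat - eps, 0.5)
print("sample mean maximizes the log-likelihood over mu")
```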

b) MLE for $\sigma^2$:

```js
$\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2} \sum_{i=1}^n (x_i - \mu)^2 = 0$
```

Now, we will multiply through by $2\sigma^2$ to simplify the equation:

```js
$-n + \frac{1}{\sigma^2} \sum_{i=1}^n (x_i - \mu)^2 = 0$
```

Next, we will isolate the term $\frac{1}{\sigma^2}$:

```js
$\frac{1}{\sigma^2} \sum_{i=1}^n (x_i - \mu)^2 = n$
```

Finally, solve for $\sigma^2$, substituting the MLE $\hat{\mu}$ for $\mu$:

```js
$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu})^2$
```

The MLE for $\sigma^2$ is the average squared deviation from the sample mean — the biased sample variance, which divides by $n$ rather than $n-1$.
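Both estimators can be computed in a few lines. The standard library's `statistics.pvariance` implements the same $1/n$ ("population") variance as the MLE, so it serves as a cross-check; the data values are hypothetical:

```python
import statistics

data = [1.2, 0.7, 1.9, 1.1]  # hypothetical observations
n = len(data)

mu_hat = sum(data) / n                                   # MLE of mu: sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n    # MLE of sigma^2: divide by n, not n-1

# statistics.pvariance also divides by n, so it should match sigma2_hat.
print(mu_hat, sigma2_hat, statistics.pvariance(data))
```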

In summary, you can follow these steps:

1. Define the normal distribution PDF.
2. Define the likelihood function.
3. Compute the log-likelihood function.
4. Find the MLEs for the parameters $\mu$ and $\sigma^2$ by maximizing the log-likelihood function with respect to both parameters.