In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution, and to assess how likely it is that the random variable underlying the data set is normally distributed.
Normality tests for a univariate data set include the following:
- Shapiro–Wilk test,
- D'Agostino's K-squared test,
- Anderson–Darling test,
- Cramér–von Mises criterion,
- Lilliefors test,
- Kolmogorov–Smirnov test,
- Jarque–Bera test, and
- Pearson's chi-squared test.
I have used only the first three of these tests.
The Shapiro–Wilk test is a way to tell whether a random sample comes from a normal distribution. The test produces a W statistic; small values indicate that your sample is not normally distributed (you can reject the null hypothesis that your population is normally distributed if W falls below a certain threshold).
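As a minimal sketch of how this test might be run in Python, assuming SciPy is available (the synthetic sample and significance level of 0.05 are my own choices, not from the data set above):

```python
import numpy as np
from scipy import stats

# Draw a reproducible sample from a true normal distribution.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100)

# Shapiro-Wilk: returns the W statistic and a p-value.
w, p = stats.shapiro(sample)
print(f"W = {w:.4f}, p = {p:.4f}")

# Small W (equivalently, p below the chosen alpha) rejects normality.
if p < 0.05:
    print("Reject the null hypothesis: sample looks non-normal.")
else:
    print("Fail to reject: no evidence against normality.")
```

W is bounded between 0 and 1, with values near 1 supporting normality.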
The D'Agostino K-squared test is based on transformations of the sample kurtosis and skewness, and has power only against the alternatives that the distribution is skewed and/or has non-normal kurtosis.
The Anderson–Darling test is used to test whether a sample of data came from a population with a specific distribution. It is a modification of the Kolmogorov–Smirnov (K-S) test and gives more weight to the tails than the K-S test does. The K-S test is distribution-free in the sense that its critical values do not depend on the specific distribution being tested, whereas the Anderson–Darling test uses the specific distribution when calculating critical values. This has the advantage of a more sensitive test and the disadvantage that critical values must be calculated for each distribution.
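A sketch of the Anderson–Darling test with SciPy, assuming a normal target distribution; note that, matching the description above, SciPy returns distribution-specific critical values rather than a single p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(size=150)

# anderson() returns the A^2 statistic plus critical values
# precomputed for the chosen distribution ('norm' here).
result = stats.anderson(sample, dist='norm')
print(f"A^2 = {result.statistic:.4f}")

# Compare the statistic against each significance level's critical value.
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > cv else "fail to reject"
    print(f"  at {sl}% significance: {verdict} normality (critical value {cv})")
```

The per-distribution critical values are exactly the trade-off described above: greater sensitivity, at the cost of needing a separate table for each candidate distribution.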
https://archive.ics.uci.edu/ml/datasets/Cryotherapy+Dataset+