Could you explain the difference between using infinitesimals and using limits in calculus to a high school audience?
Sure! In calculus, infinitesimals and limits are two different ways of handling the same basic problem: describing what happens when a quantity becomes arbitrarily small, such as the change in a function's input when you compute a slope or an area.

Infinitesimals are quantities that are infinitely small — smaller than every positive real number, yet not exactly zero. Early calculus, as developed by Newton and Leibniz, treated infinitesimals as if they were actual numbers you could add, multiply, and divide by. This approach was powerful but not fully rigorous: nobody had said precisely what an infinitely small number was, and careless use could lead to contradictions. (Much later, in the 1960s, a framework called non-standard analysis showed that infinitesimals can be made rigorous after all, but the standard approach taught today uses limits instead.)
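
As an illustrative sketch (not taken from any particular historical text), here is how an infinitesimal-style computation of the slope of f(x) = x² might look, writing dx for an infinitely small change in x:

```latex
\frac{(x + dx)^2 - x^2}{dx}
  = \frac{2x\,dx + (dx)^2}{dx}
  = 2x + dx
  \approx 2x
```

The leftover infinitesimal dx is simply discarded at the end — which is exactly the step critics of early calculus objected to, since dx is treated as nonzero when you divide by it and as zero when you throw it away.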

Limits, on the other hand, describe the exact value a function approaches as its input gets closer and closer to some point, without the input ever having to reach that point. For example, a function might not even be defined at x = 1, yet its values can get as close as you like to 2 as x approaches 1; the limit captures that target value, 2, exactly. In this way, limits let us talk about "getting arbitrarily close" using only ordinary real numbers, with no need for infinitely small quantities.
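
For comparison, here is the same slope computation for f(x) = x², this time phrased with a limit, where h is an ordinary nonzero real number that we let shrink toward 0:

```latex
f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
      = \lim_{h \to 0} \frac{2xh + h^2}{h}
      = \lim_{h \to 0} (2x + h)
      = 2x
```

Notice that nothing infinitely small ever appears: h is always a genuine nonzero number, and the answer 2x is the value the quotient approaches as h shrinks.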

The main advantage of using limits over infinitesimals is rigor. A limit has a precise definition (the "epsilon–delta" definition used in analysis), which spells out exactly what it means for a function's values to get arbitrarily close to a target, so every step of a calculus argument can be checked against it. Infinitesimals used informally, by contrast, can lead to inconsistencies and paradoxes if they are not handled carefully.
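
For the curious, the standard epsilon–delta definition looks like this (it goes beyond a typical high school course; this is just the usual textbook formulation, included here to show what "rigorous" means in practice):

```latex
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0,\ \exists \delta > 0 :\
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

In words: no matter how small a tolerance ε you demand around L, you can always find a window of width δ around a within which f(x) stays inside that tolerance.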

In summary, infinitesimals and limits are two ways of handling the infinitely small in calculus. Limits are the standard approach today because they can be defined precisely using ordinary real numbers, while infinitesimals — though intuitive and historically important — need extra care (or the machinery of non-standard analysis) to be made fully rigorous.