
TwoCrystalBalls: doing jumps of ∛N (or any other root base) #70

ajaralampidis opened this issue Jan 29, 2024 · 9 comments

ajaralampidis commented Jan 29, 2024

https://frontendmasters.com/courses/algorithms/implementing-two-crystal-balls/
At minute 5 (-1:58) somebody asked a great question:

"Why don't we use ∛N instead of √N, since it is smaller?"

Prime responds that the jumps would be smaller, so we would be doing more of them. But this seems to go against big O notation: as N grows, it looks better to do more (smaller) jumps so that the final linear search is shorter.

Maybe my "wording" is confusing but simply put:

∛N < √N
☝️ which means that O(∛N) is more efficient (according to Big O Notation) than O(√N)

And for the same reasons:
O(∜N) is better than O(∛N)

So maybe we need to write:

$$\lim_{x\to \infty} \sqrt[x] N$$

But this is also unintuitive to me: the bigger the root base, the smaller the jumps will be (until we have jumps of 1 or less), which means that we are doing... a linear search.

I guess that my doubt is how we should write the big O notation of this problem, and if the answer is O(√N), then "Why don't we use ∛N instead of √N?"

Maybe this helps visualize the relationship between N and the steps the algorithm will do. (Remember that we take discrete steps: on the Y axis 1.5 doesn't exist; we are traversing an array, so we are at position 1 or 2.)
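For reference, here is a minimal sketch (my own, not from the course) of the two-crystal-balls search with a configurable root `x`, so `x=2` is the √N strategy and `x=3` is the ∛N variant being discussed. The function name and signature are my invention:

```python
import math

def two_crystal_balls(breaks, x=2):
    """Return the first index where the ball breaks, or -1 if it never does.

    `breaks` is a list of booleans; `x` picks the jump size N**(1/x),
    so x=2 is the classic sqrt(N) strategy.
    """
    n = len(breaks)
    jump = max(1, math.floor(n ** (1 / x)))  # step size: the x-th root of N
    i = jump
    while i < n and not breaks[i]:  # first ball: jump until it breaks
        i += jump
    # second ball: linear scan of the last jump-sized window
    for j in range(i - jump, min(i + 1, n)):
        if breaks[j]:
            return j
    return -1
```

The worst case is when the first ball survives every jump: we pay for all the jumps plus one full window of linear scanning with the second ball.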

@aidoskanapyanov

For any integer x > 2, in $\sqrt[x]{N}$, the runtime is going to be O( $\sqrt[x]{N^{x-1}}$ ) because the case when the ball doesn't ever break will be the worst case. So, yeah, basically when x goes to infinity we have a runtime of O( N )
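A quick numeric check of this identity, with N chosen arbitrarily (a sketch of mine, not from the thread):

```python
import math

N = 10**6
for x in (2, 3, 4):
    step = N ** (1 / x)          # jump size, the x-th root of N
    worst_jumps = N / step       # jumps made when the ball never breaks
    # N / N^(1/x) == N^((x-1)/x)
    assert math.isclose(worst_jumps, N ** ((x - 1) / x), rel_tol=1e-9)
    print(f"x={x}: step ~ {step:.0f}, worst-case jumps ~ {worst_jumps:.0f}")
```

For N = 10^6 this gives steps of 1000/100/32 and worst-case jump counts of 1000/10000/31623 for x = 2, 3, 4: shrinking the step size blows up the number of jumps.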

@ajaralampidis

@aidoskanapyanov maybe I need a bigger explanation (probably because I am not good at math 🥴 sorry).

Why, for any integer x > 2, is the runtime going to be $\sqrt[x]{N^{x-1}}$? I think this means that you are going to do all the possible jumps given the size of the jumps you want to take (which is based on the x-th root of N).

And yes, if x tends to infinity (because $\sqrt[x]{N}$ is worse than $\sqrt[x + 1]{N}$ according to big O) this will eventually be O(N), but I guess that $$\lim_{x\to \infty} \sqrt[x] N$$ is not the correct or complete way to express the answer.

Probably we need to do something like:
$$\lim_{x\to \infty} \sqrt[x] N \Rightarrow \sqrt[x] N \ge 2$$

☝️ which means that we should use the biggest x possible as long as our jumps are not jumps of 1 unit. But still ... this sounds wrong to me. Because that means that we are doing jumps of two units ... if we drop the constants we get O(N).

Probably there is another way to do or write this. Something like "x as big as possible while $\sqrt[x]{N}$ is smaller than N/2". 👈 this is not correct, but I really can't think of a condition to determine the optimal value for x.

Sorry if I am not clear, maths is not my thing; in this case, I don't know how to express myself better.

@jts307

jts307 commented Feb 19, 2024

So, $\sqrt[x]{N}$ is the number of items you skip in each jump, i.e. it's your step size. In the case where no crystal ball breaks, you keep making steps of size $\sqrt[x]{N}$ until you pass N items. In other words, you are essentially asking how many times $\sqrt[x]{N}$ goes into N: N / $\sqrt[x]{N}$ = $\sqrt[x]{N^{x-1}}$.

Now the question is whether $\sqrt[x]{N}$ or $\sqrt[x]{N^{x-1}}$ is the term that dominates the runtime. This is the key to understanding it: the runtime is a trade-off between these two values, step size and the number of steps needed to traverse N. For x > 2, $\sqrt[x]{N^{x-1}}$ dominates, so that is the runtime in that case (read the comment below this one for further explanation of why this holds for x > 2).

There are two ways I think you can make sense of a runtime of O(N). The first is to look at what happens as x goes to infinity: $\sqrt[x]{N}$ becomes 1 and $\sqrt[x]{N^{x-1}}$ becomes N. $\sqrt[x]{N^{x-1}}$ = $N^{(x-1)/x}$ becomes N because the exponent (x-1)/x goes to 1 as x goes to infinity. $\sqrt[x]{N}$ becomes 1 because $\sqrt[x]{N}$ is just $N^{1/x}$, and 1/x goes to 0 as x gets bigger, so $N^{1/x}$ goes to $N^0$ = 1. Conceptually this makes sense: in this scenario we are taking steps of size 1, and it takes N steps of size 1 to traverse N items. This leaves us with a runtime of O(N) because the number of steps dominates, i.e. O(N) > O(1).

The other way to get a runtime of O(N) is the case x = 1. A 1st root is just the number itself, so $\sqrt[1]{N}$ = N. For the number of steps: $\sqrt[x]{N^{x-1}}$ = $\sqrt[1]{N^{0}}$ = 1. For our step size: $\sqrt[1]{N}$ = N. This makes sense because when x = 1 we take one step of size N to traverse N items. The runtime is again O(N), since O(N) > O(1), but this is the opposite of the reasoning above: now the runtime is dominated by the step size instead of the number of steps.

Does this make sense?
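One way to see this trade-off concretely is to count worst-case checks (jumps plus the final linear scan) for a few values of x. This is my own sketch; `worst_case_drops` is a hypothetical helper, and I round the step size to dodge floating-point noise in the roots:

```python
import math

def worst_case_drops(n, x):
    """Worst-case checks: n/step jumps across the array plus a linear
    scan of one step-sized window with the second ball."""
    step = max(1, round(n ** (1 / x)))
    return math.ceil(n / step) + step

n = 10**6
for x in (1, 2, 3, 10):
    print(f"x={x}: {worst_case_drops(n, x)} checks")
```

For N = 10^6 the x = 2 strategy needs about 2,000 checks, while x = 3 already needs over 10,000 and x = 1 degenerates to a full linear scan: more, smaller jumps lose because the number of jumps dominates.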

@jts307

jts307 commented Feb 19, 2024

This also might help: $\sqrt[x]{N^{x-1}}$ dominates for x > 2 because another way to write $\sqrt[x]{N^{x-1}}$ is $N^{(x-1)/x}$, and $\sqrt[x]{N}$ = $N^{1/x}$. Compare $N^{(x-1)/x}$ and $N^{1/x}$: which one is bigger? The only difference between the two terms is x-1 versus 1 in the numerator. When x > 2, x-1 > 1, so $N^{(x-1)/x}$ > $N^{1/x}$.

@jts307

jts307 commented Feb 19, 2024

The optimal value for x is 2. To find this formally, one could try to minimize max($N^{(x-1)/x}$, $N^{1/x}$) using calculus, taking derivatives on different intervals. I don't know if it would work out, but one could try. Honestly though, I think the intuitive approach below might be easier to understand while looking at the graphs in Desmos.

My intuitive understanding of why 2 is optimal: when x = 2, $N^{(x-1)/x}$ = $N^{1/2}$ and $N^{1/x}$ = $N^{1/2}$. When x > 2, $N^{(x-1)/x}$ is our runtime, as previously established, and as x increases $N^{(x-1)/x}$ gets bigger because it approaches N. In other words, we start at $N^{1/2}$ and go up toward N. So the best we can do is $N^{1/2}$, at x = 2.

Using similar logic, when 0 < x < 2, $N^{1/x}$ dominates. $N^{1/x}$ gets bigger as x gets closer to 0 from the right; in other words, $N^{1/x}$ goes to infinity as x approaches 0 from the right (for N > 1). We start at $N^{1/2}$ and keep going up as we move left, so our lowest value is $N^{1/2}$, at x = 2.

You can extend this logic for the other parts of the graph, but I think the point is kind of clear.
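A brute-force check of the same claim: evaluate max($N^{(x-1)/x}$, $N^{1/x}$) on a handful of x values (x does not even have to be an integer here) and see where the max is smallest. A sketch of mine with an arbitrary N:

```python
N = 10**6
candidates = [1, 1.5, 2, 2.5, 3, 4, 8]
# for each x, the runtime is the larger of the two terms;
# pick the x that minimizes that maximum
best_cost, best_x = min(
    (max(N ** ((x - 1) / x), N ** (1 / x)), x) for x in candidates
)
print(best_x, best_cost)
```

The minimum of the max sits at x = 2, exactly where the two curves cross, with a cost of $N^{1/2}$ = 1000 for N = 10^6.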

@ajaralampidis

@jts307 I was thinking about this and reached a similar conclusion:

Given $\sqrt[x]{N}$ we can think of two "extremes":

x tends to infinity so as you said our steps would be the smallest possible (steps of one unit) and we would end up with a linear search O(N).

The other option is that x tends to 0, so our steps would be the biggest size possible (N), so we would do only one jump and then we would need to do a linear search of N elements ... again we end up with O(N).

So I guess that the real question is how to find the optimal value for x.

Intuitively I was thinking that the best way to find this is the following:
$\sqrt[x]{N}$ is the size of our steps
$N / \sqrt[x]{N}$ is the amount of steps that we will do (in the worst case).

My guess was that the optimal x value is found when both of these values are as close as possible:

$\sqrt[x]{N} \approx N / \sqrt[x]{N}$

☝️ this is (if I understood you properly) what you were referring to with:

Now the question is whether $\sqrt[x]{N}$ or $\sqrt[x]{N^{x-1}}$ is the term that dominates the runtime.

BTW, thanks for making explicit that:
$$N / \sqrt[x]{N} = \sqrt[x]{N^{x-1}} = N^{(x-1)/x}$$ $$\sqrt[x]{N} = N^{1/x}$$
It helped me understand; it's been quite a while since I've done math like this.

So by moving everything to the exponent, we can try to find the optimal value of x like this:

$$N^{(x-1)/x} = N^{1/x}$$

Now I might be making a horrible mistake here, but I think we can drop N from the equation and just compare the exponents:

$$\frac{x-1}{x} = \frac{1}{x}$$

Multiplying both sides by x:

$$x-1 = 1$$

$$x = 2$$

☝️ reaching the same conclusion you did.

I am not sure if my math is correct 🥴 so I tried to be as clear as I could; if I made any mistake here, please let me know.

@jts307

jts307 commented Feb 20, 2024

"The other option is that x tends to 0, so our steps would be the biggest size possible (N), so we would do only one jump and then we would need to do a linear search of N elements ... again we end up with O(N)."

I almost fell for this interpretation as well, because I thought there might be symmetry in the graph. It seems intuitive. However, the other option is specifically when x = 1, not when x approaches 0. For $N^{1/x}$, the limit as x approaches 0 is undefined: the right and left limits are not equal.
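To see why the limit at 0 is undefined, evaluate $N^{1/x}$ on both sides of 0 (a quick numeric sketch, with N picked arbitrarily):

```python
N = 100.0
for x in (0.1, 0.01, -0.01, -0.1):
    # approaching 0 from the right, 1/x -> +inf; from the left, 1/x -> -inf
    print(f"x={x}: N^(1/x) = {N ** (1 / x):.3g}")
```

The right-hand values blow up toward infinity while the left-hand values collapse toward 0, so the two one-sided limits disagree.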

"My guess was that the optimal x value is found when both of these values are as close as possible:"

Your guess is correct, but you're missing some parts I think. You have to show why the optimal x value is when the functions are equal.

Here is a more concise reason why:

In general, if you have a function f that is increasing on some interval and a function g that is decreasing on that same interval, then their intersection is the minimum of max(f, g), assuming the intersection is unique (here is a post showing why: https://math.stackexchange.com/questions/132974/minimizing-a-maximum). That explanation is a better, more formal, more generalizable version of my explanation in the previous post for why x = 2 is optimal. It shows specifically why the intersection must be the minimum.

We can apply this concept here and look at the interval (0,infinity) on max($N^{(x-1)/x}$ , $N^{1/x}$). $N^{(x-1)/x}$ is increasing and $N^{1/x}$ is decreasing so their unique intersection is the optimal minimum. The intersection is 2 like you showed.

Now the question is whether (-infinity, 0) contains a better x. I think this is just a matter of showing that $N^{(x-1)/x}$ is the maximum on this interval. Then you can show it is always greater than when x = 2.

Thanks for bringing up the idea of there being a reason why the intersection is the minimum! I think that perspective makes showing the optimal minimum much clearer and allows for a more generalized approach.

@ajaralampidis

@jts307 sorry forgot to comment this:
https://www.geogebra.org/calculator/bmtut7hd
One function is constantly increasing and the other is constantly decreasing (considering only the positive side), so they intersect at only one point.

I guess I can set this as "closed". Thanks a lot 🙌

@jts307

jts307 commented Feb 27, 2024

@ajaralampidis No problem!
