TwoCrystalBalls: doing jumps of ∛N (or any other root base) #70
Comments
For any integer x > 2, in N^(1/x) + N^((x-1)/x) the N^((x-1)/x) term dominates and is larger than √N, so bigger root bases only make the worst case worse.
@aidoskanapyanov maybe I need a longer explanation (probably because I am not good at math 🥴 sorry). Why, for any integer x > 2, does the N^((x-1)/x) term dominate?

And yes, if x tends to infinity our jumps tend to 1 unit (because N^(1/x) tends to 1). Probably we need to do something like keep N^(1/x) ≥ 2. ☝️ which means that we should use the biggest x possible as long as our jumps are not jumps of 1 unit. But still ... this sounds wrong to me. Because that means that we are doing jumps of two units ... and if we drop the constants we get O(N).

Probably there is another way to do or write this. Something like "x as big as possible while N^(1/x) > 1". Sorry if I am not clear, maths is not my thing; in this case, I don't know how to express myself better.
So, there are two ways I think you can make sense of a runtime of O(n).

The first way is that when x goes to infinity two things happen: the jump size N^(1/x) tends to 1, and the number of jumps N^((x-1)/x) tends to N, so you are effectively stepping through the array one element at a time.

The other way to think about a runtime of O(n) is the case where x = 1. All a 1st root is, if you think about it, is just N, so N^(1/1) = N: you make one jump that covers the whole array, and then you may have to walk back through up to N elements.

Does this make sense?
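To make this concrete, here is a small TypeScript sketch (N = 1,000,000 and the rounding of the jump size are just illustrative choices, not anything from the course): it evaluates the worst case, roughly N^((x-1)/x) jumps plus a final walk of N^(1/x) elements, for a few values of x.

```ts
// Worst-case steps when jumping by round(N^(1/x)):
// (number of jumps) + (walk inside one block) ≈ N^((x-1)/x) + N^(1/x).
const N = 1_000_000;

function worstCaseSteps(x: number): number {
    const jump = Math.max(1, Math.round(Math.pow(N, 1 / x))); // size of each jump
    const jumps = Math.ceil(N / jump); // first ball: jump until it breaks
    const walk = jump;                 // second ball: walk the last block linearly
    return jumps + walk;
}

for (const x of [1, 2, 3, 4, 10, 100]) {
    console.log(`x = ${x}: ~${worstCaseSteps(x)} steps`);
}
// x = 1   → ~1,000,001  (one jump of N, then a walk of N): linear
// x = 2   → ~2,000      (√N jumps of size √N)
// x = 3   → ~10,100     (N^(2/3) jumps of size ∛N): already worse than √N
// x = 100 → ~1,000,000  (jumps of size ~1): effectively a linear search
```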
This also might help:
The optimal value for x is 2. To find this formally, one could potentially minimize max(N^(1/x), N^((x-1)/x)).

My intuitive understanding of the optimal value of 2 is thinking about how when x = 2, then N^(1/x) = N^((x-1)/x) = √N: the jump size and the number of jumps are exactly balanced, so neither term dominates.

Using similar logic, when 0 < x < 2, N^(1/x) > N^((x-1)/x), so the jump size (and therefore the final linear walk) dominates and is larger than √N. You can extend this logic for the other parts of the graph (e.g. x > 2), but I think the point is kind of clear.
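As a quick numerical sanity check of that minimization (again just a sketch; the grid and the choice of N are arbitrary), scanning x over (0, 10] and keeping the x that minimizes max(N^(1/x), N^((x-1)/x)) lands on x ≈ 2:

```ts
// Numerically minimize max(N^(1/x), N^((x-1)/x)) over a grid of x values.
const N = 1_000_000;

const cost = (x: number): number =>
    Math.max(Math.pow(N, 1 / x), Math.pow(N, (x - 1) / x));

let bestX = 1;
for (let x = 0.01; x <= 10; x += 0.01) {
    if (cost(x) < cost(bestX)) bestX = x;
}

console.log(bestX.toFixed(2), Math.round(cost(bestX))); // "2.00" 1000  (= √N for N = 1,000,000)
```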
@jts307 I was thinking about this and reached a similar conclusion:

Given that N^(1/x) tends to 1 as x tends to infinity, as you said our steps would be the smallest possible (steps of one unit) and we would end up with a linear search, O(N).

The other option is that x tends to 0, so our steps would be the biggest size possible (N), so we would do only one jump and then we would need to do a linear search of N elements ... again we end up with O(N).

So I guess that the real question is how to find the optimal value for x. Intuitively I was thinking that the best way to find this is the following: my guess was that the optimal x value is found when both of these values, the jump size N^(1/x) and the number of jumps N^((x-1)/x), are as close as possible.

☝️ this is (if I understood you properly) what you were referring to with "minimize max(N^(1/x), N^((x-1)/x))".
BTW, thanks for making explicit that the number of jumps is N / N^(1/x) = N^((x-1)/x).

So by moving everything to the exponent, we can try to find the optimal value of x like this: N^(1/x) = N^((x-1)/x).

Now I might be making a horrible mistake here, but I think we can remove N out of that equation: 1/x = (x-1)/x, so 1 = x - 1 and x = 2.

☝️ reaching the same conclusion you did. I am not sure if my math is correct 🥴 so I tried to be as clear as I could; if I made any mistake here please let me know.
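Written out in standard notation (the same algebra as above, just restated), balancing the two terms gives:

```latex
% Equating exponents is valid because N > 1, so t -> N^t is one-to-one.
N^{\frac{1}{x}} = N^{\frac{x-1}{x}}
\;\Longrightarrow\; \frac{1}{x} = \frac{x-1}{x}
\;\Longrightarrow\; 1 = x - 1
\;\Longrightarrow\; x = 2,
\qquad \text{so the optimal jump size is } N^{\frac{1}{2}} = \sqrt{N}.
```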
"The other option is that x tends to 0, so our steps would be the biggest size possible (N), so we would do only one jump and then we would need to do a linear search of N elements ... again we end up with O(N)." I almost fell for this interpretation as well because I thought there might be symmetry in the graph. It seems intuitive. However, the other option is specifically when x =1, not when x approaches 0. For N^(1/x) as x approaches 0 it is undefined. The right and left limit don't equal. "My guess was that the optimal x value is found when both of these values are as close as possible:" Your guess is correct, but you're missing some parts I think. You have to show why the optimal x value is when the functions are equal. Here is a more concise reason why: In general, if you have an increasing function on some interval f' and a decreasing function g' on that same interval then their intersection is the minimum of max(f',g') assuming they have a unique intersection (Here is a post showing why: https://math.stackexchange.com/questions/132974/minimizing-a-maximum). The explanation is a better, more formal, more generalizable version of my explanation for why x =2 is optimal in the previous post. It shows specifically why the intersection must be the minimum. We can apply this concept here and look at the interval (0,infinity) on max( Now the question is whether (-infinity, 0) contains a better x. I think this is just a matter of showing that Thanks for bringing up the idea of there being a reason why the intersection is the minimum! I think that prespective makes showing the optimal minimum much clearer and allowed for a more generalized approach. |
@jts307 sorry, I forgot to comment on this: I guess I can set this as "closed". Thanks a lot 🙌
@ajaralampidis No problem!
https://frontendmasters.com/courses/algorithms/implementing-two-crystal-balls/
At minute 5 (-1:58) somebody asked a great question: why not do jumps of ∛N (or some other root base) instead of √N?
Prime responds that the jumps would be smaller, so we would be doing more jumps. But this seems to go against Big O intuition: if N gets infinitely bigger, it seems better to do more (smaller) jumps so that we can then do a smaller linear search.
Maybe my "wording" is confusing but simply put:
∛N < √N
☝️ which means that O(∛N) is more efficient (according to Big O Notation) than O(√N)
And for the same reasons:
O(∜N) is better than O(∛N)
So maybe we need to write something like O(N^(1/x)) with x as big as possible.
But this is also counterintuitive to me: the bigger the root base, the smaller the jumps will be (until we have jumps of 1 or less than one), which means that we are doing ... a linear search.
I guess that my doubt is how we should write the big O notation of this problem, and if the answer is O(√N), then "Why don't we use ∛N instead of √N?"
Maybe this helps visualize the relationship between N and the steps the algorithm will do. (Remember that we take discrete steps; on the Y axis 1.5 doesn't exist, since we are traversing an array we are either at position 1 or at position 2.)
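For reference, here is a minimal TypeScript sketch of the √N-jump strategy (the boolean-array name and signature are just the common form of this exercise, not necessarily what the course uses):

```ts
// breaks[i] is true from the first index where the ball would break onward.
export default function two_crystal_balls(breaks: boolean[]): number {
    const n = breaks.length;
    const jump = Math.floor(Math.sqrt(n)); // jump size: ⌊√N⌋

    // First ball: jump ahead √N at a time until it breaks (or we run out of array).
    let i = jump;
    while (i < n && !breaks[i]) {
        i += jump;
    }

    // Second ball: walk linearly through the last √N-sized block.
    for (let j = Math.max(0, i - jump); j < Math.min(i + 1, n); j++) {
        if (breaks[j]) return j;
    }

    return -1; // the ball never breaks
}

// Worst case: ~√N jumps + a ~√N walk ⇒ O(√N).
// Generalizing the jump size to ⌊N^(1/x)⌋ gives ~N^((x-1)/x) jumps + an N^(1/x) walk,
// and (per the discussion above) that sum is minimized at x = 2.
```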