In your given code, in the last line of the function exponentialSearch(), shouldn't it be return binarySearch(arr, i/2, min(i, n-1), x); instead of return binarySearch(arr, i/2, min(i, n), x);?
Please change it to min(i,n-1) and you will get the correct result.
@techdose4u ok man!!
Brother, you explained it brilliantly ❤️❤️. Keep it up, lots of love from my side.
Thanks :)
Thank you very much for your explanation and the effort you put into this video, I totally appreciate it. Keep up the good work.
Welcome :)
Very helpful, keep covering new topics like this, thanks a lot.
Welcome
Please... please... upload more videos on data-structure-based problems.
Sure :)
Very good explanation ❤
Odom sir....keep it up
Sure :)
Sir, I didn't fully understand time and space complexity. Please make one video on time and space complexity: which searching algorithm is best, why, and how it compares with the others. Could you explain in more detail?
Isn't the expression min(i, n-1) unnecessary, because the while loop (while i < n) has already prevented i from becoming larger than n?
The while loop doesn't prevent it. It only keeps iterating while i is inside that range; when i exceeds the range, the loop simply breaks without capping i, so i can end up well beyond n-1.
Great video sir! Just one observation: shouldn't the expression (i
What does min(i, n-1) do? Sorry im new to coding. As in why are there two parameters inside
min(x, y) returns the minimum of the two values: if x < y it returns x, otherwise y.
Great, sir, but one question, please answer: 2^n counts as one of the worst complexities, and while finding the runtime we take the maximum time, so shouldn't we take 2^n instead of log n?
But we're still using binary search, right? We've just reduced some computations, but the algorithm still runs in O(log n), so why do all this? Can't we just use pure binary search?
Hi, how does the exponential jump i = 2, 4, 8, ... give O(log n) time complexity? Can you let me know? Thanks.
i = 1 initially, then i becomes pow(2,1), pow(2,2), and so on. Say this continues for k steps, up to pow(2,k); the loop exits when i >= n, so k = the number of loop iterations = log n. The binary search is also O(log n), so the total is O(log n) + O(log n) = O(log n).
Why are we incrementing i = i*2? Why not ×3 or ×5?
Thank you sir
Welcome :)
Can you provide the same code in MATLAB?
I don't know matlab.
@techdose4u MATLAB is a multi-paradigm numerical computing environment and proprietary programming language developed by MathWorks.
Source: Wikipedia
@ChandrapalSd LOL, he means he doesn't know MATLAB.
Why did we compare the 0th element separately at the beginning?
Because i represents the index number, and if you start with 0 you cannot increase i exponentially, since 0×2 = 0.
Why should we use min(i, n-1)? Can't we just use n-1 alone?
No, because if the loop stopped with i < n and a[i] > x, then x must lie before index i; since i can be smaller than the array size, we don't want to search all the way to the end of the array when searching up to i is enough.
Why is i/2 taken? Please explain.
If 11 were present at index 5, 6, or 7, we couldn't find it.
I got it. But how do we explain the lower bound of i/2? Why is it i/2 and not something else? Please explain.
(10+11)/2 = 10, how is that true?
Where did you find this??
Sorry man, I was wrong.
I got you.
Hi, this is absolutely ridiculous.
Unless you have an extremely large array where modular exponentiation indexing becomes useful, you're far better off just doing a linear search, or maybe a binary search if the list is larger than a cache line by an order of magnitude or two.
Even if you had a million integers, linear search is still faster, though you might see improvement with a binary search at that scale... maybe. It really depends on whether your hardware can prefetch polynomially ahead rather than linearly ahead.
For this algorithm to net actual gains, you'd have to be searching through some obscenely massive collection of hashes indexing objects in a relatively remote record heap with two or more layers of pointer indirection... then it might be worth it. But you might actually be better off using modular exponentiation indexing.
The point is that anything outside of cache is obscenely slow by comparison, and a single short tight loop will optimize far better than a sequence of loops and a potentially recursive function call... and because of that, being able to leverage automatic cache prefetch via linear iteration is preposterously effective.
I mean, we have multi-gigahertz processor cores; you can linearly iterate through about a billion-member list of candidates checking for equality in less than a second. Trust me, binary search isn't the problem.
In your algorithm, it could be argued that the overhead of the loops and function calls alone will obliterate any gains you think you're getting at this scale, and even at scales several orders of magnitude greater. I don't care what your big-O notation says.
The example you took is completely ridiculous. Can't you make the indices and values different?
It's useless.