2 mins into the video, I already know the DS and Algo to be used. Amazing video!
my CS degree < Tech Dose Content. Keep making sir. I love the content and easy to understand the concepts.
Thanks
Damn True 😂😂
😅
@@techdose4u Yes. This is very good and great explanation within 10 mins. Pls keep adding such puzzles/algorithms etc, thanks
In India your CSE degree is worth even less than my underwear.
Actually you are pretty great as a teacher... absolutely amazing.
Thanks :)
Bravo once again, I never got a CS degree so I missed out on a lot. your videos are a blessing!
Thanks
I like the approach presented to arrive at the final solution - thank you for making this video.
Welcome
I love your way of teaching. Made all my doubts clear.
Thanks
This is gold. Thank you very much for this.
Welcome :)
Please correct me if i am wrong,
Considering n=cache size and q=query size
Even in the hash approach, in the case of a page hit we have to remove the hit node and place it at the rear position, which is an O(n) shift, while the search is O(1). Since time complexity is usually quoted as the worst case, it is O(n*q) even with the hash approach 😅
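For anyone with the same doubt, here is a minimal sketch (not the video's exact code; the Node and moveToRear names are just illustrative) of why the hit path becomes O(1) once the hash map stores each node's address in a doubly linked list: there is no shifting, only a constant number of pointer updates. The convention assumed here is least recent at the front, most recent at the rear.

#include <unordered_map>

// One cached page in a doubly linked list.
struct Node {
    int key;
    Node *prev = nullptr, *next = nullptr;
};

struct Cache {
    Node *front = nullptr, *rear = nullptr;   // LRU at front, MRU at rear
    std::unordered_map<int, Node*> where;     // page key -> node address

    // Page hit: jump to the node via the map and rewire pointers, no shifting.
    void moveToRear(int key) {
        Node *n = where[key];
        if (n == rear) return;                // already the most recent page
        // Unlink n from its current position.
        if (n->prev) n->prev->next = n->next; else front = n->next;
        n->next->prev = n->prev;              // safe: n != rear, so n->next exists
        // Re-attach n at the rear.
        n->prev = rear;
        n->next = nullptr;
        rear->next = n;
        rear = n;
    }
};

So each query costs an O(1) average hash lookup plus an O(1) relink, giving O(q) over q queries rather than O(n*q).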
Awesomeee...... I've been searching for this everywhere, and the explanation was amazing..... hats off man
Thanks :)
Thank you so much for such a wonderful explanation
Welcome 😁
Instead of a HashMap we can use a Set to store the current elements and keep updating it on any change in O(1). Searching would also be O(1).
Get is linear operation on set
Awesome explanation. Thank you, sir; it felt like being back in my OS class in college, and of course this is better than that.
A most rarely found explanation.
Very helpful, thank you.
Knowledge sharing was good in session. Thanks
Welcome :)
thanks for showing how to optimize step by step
Thanks :)
Brilliant video. Loved the way you explained optimisation stepwise. Thanks. Also, no unnecessary comments. Strictly content.
Thanks
Thanks man, now I am able to solve this question with your help.
very clear and well defined explanation. Thank you..........
Welcome
You didn't mention updating an existing value, where we erase the existing node even if it's in the middle of the linked list and put it at the front. This erase takes O(n).
Yeah, it will still take O(N) time complexity in this approach too, because you need to reach that particular position and then remove it, and then add it at the end.
insert at beginning i.e. at head -> o(1)
You do really great work, bro.
One of your best videos.
Thank you.
Welcome :)
Does writing your roll number in your YouTube username get you more marks in the exam? Everyone is writing it 🌚
Does Python's built-in lru_cache work in the same way as the code you have written?
How is it O(1) time at 8:06?
Suppose the given sequence is 1 2 3 "2".
If 1=2=3 are in the cache and we find 2, then we need to make the linked list 1=3=2. Is that O(1)? We need to find where 2 is in the list and then move it to the back.
(Here "=" means a doubly linked list.)
Refer to his next video, in which he explains the code for this.
Your point seems acceptable.
Compared to other channels, TechDose is underrated... It has never happened that I watched your video and felt like I wasted my time... you are awesome.
Thanks bro :)
Sir, can we use a circular queue to reduce the shifting, since insertion is always done at the rear and deletion is always done from the front? The time taken for both operations will also be O(1).
Nice, but I believe you missed the last part, in which we have to shift the requested page to the rightmost position if it is found in the cache.
Yes I missed it. Might have mentioned in someone's comment 😅
Liked, subscribed, shared ❤
♥️❤️❤️❤️❤️ this is awesome sir
Thanks :)
You must be working really hard
:)
Thanks sir. Could understand easily
Nice :)
Thanks,it's a very good explanation.
Welcome :)
Super explanation. 👌👌
Thanks :)
Approach is good sir.
Writing the code with a DLL will be lengthy and more difficult in terms of handling boundary cases. Instead we can use a queue; the code will be very compact and easy to handle.
Sir, please also make videos on DP questions.
Actually this is just for understanding. We will use a deque for this. Internally the implementation will be optimized as explained :)
@@techdose4u okay sir.
Excellent Explanation sir and keep growing your channel
Thanks :)
You explain topics pretty well. Thanks for this one
Welcome :)
Wonderful explanation!!
Thanks
I might be wrong, but searching for a key in a HashMap has a worst-case complexity of O(n), because multiple elements (with the same hashcode) can be present in the same bucket.
When we request 4 at 10:26, what if 4 is present at position 2?? Please reply.
While searching, suppose an element is present in the middle of the cache; then we have to move it to the front... it will take O(n) time... how do I optimise it? I am getting TLE for it.
I implemented the code using a singly linked list. The code is given below; if there is any mistake, please correct it.
#include <iostream>
using namespace std;

struct node
{
    int data;
    struct node *next;
};

int main()
{
    int arr[] = {1, 2, 3, 4, 1, 3};   // page reference sequence
    int n = sizeof(arr) / sizeof(int);
    int k = 3;                        // cache capacity

    node *front = NULL;               // least recently used page
    node *rear = NULL;                // most recently used page
    int size = 0;

    for (int i = 0; i < n; i++)
    {
        int page = arr[i];

        // Search the list for the page, remembering the previous node.
        node *prev = NULL, *cur = front;
        while (cur != NULL && cur->data != page)
        {
            prev = cur;
            cur = cur->next;
        }

        if (cur != NULL)                        // page hit
        {
            cout << "Hit:  " << page << endl;
            if (cur != rear)                    // unlink it and move it to the rear
            {
                if (prev == NULL)
                    front = cur->next;
                else
                    prev->next = cur->next;
                rear->next = cur;
                cur->next = NULL;
                rear = cur;
            }
        }
        else                                    // page miss
        {
            cout << "Miss: " << page << endl;
            if (size == k)                      // evict the least recently used page
            {
                node *old = front;
                front = front->next;
                delete old;
                size--;
            }
            node *fresh = new node;             // insert the new page at the rear
            fresh->data = page;
            fresh->next = NULL;
            if (rear == NULL)
                front = rear = fresh;
            else
            {
                rear->next = fresh;
                rear = fresh;
            }
            size++;
        }
    }

    cout << "Cache (LRU -> MRU): ";
    for (node *cur = front; cur != NULL; cur = cur->next)
        cout << cur->data << " ";
    cout << endl;
    return 0;
}
amazing explanation !!
Thanks
best one
👍🏼
very good insights
Space complexity is O(size of cache) ? Asking because it is not mentioned in the video.
If you are just using cache then you are correct :)
@@techdose4u What are the other things? Am I missing something?
Brilliant explanation, Thanks
Welcome :)
Very nice video... but what was the point of storing the addresses of the nodes in the map? We never made any use of them. Is there any use for the value (i.e. the node address stored for each key) in the most optimised algorithm? Please let me know... thanks in advance.
The way you explained it... it's brilliant.
But I have 2 doubts:
1. Is the hash table size random or equal to the window size, and how do we handle collisions?
2. What are its real-world applications?
Its real-world application is in memory management for the different processes that run on a machine.
Can anyone help me understand how the search operation in a deque takes O(1)? Doesn't searching for an arbitrary element take O(n)?
Instead of using a doubly linked list, can we use a queue? It will also give O(1) time complexity... removing from the front and adding at the rear...
Thank you so much TECH DOSE. :3
Welcome :3
Very good content.
:)
The deque.remove(element), is this O(1) time ?
Gm 🌄 Don't think so.
@@techdose4u So we can't use deque then ?
@@yitingg7942 I think in the LRU implementation it could have been done using a map, if I remember correctly, along with a deque. Then it will be O(1) for remove. I will have to check it.
@@techdose4u Thanks Surya, I am a little confused. Yes, by using a hashMap we can search in O(1) to see if it's there, and removing from a DLL is O(1) as well; I admit that. However, doesn't it take linear time to search the whole DLL to find which position this element is in before removing it? I don't see anybody using a deque in the discussion, only a DLL. So I am very confused.
@@yitingg7942 The time complexity is linear in the number of elements removed. It's implemented like a linked list in C++. It should be similar in Java as well. So yeah, O(1) it is.
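For readers following this thread, here is a minimal sketch (the class and method names are just illustrative, not the video's code) of the pattern being discussed: an unordered_map from key to std::list iterator, so a hit is found in O(1) and moved to the front in O(1) with splice, with no search through the list.

#include <iostream>
#include <list>
#include <unordered_map>

// Minimal LRU sketch: front of the list = most recently used.
class LRUCache {
    int capacity;
    std::list<int> pages;                                   // keys, MRU at the front
    std::unordered_map<int, std::list<int>::iterator> pos;  // key -> its list node
public:
    explicit LRUCache(int cap) : capacity(cap) {}

    // Returns true on a hit, false on a miss.
    bool refer(int key) {
        auto it = pos.find(key);                            // O(1) average lookup
        if (it != pos.end()) {
            // splice relinks the existing node to the front in O(1): no copy, no search.
            pages.splice(pages.begin(), pages, it->second);
            return true;
        }
        if ((int)pages.size() == capacity) {                // evict LRU from the back
            pos.erase(pages.back());
            pages.pop_back();
        }
        pages.push_front(key);
        pos[key] = pages.begin();
        return false;
    }
};

int main() {
    LRUCache cache(3);
    for (int k : {1, 2, 3, 2, 4})
        std::cout << k << (cache.refer(k) ? " hit" : " miss") << "\n";
}

The O(1) remove claimed in the reply above is exactly this kind of relinking: the iterator stored in the map removes the need to search the list first.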
Can you explain which is the best algorithm to find the shortest path in a graph... I know there are a bunch of algorithms for that among that which is the optimal and why... Thanks again
Just stick with Dijkstra, or with Bellman-Ford for negative edge weights. It will work.
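As a quick aside for that question, a minimal Dijkstra sketch with a min-priority queue (the graph and names here are only an illustrative example, not code from the video):

#include <iostream>
#include <functional>
#include <queue>
#include <vector>
#include <climits>

// Dijkstra over an adjacency list of (neighbour, weight) pairs.
// Works for non-negative edge weights; use Bellman-Ford if weights can be negative.
std::vector<int> dijkstra(const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
    int n = adj.size();
    std::vector<int> dist(n, INT_MAX);
    // Min-heap of (distance, vertex).
    std::priority_queue<std::pair<int,int>, std::vector<std::pair<int,int>>, std::greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;                 // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {           // relax edge u -> v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}

int main() {
    // Tiny example: 0 -> 1 (4), 0 -> 2 (1), 2 -> 1 (2)
    std::vector<std::vector<std::pair<int,int>>> adj(3);
    adj[0] = {{1, 4}, {2, 1}};
    adj[2] = {{1, 2}};
    for (int d : dijkstra(adj, 0)) std::cout << d << " ";   // prints: 0 3 1
}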
Really Neat Explanation!
There was an interview question: what data structure do you use for caching? As far as I know, a LinkedHashMap or LinkedHashSet, but the interviewer asked why not a ConcurrentHashMap? Please explain.
Then your answer would be... that I am not proficient in Java... it's a Java concept.
Something related to thread safety?
Btw, why can't we use a singly linked list?
What if the element is already present in the middle of the linked list? Then removing it will take O(N) again, right? Or can we go to it directly in O(1), since we have the address of that element in the hashmap?
Yup, true, the hashmap stores the address of the linked list node, and because the list is doubly linked it's easy to connect the previous node to the next node.
You are awesome....
Thanks :)
thank you for making this
Sir can't we use Deque here??
Hi Sir, why didn't we use a singly LL like this: 1->2->3; if 4 comes, then point the head to 2 and add 4 at the tail. In this way we don't need to traverse back to update the pointers. So was there a need for a doubly linked list? Please help!!
Can we use a circular queue in place of linked list
Thanks a lot man! This was an excellent explanation. Just one doubt: when we remove an element from an unordered_map, will it be an O(n) operation or an O(1) operation??
O(n) worst case. Please read docs.
Sir, please make a video on the FARTHEST SEARCH FIRST cache algorithm; it is difficult to understand, please.
Problem added in todo list. Will make it once I process the earlier requested problems.
@@techdose4u Queue will be better in LRU cache
Can't we use a deque instead of a doubly linked list for O(1) deletion and insertion?
yeah i also thought the same
You can use either a deque, a list, or a DLL.
Can anyone tell me which tool he is using for writing the explanations?
This is how you do it in an interview ❤️❤️
:) Yep
Beautiful!
Thanks :)
How do you decide which data structure should be applied?
Actually you need to figure out the goal of question and the operations you want to perform. Now from your knowledge, you will think what DS to apply to get the job done efficiently. The more you practice, the better you get at deciding good DS & Algo :)
Great explanation as always.
A minor doubt: why are we using a doubly linked list? If it's to link to the previous node, can't we just use the O(1) node-removal trick?
Yes, you can, but then you won't be able to get the new rear page, i.e. the page that will be least recent after removing the current least recent page.
Hence you will need to update the rear to the page just before the least recent page, and that's why a doubly linked list is used.
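A tiny sketch of that eviction step, under the convention used in this reply (the rear node is the least recently used one); the Node and evictRear names are just illustrative:

#include <unordered_map>

struct Node {                          // one cached page
    int key;
    Node *prev = nullptr, *next = nullptr;
};

// Evict the least recently used page. Because the list is doubly linked,
// the new rear (the page just before it) is reachable in O(1) via prev.
void evictRear(Node*& rear, std::unordered_map<int, Node*>& where) {
    Node* old = rear;
    rear = old->prev;                  // new least recently used page
    if (rear) rear->next = nullptr;    // detach the evicted node
    where.erase(old->key);             // drop its hash-map entry as well
    delete old;
}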
thank you
Welcome :)
Why are we using a Doubly Linked List here?
Easy to remove an element
nice
Thank you for this.
You need to update the hashmap too; that takes O(n), not O(1).
I have a silly doubt. If we use a hash function instead of shifting elements in an array, we still have O(N) space complexity, so how can we say it is optimized..? Please bhaiya, solve my doubt..
An array is fixed size... it won't grow dynamically, so a linked list is useful.
Can you do a video on LFU cache as well?
I will try that as well.
Why not single linked list????
👌👌👌👌
Please explain in simple terms.. the main purpose for us students is to solve the problem.. not to figure out complex terminology.
Where is the code man
you are switching rear with the front!
While submitting this on IB, my 1000 points became 200 😂😂😢😢
🤣 No problem.... what matters is learning :)
@@techdose4u testcases of IB are horrible
@@techdose4u Sir, I have a few doubts about some IB questions; can you make explanation videos for those questions....
#Python code with OrderedDict only
from collections import OrderedDict

class LRU:
    def __init__(self, capacity: int):
        self.cache = OrderedDict()
        self.capacity = capacity

    def refer(self, k):
        print('-----------------Each Iter--------------------')
        print('k =', k)
        print(self.cache.keys())
        if k in self.cache:
            self.cache.move_to_end(k, last=False)
            return True
        else:
            self.cache[k] = True
            self.cache.move_to_end(k, last=False)
            if len(self.cache) > self.capacity:
                self.cache.popitem()
            return False

# RUNNER
# initializing our cache with a capacity of 4
lru = LRU(4)
print(lru.refer(1))
print(lru.refer(2))
print(lru.refer(3))
print(lru.refer(1))
print(lru.refer(4))
print(lru.refer(5))
print('------------------------Final------------------------')
print(lru.cache.keys())
1 2 3 4 5 6 3 5 6
You did not cover the handling of such cases!
I already covered it. When 5 comes the second time, you need to reorder the DLL. I already said that reordering the DLL is required and is done in O(1).
But you didn't show the deletion of the node and the changing of the pointers, although you did explain it.
For that we are already using DS and it is too trivial to explain.
Your implementation will fail for Test Case :
1 2 3 1 4
#Python code with self implemented doubly linked list
import gc

class Node:
    def __init__(self, key):
        self.key = key
        self.next = self.prev = None

    def __str__(self):
        return f'{self.key}'

class DoublyLL:
    def __init__(self):
        self.head = self.last = None
        self.size = 0

    def insertfirst(self, node):
        if self.head is None:
            self.head = self.last = node
        else:
            self.head.prev = node
            node.next = self.head
            self.head = node
        self.size += 1

    def deletenode(self, node=None):
        if node is None:
            node = self.last
            self.last = self.last.prev
        elif node.next is None:
            self.last = self.last.prev
        k = node.key
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        node.next = node.prev = None
        gc.collect()
        self.size -= 1
        # print('deleted', k)
        return k

class LRU:
    def __init__(self, n):
        self.dll = DoublyLL()
        self.dict = {}
        self.csize = n

    def refer(self, k):
        print('-----------------Each Iter--------------------')
        print('k =', k)
        print('Cache:\t', end='')
        self.print_cache()
        addr = self.dict.get(k)
        if addr is not None and addr == self.dll.head:
            return True
        if self.dll.size == self.csize or addr is not None:
            del_k = self.dll.deletenode(addr)
            del self.dict[del_k]
        node = Node(k)
        self.dll.insertfirst(node)
        self.dict[k] = node
        return True if addr is not None else False

    def print_cache(self):
        temp = self.dll.head
        while temp is not None:
            print(temp, end=' ')
            temp = temp.next
        print()

lru = LRU(4)
print(lru.refer(1))
print(lru.refer(2))
print(lru.refer(3))
print(lru.refer(1))
print(lru.refer(4))
print(lru.refer(5))
print('------------------------Final------------------------')
lru.print_cache()
print(lru.dll.head)
Super explanation 👌👌