Man I just really love this problem 😃😃😃
Btw here's the LRU Cache explanation: ua-cam.com/video/7ABFKPK2hD4/v-deo.html
Now write it in C89 ☠
Tell me you are a nerd without telling me u r a nerd: I am the first viewer on Neetcode's LFU cache video.
If I get these problems on my interview, I am going to deliver pizza for the rest of my life.
Me : Starts to enjoy coding and getting comfortable with medium level questions.
Leetcode : Now solve LFU Cache
Me : Starts to hate my life😭
You lied to me this problem blows.
If I ever get this in an interview I'm just going to show them this video instead.
This problem should be in its own category... it forces you to implement multiple data structures and keep track of many things... it is just too much to come up with in a 45-minute interview if you have not seen it before...
Dude this is amazing. I was asked this question in an interview and could solve it in O(log N). Your explanations are simple, easy to follow, yet elegant.
If you don't mind me asking, which company asked this question?
@@MP-ny3ep the one on whose platform you are commenting :)
I've done it using the same structure as for LRU, the only difference being that each list node has a "frequency" attribute, plus a new frequencyMap where the key is a frequency and the value is a pointer to the most recently added node with that frequency. Your approach is better memory-wise, since you don't store the key and frequency for each node.
I did the same thing and had to write excruciatingly long code. Can you share your solution? I would appreciate it.
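Not the original commenter's code, just a minimal Python sketch of the idea described above (one LRU-style doubly linked list where every node carries its frequency, plus a map from frequency to the most recently added node with that frequency); names like Node, freq_tail and _bump are my own:

class Node:
    def __init__(self, key=0, val=0, freq=0):
        self.key, self.val, self.freq = key, val, freq
        self.prev = self.next = None

class LFUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.nodes = {}        # key -> Node
        self.freq_tail = {}    # freq -> most recently added node with that freq
        self.head = Node()     # dummy head (freq 0); real nodes keep freq >= 1
        self.tail = Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _insert_after(self, anchor, node):
        node.prev, node.next = anchor, anchor.next
        anchor.next.prev = node
        anchor.next = node

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _bump(self, node):
        f = node.freq
        # where the promoted node will be re-inserted
        anchor = self.freq_tail.get(f + 1)
        if anchor is None:
            anchor = node.prev if self.freq_tail[f] is node else self.freq_tail[f]
        # the old bucket loses this node, so repair its tail pointer
        if self.freq_tail[f] is node:
            if node.prev.freq == f:
                self.freq_tail[f] = node.prev
            else:
                del self.freq_tail[f]
        self._unlink(node)
        node.freq = f + 1
        self._insert_after(anchor, node)
        self.freq_tail[f + 1] = node

    def get(self, key: int) -> int:
        if key not in self.nodes:
            return -1
        node = self.nodes[key]
        self._bump(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        if self.cap == 0:
            return
        if key in self.nodes:
            node = self.nodes[key]
            node.val = value
            self._bump(node)
            return
        if len(self.nodes) == self.cap:
            victim = self.head.next            # lowest frequency, least recently used
            if self.freq_tail.get(victim.freq) is victim:
                del self.freq_tail[victim.freq]
            self._unlink(victim)
            del self.nodes[victim.key]
        node = Node(key, value, 1)
        self.nodes[key] = node
        self._insert_after(self.freq_tail.get(1, self.head), node)
        self.freq_tail[1] = node

The list stays sorted by frequency (ties ordered by recency), so eviction is just head.next.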
Glad I was able to figure out the logic and implement this problem by myself. I did have to look up the solution for LRU Cache, but having that knowledge helped a lot here.
The way I came up with the logic of using a "hash map of doubly linked lists" is pretty fascinating. I revised some stack problems a few days back, and one of the problems I went through was "Maximum Frequency Stack". In that question, we had to use a "hash map of stacks" to keep track of the most frequent element.
So, when I saw the LFU Cache problem, I quickly thought: what if I could utilize that same knowledge here? What if I could maintain a map of doubly linked lists to keep track of the least frequently used node? This way I was able to come up with the optimal approach.
Solving this problem made me realize the importance of revisiting older problems that you have already solved (not memorizing them, of course, just revising the idea or pattern used). This helps a lot in spotting patterns in newer problems.
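For reference, a minimal sketch of that "hash map of stacks" pattern from Maximum Frequency Stack (LC 895), with my own variable names (not the commenter's code):

from collections import defaultdict

class FreqStack:
    def __init__(self):
        self.freq = defaultdict(int)    # value -> how many times it is currently in the stack
        self.group = defaultdict(list)  # frequency -> stack of values that reached that frequency
        self.maxfreq = 0

    def push(self, val: int) -> None:
        self.freq[val] += 1
        self.maxfreq = max(self.maxfreq, self.freq[val])
        self.group[self.freq[val]].append(val)

    def pop(self) -> int:
        # the most frequent value (most recent on ties) is on top of the maxfreq stack
        val = self.group[self.maxfreq].pop()
        self.freq[val] -= 1
        if not self.group[self.maxfreq]:
            self.maxfreq -= 1
        return val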
🤓
Damnn that's one hell of a question!!
I just ended up using a Heap for both frequency and a sequence_counter
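A rough sketch of what that heap-based approach might look like (my own reconstruction, not the commenter's actual code; names like live_seq are mine): every use pushes a fresh (freq, seq, key) entry, and eviction pops until it finds an entry that is still current. Operations become O(log n) instead of O(1), but the bookkeeping is much simpler.

import heapq

class LFUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.vals = {}      # key -> value
        self.freq = {}      # key -> use count
        self.live_seq = {}  # key -> sequence number of its latest heap entry
        self.heap = []      # (freq, seq, key); may contain stale entries
        self.seq = 0        # global counter, doubles as a recency tiebreak

    def _touch(self, key):
        self.freq[key] = self.freq.get(key, 0) + 1
        self.seq += 1
        self.live_seq[key] = self.seq
        heapq.heappush(self.heap, (self.freq[key], self.seq, key))

    def get(self, key: int) -> int:
        if key not in self.vals:
            return -1
        self._touch(key)
        return self.vals[key]

    def put(self, key: int, value: int) -> None:
        if self.cap == 0:
            return
        if key not in self.vals and len(self.vals) == self.cap:
            # lazy deletion: skip heap entries that no longer reflect the current count
            while True:
                f, s, k = heapq.heappop(self.heap)
                if k in self.vals and self.live_seq[k] == s:
                    del self.vals[k]
                    del self.freq[k]
                    del self.live_seq[k]
                    break
        self.vals[key] = value
        self._touch(key)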
Thank you NeetCode for your amazing explanation ! A really complex and challenging question...
And most impressive of all - he produced this video within 2 hours after LeetCode posted the problem of the day.
“The problem that will make you hate your life” lol😂
Yay I'm early ! Just wanted to say I love your videos and will forever support you and your channels ♥ thank you for all that you do !
Is there code shared?
Thanks Neetcode for your amazing explanation ❤️. Please make videos on the LeetCode weekly and biweekly contest problems as well.
Let's say you have a bunch of nodes with frequency 3 and only one with frequency 1. If you evict the one at frequency 1, how do you know to set the minLFU count to 3 now, instead of just incrementing it to 2?
class LFUCache {
    class ListNode {
        int val = 0;
        ListNode next = null;
        ListNode prev = null;
        ListNode() {}
        ListNode(int val, ListNode prev, ListNode next) {
            this.val = val;
            this.prev = prev;
            this.next = next;
        }
    }

    // LRU-ordered list of keys: left = oldest, right = newest
    class LinkedList {
        ListNode left = new ListNode();                 // dummy head
        ListNode right = new ListNode(0, left, null);   // dummy tail
        Map<Integer, ListNode> map = new HashMap<>();   // key -> its node, for O(1) removal

        LinkedList() {
            left.next = right;
        }

        int length() {
            return map.size();
        }

        void pushRight(int val) {
            ListNode node = new ListNode(val, right.prev, right);
            map.put(val, node);
            right.prev = node;
            node.prev.next = node;
        }

        void pop(int val) {
            if (map.containsKey(val)) {
                ListNode node = map.get(val);
                ListNode next = node.next, prev = node.prev;
                next.prev = prev;
                prev.next = next;
                map.remove(val);
            }
        }

        int popLeft() {
            int res = left.next.val;
            pop(res);
            return res;
        }

        void update(int val) {
            pop(val);
            pushRight(val);
        }
    }

    int n = 0;        // capacity
    int lfuCnt = 0;   // smallest use count currently present in the cache
    Map<Integer, Integer> valMap = new HashMap<>();     // key -> value
    Map<Integer, Integer> countMap = new HashMap<>();   // key -> use count
    Map<Integer, LinkedList> listMap = new HashMap<>(); // use count -> LRU list of keys with that count

    public LFUCache(int capacity) {
        n = capacity;
    }

    // move key from the cnt bucket to the (cnt+1) bucket, bumping lfuCnt if the old bucket emptied
    void counter(int key) {
        int cnt = countMap.getOrDefault(key, 0);
        countMap.put(key, cnt + 1);
        if (!listMap.containsKey(cnt))
            listMap.put(cnt, new LinkedList());
        listMap.get(cnt).pop(key);
        if (!listMap.containsKey(cnt + 1))
            listMap.put(cnt + 1, new LinkedList());
        listMap.get(cnt + 1).pushRight(key);
        if (listMap.get(cnt).length() == 0 && cnt == lfuCnt)
            lfuCnt++;
    }

    public int get(int key) {
        if (!valMap.containsKey(key))
            return -1;
        counter(key);
        return valMap.get(key);
    }

    public void put(int key, int value) {
        if (n == 0) return;
        if (!valMap.containsKey(key) && valMap.size() == n) {
            // evict the least recently used key from the lowest-count bucket
            int val = listMap.get(lfuCnt).popLeft();
            valMap.remove(val);
            countMap.remove(val);
        }
        valMap.put(key, value);
        counter(key);
        lfuCnt = Math.min(lfuCnt, countMap.get(key)); // a brand-new key has count 1, which resets the minimum after an eviction
    }
}
/**
* Your LFUCache object will be instantiated and called as such:
* LFUCache obj = new LFUCache(capacity);
* int param_1 = obj.get(key);
* obj.put(key,value);
*/
Hey, can we use a deque of tuples to emulate a doubly linked list?
I think not, because we have to pop from the middle in O(1) time.
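A quick illustration of why (hypothetical toy data, not from the video): a deque can remove an arbitrary element, but only by scanning, whereas a doubly linked list node unlinks itself in O(1) once you hold a reference to it.

from collections import deque

d = deque((key, key * 10) for key in range(5))
d.remove((2, 20))    # removes the matching element, but via a linear scan: O(n), not O(1)
print(d)             # deque([(0, 0), (1, 10), (3, 30), (4, 40)])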
this question is an absolute mind*uck
Also, for the least recently used cache, I don't think we have to create a custom linked list. We can use the built-in ordered dictionary. Here is code that beats 98% of solutions. Please review.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.cache = OrderedDict()
        self.size = capacity

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            # just update the cache
            self.cache[key] = value
            self.cache.move_to_end(key)
        else:
            # check size of cache
            if len(self.cache) == self.size:
                self.cache.popitem(last=False)
            self.cache[key] = value
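A quick check of the OrderedDict-based class above (assuming it is defined exactly as posted); expected outputs are in the comments:

lru = LRUCache(2)
lru.put(1, 1)
lru.put(2, 2)
print(lru.get(1))   # 1, and key 1 becomes the most recently used
lru.put(3, 3)       # cache is full, so popitem(last=False) evicts key 2
print(lru.get(2))   # -1
print(lru.get(3))   # 3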
Failing on test case ["LFUCache","put","put","get","get","get","put","put","get","get","get","get"]
[[3],[2,2],[1,1],[2],[1],[2],[3,3],[4,4],[3],[2],[1],[4]]
Output - [null,null,null,2,1,2,null,null,3,2,-1,4]
Expected - [null,null,null,2,1,2,null,null,-1,2,1,4]
@@HoppyGamer This is the suggested code for the LRU cache that I mentioned above.
@@ADITYAKUMARking My bad, didn't notice.
This was really helpful... can you also make a solution video on "All O`one Data Structure" from LeetCode please?
You should do all O(1) data structure next :D
Too much information in just 14 mins haha !
This solution's runtime merely beats 30% of submissions; I have a faster one:
class LFUCache {
public:
struct node{
node* next;
node* prev;
int value;
int key;
int counter;
node* tailPtr;
node(int _key, int _val){
key = _key;
value = _val;
counter = 1;
next = prev = NULL;
tailPtr = NULL;
}
};
int cap;
int maxCount;
int minCount;
unordered_map<int, node*> hashmap;
vector<node*> counters;
LFUCache(int capacity){
for(int i=0; icounter;
int result = currNode->value;
if(currCount+1 > maxCount) maxCount = currCount+1;
if(counters.size() < maxCount + 1){
counters.resize(2*(maxCount+1));
}
changeNode(currNode, currCount+1);
if(counters[currCount]->next->next == NULL && minCount == currCount) minCount = currCount + 1;
currNode->counter = currCount+1;
cout value = value;
return;
}
else if(hashmap.size() < cap && hashmap.find(key) == hashmap.end()){
minCount = 1;
if(maxCount==0) maxCount = 1;
node* newNode = new node(key, value);
addNode(newNode, 1);
hashmap[key] = newNode;
return;
}
else if(hashmap.size() == cap && hashmap.find(key) == hashmap.end()){
node* currHead = counters[minCount];
node* temp = currHead->tailPtr->prev;
int currKey = temp->key;
hashmap.erase(currKey);
removeNode(minCount);
node* newNode = new node(key, value);
addNode(newNode, 1);
hashmap[key] = newNode;
minCount = 1;
return;
}
}
void initializeVector(){
for(int i=0; inext = tail;
tail->prev = head;
head->tailPtr = tail;
counters[i] = head;
}
}
void addNode(node* currNode, int i){
node* currHead = counters[i];
node* nextHead = currHead->next;
currNode->next = nextHead;
currNode->prev = currHead;
currHead->next = currNode;
nextHead->prev = currNode;
}
void removeNode(int i){
node* currHead = counters[i];
node* tail = currHead->tailPtr;
node* lastUse = tail->prev;
lastUse->prev->next = tail;
tail->prev = lastUse->prev;
}
void changeNode(node* currNode, int newCounter){
node* nextNode = currNode->next;
node* prevNode = currNode->prev;
nextNode->prev = prevNode;
prevNode->next = nextNode;
if(counters[newCounter] == NULL){
node* head = new node(-1,-1);
node* tail = new node(-1,-1);
head->next = tail;
tail->prev = head;
head->tailPtr = tail;
counters[newCounter] = head;
}
addNode(currNode, newCounter);
}
};
Thanks!
Yes, it really made me hate my life... 😆😂
yay! my runtime was 435ms faster than yours!
thanks giga bro
god bless OrderedDict
this problem deserves a place in hell
LMAO XD
It's so hard... If I get this question I will fail
Me too
Awesome 👍
You made a mistake. In lines 71 and 72, you can't pop a dict "by value". What if self.valmap = {"3": 1, "2": 1}? Both key 3 and key 2 share the same value of 1, so which one do you want to delete? The correct way is "del self.valmap[res]". Also, in line 34 within popLeft(), you need to return self.left.next.key instead of val.
A second part that can be improved is line 69. The way you do it in the LRU cache is to write line 71 first, i.e. add the value to self.valmap first, and then check if len(self.valmap) > self.capacity, and if so remove the LRU entry. You don't need to invert the logic with an additional check that the key is not in self.valmap. What's more, by doing it this way you can remove lines 66 and 67: that edge case no longer exists, since you add and then remove (not remove and then add). This makes me feel that although the voice in the videos sounds like the same person, the coding style is two different styles...
Finally, self.lfuCut can be removed completely. It is not necessary at all, and the logic to maintain this variable is very complicated, especially line 56. You can't possibly think of this edge case at the beginning. In LeetCode 895 I did it like "lfuCut = len(self.listmap)"; this question can use "lfuCut = min(self.listmap.keys())" to get the minimum frequency count. And every time you pop from self.listmap[cnt], you check whether its LRU cache is empty and, if so, del self.listmap[cnt]. It is a very standard way of doing it, and you use it in a lot of videos. Doing it this way adds a little bit of time (10% - 20%) since the min() function is O(n), but I just don't care. I want as few variables as possible.
I see no one pointing out these issues in the comments, which means this question is really hard, and probably most people just give up, not even copying the answer to try it.
Don't get me wrong, your explanation is still the best on YouTube. I couldn't possibly have understood it without watching your video. I just wanted to point these things out because I love the channel. I appreciate all your effort, man!
class Node(object):
    def __init__(self, key = None, value = None, prev = None, next = None):
        self.key = key
        self.value = value
        self.prev = prev
        self.next = next

class LRUCache(object):
    def __init__(self, capacity = 10000):
        """
        :type capacity: int
        """
        self.capacity = capacity
        self.cache = {}
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key in self.cache:
            self.remove_node(self.cache[key])
            self.add_node(self.cache[key])
            return self.cache[key].value
        return -1

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: None
        """
        if key in self.cache:
            self.remove_node(self.cache[key])
        new_node = Node(key, value)
        self.add_node(new_node)
        self.cache[key] = new_node
        if len(self.cache) > self.capacity:
            del self.cache[self.head.next.key]
            self.remove_node(self.head.next)

    def add_node(self, node):
        # self.tail.prev <- node -> self.tail
        node.next = self.tail
        node.prev = self.tail.prev
        self.tail.prev.next = node
        self.tail.prev = node

    def remove_node(self, node):
        # node.prev -> node -> node.next
        node.prev.next = node.next
        node.next.prev = node.prev
        node.next = None
        node.prev = None

    def pop(self, key):
        if key not in self.cache:
            return
        self.remove_node(self.cache[key])
        del self.cache[key]

    def pop_left(self):
        rs = self.head.next.key
        self.pop(self.head.next.key)
        return rs

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.least_frev = 0
        self.hashmap = {}  # key => value
        self.count = {}    # key => count
        self.cache = {}    # count => LRU

    def counter(self, key):
        count = self.count.get(key, 0)
        if count in self.cache:
            # remove the key from its old LRU bucket
            self.cache[count].pop(key)
            if not self.cache[count].cache:
                del self.cache[count]
        # increment the count
        self.count[key] = 1 + self.count.get(key, 0)
        # add the key to the new LRU bucket
        self.cache.setdefault(count + 1, LRUCache()).put(key, self.hashmap[key])

    def get(self, key: int) -> int:
        if key in self.hashmap:
            # usage count + 1
            self.counter(key)
            return self.hashmap[key]
        return -1

    def put(self, key: int, value: int) -> None:
        # maintain the data structures
        self.hashmap[key] = value
        # over capacity
        if len(self.hashmap) > self.capacity:
            curr = self.cache[min(self.cache.keys())].pop_left()
            del self.hashmap[curr]
            del self.count[curr]
        # usage count + 1
        self.counter(key)

# Your LFUCache object will be instantiated and called as such:
# obj = LFUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
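A quick behavioral check of the classes above (assuming they are defined exactly as posted); expected outputs are in the comments:

lfu = LFUCache(2)
lfu.put(1, 1)
lfu.put(2, 2)
print(lfu.get(1))   # 1 (key 1 now has count 2, key 2 still has count 1)
lfu.put(3, 3)       # evicts key 2, the least frequently used
print(lfu.get(2))   # -1
print(lfu.get(3))   # 3
lfu.put(4, 4)       # keys 1 and 3 both have count 2; evicts key 1, the least recently used
print(lfu.get(1))   # -1
print(lfu.get(4))   # 4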
The problem with doing "lfuCut = min(self.listmap.keys())" is that the "put" and "get" operations have to be O(1), but I still like your solution overall.
Please make videos on easy problems
Definitely will, but daily LC problems have been all Hards recently
@@NeetCodeIO It would be great if you chose 100 easy problems on LC.
Here's a playlist of about 60 easy problems from my other channel ua-cam.com/play/PLot-Xpze53lfQmTEztbgdp8ALEoydvnRQ.html
Fuck, I watched this solution and now I hate my life 😭😭
Awesome
NGL, this is one of your least understandable videos.
Pause between sentences, take a breath.
Sometimes it's hard to understand when one sentence is blurred into the next.
Not sure if it has been edited this way, but your earlier videos didn't have this problem.
Slow down the speed
Your internet is slow, my guy
I'm sorry, but you overcomplicated this solution. Not as good as the rest of your videos.
It is an overcomplicated problem, my friend.
Which part do you think is overcomplicated?