python: functools.lru_cache (beginner - intermediate) anthony explains #54
- Published 18 Jun 2020
- today I explain functools.lru_cache as well as a few ways that you might use it in your programs!
- decorators video: • python @decorators - (...
playlist: • anthony explains
==========
twitch: / anthonywritescode
discord: / discord
twitter: / codewithanthony
github: github.com/asottile
stream github: github.com/anthonywritescode
I won't ask for subscriptions / likes / comments in videos but it really helps the channel. If you have any suggestions or things you'd like to see please comment below! - Science & Technology
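For anyone skimming before watching: a minimal sketch of the basic `lru_cache` usage the video covers. The `square` function echoes the `def square(x: float) -> float` example mentioned in the comments below; the `print` is added here just to make cache hits visible.

```python
import functools


@functools.lru_cache(maxsize=None)
def square(x: float) -> float:
    # pretend this is expensive -- repeated calls with the same
    # argument return the cached result without re-running the body
    print(f'computing square({x})')
    return x * x


square(4.0)  # body runs, prints "computing square(4.0)"
square(4.0)  # cache hit: nothing printed, result returned instantly
```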
very clear explanation. I've been seeing this used in a lot of leetcode solutions and never understood what it meant. thanks!
easy caching! there's a followup re: leetcode here: ua-cam.com/video/4tc5MUBjw-g/v-deo.html
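The memoization pattern the reply refers to, sketched with the classic recursive-fibonacci example (illustrative, not taken from the video itself):

```python
import functools


@functools.lru_cache(maxsize=None)
def fib(n: int) -> int:
    # naive recursion is exponential; the cache collapses it to
    # linear time because each fib(k) is computed exactly once
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


print(fib(100))  # returns instantly; uncached this would never finish
```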
Excellent. I first came across lru_cache in FastAPI but this video helps me understand its wider usage
To the point, I love it! Subscribed!
This makes my memoization DP code A LOT cleaner! Thanks!😅
Thanks man. This has been very helpful.
Thank you for video!
great video 👍
Wow, using lru_cache as a closure namespace is a neat little trick!
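A hedged sketch of the "closure namespace" trick mentioned above: decorating a zero-argument function with `lru_cache` turns it into a lazily initialized, shared singleton (the `Settings` class here is a made-up placeholder, similar in spirit to FastAPI's settings pattern mentioned in another comment):

```python
import functools


class Settings:
    def __init__(self) -> None:
        # imagine expensive setup here: reading env vars, config files, etc.
        self.debug = False


@functools.lru_cache(maxsize=1)
def get_settings() -> Settings:
    # constructed at most once; every caller gets the same instance
    return Settings()


assert get_settings() is get_settings()
```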
thanks a lot
best ever
Super!
I am spamming you with lots of comments, apologies for this... do you think this can be used within AWS lambda functions? I have a specific use case to retrieve read-only data which is only updated nightly; given the lambda container doesn't hang around all that long, I am wondering if I could use this as a cache for json data instead of the additional overhead and complexity of adding elasticache in. I guess what would be even better is if the cache TTL could be controlled?
comments are always good! happy to answer as well :) lambda keeps your instance running for like ~4 hours so this can be used (in memory caching is one of the easiest ways to improve performance with lambda). lru_cache doesn't have a TTL but there are several libraries on pypi which add this functionality
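One common workaround for the missing TTL, sketched here as an assumption rather than an official `lru_cache` feature: key the cached function on a coarse time bucket, so entries are naturally invalidated when the bucket rolls over. All names below are hypothetical.

```python
import functools
import time


@functools.lru_cache(maxsize=1)
def _fetch_data(time_bucket: int) -> dict:
    # `time_bucket` exists only to control invalidation: a new bucket
    # value is a cache miss, forcing a fresh fetch (here: a stand-in
    # for loading the nightly-updated json data)
    return {'loaded_at': time.time()}


def fetch_data(ttl_seconds: int = 3600) -> dict:
    # all calls within the same ttl window share one cached result
    return _fetch_data(int(time.time() // ttl_seconds))
```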
You mentioned that the large cache is wasteful on that function that will only ever return one value, but is it? I assumed the cache is just like a dictionary or something so it wouldn't reserve space for up to maxsize, so if you only ever put one item in it, the size would be the same no matter what you set maxsize to.
yeah it looks like the cpython implementation doesn't use a sized table (I misremembered here!) -- but you could imagine an implementation which does (or even has a fastpath optimization for n=1)
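You can observe that the cache grows on demand rather than pre-allocating `maxsize` slots via `cache_info()`: `currsize` tracks actual stored entries (illustrative example):

```python
import functools


@functools.lru_cache(maxsize=2)
def double(x: int) -> int:
    return x * 2


double(1)  # miss
double(1)  # hit
double(2)  # miss
double(3)  # miss -- evicts the least recently used entry, double(1)
print(double.cache_info())  # CacheInfo(hits=1, misses=3, maxsize=2, currsize=2)
```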
Thanks for the video. Can you maybe make a video on the variable types, e.g. in the video you write: def square(x: float) -> float. What's the usefulness of it other than you explicitly see what are the variable types, in this case input and output are floats.
yep -- check this one out: ua-cam.com/video/H5CnZQDKfhU/v-deo.html
@@anthonywritescode thank you, appreciated!! 👍
I'm guessing functools only works with functions :/
If you set some default values in a module, how do you stop them being reset if that module is imported a second time?
module objects are global so they'll only ever be initialized once (unless someone uses `reload(...)` though at that point almost all bets are off because things like singletons are no longer singletons!)
@@anthonywritescode Well that is awfully sensible - horray for Python!
And thanks for clearing that up :)
is this kinda like DFS with memoization?
lru_cache can be used for memoization, yes
How long would the values be cached for with lru_cache?
until the process exits or the cache is manually cleared or they are evicted due to reaching the `maxsize`
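A small illustration of the two ways a cached value goes away before process exit, eviction at `maxsize` and manual clearing (function names are made up):

```python
import functools


@functools.lru_cache(maxsize=2)
def add_one(x: int) -> int:
    return x + 1


add_one(1)
add_one(2)
add_one(3)  # add_one(1) is evicted (least recently used)
assert add_one.cache_info().currsize == 2

add_one.cache_clear()  # manual clearing empties the cache entirely
assert add_one.cache_info().currsize == 0
```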
I'm surprised there's no way to specify a time to live?
Can a logger benefit from slamming lru_cache on every single* method of the Logger class?
I did profiling on the logger class and it turns out the number of function calls was reduced, but is it safe to assume that slamming most of the methods with lru_cache is okay? Implementation-wise the output of the logger is correct, so I haven't had any issues there; just curious whether it is a good implementation or am I missing something big here?
I assume you're writing a log-once logger? adding lru_cache would prevent duplicate logging calls from succeeding
@@anthonywritescode sorry, but I didn't get what you mean by a log-once logger. This logger is something I initialize once and then use throughout the code. And yes, lru_cache did manage to avoid duplicate function calls and my output was also what I expected, but I'm not sure why I feel I'm doing something wrong here. Also, as per the comment below, please explain `partial` as well.
I guess I don't understand what you're asking -- perhaps show some code?
@@anthonywritescode haha fine. here's the link to the code: github.com/kaamiki/miroslava/blob/main/src/miroslava/utils/logger.py
yeah I would not suggest doing that -- you're essentially writing a memory leak (keeping log entry objects alive much longer than necessary) -- formatPath is probably the only function that _could_ be lru_cache'd though it's unlikely to improve your performance
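For context, a hedged sketch of what a "log-once" helper (the legitimate use hinted at above) might look like, along with a comment on why the retained arguments amount to the slow memory leak the reply warns about. All names here are hypothetical:

```python
import functools
import logging

logger = logging.getLogger(__name__)


@functools.lru_cache(maxsize=None)
def warn_once(message: str) -> None:
    # the cache swallows repeat calls with an identical message, so
    # each distinct warning is logged exactly once -- but note that
    # every distinct message string is kept alive for the life of the
    # process, which is exactly the leak described above
    logger.warning(message)


warn_once('deprecated api used')
warn_once('deprecated api used')  # cache hit: no second log line
```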
please explain functools.partial as well.
will do! added it to the list
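Until that video exists, a minimal sketch of what `functools.partial` does (the `power` example is made up for illustration): it freezes some arguments of a callable and returns a new callable expecting the rest.

```python
import functools


def power(base: float, exponent: float) -> float:
    return base ** exponent


# partial "freezes" the exponent, producing new single-argument callables
square = functools.partial(power, exponent=2)
cube = functools.partial(power, exponent=3)

print(square(4))  # 16
print(cube(2))    # 8
```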
Respects.