What happens if there are many requests for the same item that is not in the cache? Is there any locking mechanism so the item is generated only once, for the first request, while the rest just wait? Or should there be some kind of warm-up mechanism on memcached restart, so every item is in the cache before any request arrives? And also cache updates on any edit to the stored item?
I think even in the case of a burst of requests, a few requests will be ahead of the others, and those will warm up the cache. In some specific scenarios, if we expect a lot of requests hitting the system, we can choose to pre-warm the cache with the results.
One of the best channels for Golang...
Thank you. Please spread the word.
Really great videos!
Glad you like them!
May I ask a question? Why not do it this way?
package main

import (
    "errors"

    "github.com/beego/beego/v2/core/logs"
    "github.com/bradfitz/gomemcache/memcache"
)

var mc = memcache.New("127.0.0.1:11211") // assumed client setup

// CacheData returns the cached value for cacheKey; on a miss it
// generates the value via callback and stores it with the given
// TTL (in seconds).
func CacheData(cacheKey string, ttl int32, callback func() []byte) []byte {
    item, err := mc.Get(cacheKey)
    var retValue []byte
    if err != nil {
        // Log unexpected errors; a plain miss is the normal path
        if !errors.Is(err, memcache.ErrCacheMiss) {
            logs.Error("memcache error: ", err)
        }
        retValue = callback()
        // Cache the result for subsequent requests
        memcacheItem := &memcache.Item{
            Key:        cacheKey,
            Value:      retValue,
            Expiration: ttl,
        }
        if err := mc.Set(memcacheItem); err != nil {
            logs.Error("memcache error: ", err)
        }
    } else {
        retValue = item.Value
    }
    return retValue
}
Yes, this is a great way.
I started with an assumption that the function passed can have any kind of args.
I think this would have been simpler.
Thanks! The error handling and types are way better like this!
@codeheimThinks,
learning from you.
Nice tutorial, but why did you implement the cache function with a complex callback? Couldn't you simply pass the byte array as a third parameter?
I used this mechanism so that we can pass any kind of function. But you're right, that would work too.
I thought Redis was more favored than Memcached.
Yes, I too prefer Redis. But Memcached is popular too.
thanks