I have a problem with the cache.
It is possible to set a maximum entry size. That's fine. But what happens if we try to store data larger than that configured maximum? Well... nothing. No warning, no indication that something went wrong. Is that really how the cache should work? Is there any way to write a warning to the logs, or to get a false result when we are in the "All assertions..." branch or similar?
The only workaround I can see is to call Look Up in Cache immediately after the Store to Cache call. Is that the only option, or are there others?
From my point of view, it's not necessary to know whether the data was successfully stored in the cache.
Whenever we want to access the data, we "look up in cache" first; if for any reason the data is not in the cache, we retrieve it again from the resource provider.
So it doesn't matter whether "store to cache" fails or not.
If we raised an error for the "store to cache" assertion, we would need an extra error handling policy for that assertion, which would make things more complicated.
I understand your logic, and having complex error handling is definitely not a good option.
But I also understand Jan's "pain". Suppose you set the max entry size to 10000 bytes, but more than 50% of your requests carry values of up to 15000 bytes. Then none of those requests will be cached and the cache is used very poorly. Yes, you could compare the number of incoming requests with the number routed to the backend, but that is complicated and additional effort.
I had a similar discussion and question last year about whether there is some kind of cache monitoring, i.e., special commands to check how the cache is used (how many entries, the overall size, RAM consumption). This would be very helpful for tuning the cache parameters and for checking how caching impacts overall system performance and resource consumption.
Is there really no way to further analyze the usage of the cache?
I get your point.
I don't remember any cache monitoring feature; it may need a new idea ticket.
A workaround could be to monitor performance instead: if backend response times increase, tune the cache settings.