Layer7 API Management

  • 1.  [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted Apr 24, 2023 04:43 PM

    I'm excited to announce the availability of a new experimental distributed Key Value Storage feature.

    This feature works much like the existing store and look up in cache assertions, except that this assertion was designed to use the gateway's current support for embedded or external Hazelcast. We plan to extend it to other external key value storage solutions in the future, such as Redis and those currently supported by the tactical Remote Cache assertion.

    That means this feature can share key values across a cluster of gateways, or across a cluster-less/database-less deployment of many gateways.




    If this is something of interest to you, please reply directly to me or this thread to request access.

    Experimental Features

    An early-access, progressive-delivery feature developed rapidly by Layer7 and offered to customers for experimentation. Not intended for production use. Depending on user feedback, experimental features may be altered in or removed from future releases.

    Layer7 support is not available. Users are encouraged to share feedback in Layer7 Communities as responses to this thread.



    ------------------------------
    Ben Urbanski
    Product Manager, API Gateway
    Layer7 API Management
    ------------------------------


  • 2.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted Apr 28, 2023 11:48 AM

    Ben,

    Great feature. I'm curious whether any thought has been given to listing all of the keys in the store. I realize that this is not really the intent of a key/value store, but it can be useful.

    For example, you might store a username/sessionId key/value pair and want the ability to log out all users (purge their session keys so they are forced to reauthenticate). That is virtually impossible if you cannot get a list of the keys in the store.

    I have run into similar scenarios where the inability to list the keys forced us to use an external database call when the cache would have been much more performant.




  • 3.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted May 03, 2023 10:31 AM

    Thanks, Joe. Great feedback. We haven't considered enhancing the lookup assertion or adding a new assertion to get all keys in the store, but we will now. However, if the primary use case for that would be to delete all keys, wouldn't it be better to just provide a delete all keys feature? We can still consider getting all keys, but I do worry about scenarios where there are possibly millions of keys.



    ------------------------------
    Ben Urbanski
    Product Manager, API Gateway
    Layer7 API Management
    ------------------------------



  • 4.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted May 03, 2023 12:15 PM

    Ben, I wouldn't say that the primary purpose is to delete keys; that was just an example.

    One customer hoped to use a cache to gather transaction metrics for a service. Every request would add the transaction time and username as the key/data to a cache with a 24-hour expiration. A separate service would then return a web page showing the last day's worth of transactions by looping over the keys in the cache. Unfortunately, because you need to know a key to retrieve its value, there was no way to use the cache this way without being able to list the keys; so now they have to log each transaction to a database.

    I totally appreciate that there are technical concerns associated with my idea.

    Likely the easiest way to implement this would be a "get keys" assertion that accepts a cache ID (with a checkbox for the distributed key store), a 'count', a 'skip past key', and a 'target variable' field. The count would limit the number of keys returned. "Skip past key" would let you skip past the last key you previously consumed (assuming the key store/cache is FIFO), enabling batch operations. The assertion would return a multivalued context variable containing the keys, which the customer could then feed to existing assertions such as "run assertions for each item".
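    The pagination behavior Joe proposes can be sketched in a few lines. This is purely illustrative — `get_keys`, `count`, and `skip_past_key` are hypothetical names from his suggestion, not an actual gateway assertion, and a plain dict stands in for the key store (assuming it yields keys in a stable insertion order):

    ```python
    # Illustrative sketch of the proposed "get keys" pagination semantics.
    # The function and parameter names are hypothetical, not gateway API.

    def get_keys(store, count, skip_past_key=None):
        """Return up to `count` keys, resuming after `skip_past_key`.

        Assumes the store yields keys in a stable (insertion) order.
        """
        keys = list(store.keys())
        start = 0
        if skip_past_key is not None:
            start = keys.index(skip_past_key) + 1
        return keys[start:start + count]

    # Batch consumption: keep fetching pages until one comes back empty.
    store = {f"user{i}": f"session{i}" for i in range(7)}
    last = None
    batches = []
    while True:
        page = get_keys(store, count=3, skip_past_key=last)
        if not page:
            break
        batches.append(page)
        last = page[-1]
    ```

    Each returned page would map onto the proposed multivalued context variable, ready for a "run assertions for each item" loop.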




  • 5.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted May 03, 2023 12:59 PM

    I 100% support this idea. I know there has been at least one request in the past for a "get keys" feature for our existing cache functionality. I don't recall the specific use case, but I think I filed a feature request at the time.



    ------------------------------
    Jay MacDonald - Adoption Architect - Broadcom API Management (Layer 7)
    ------------------------------



  • 6.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted May 03, 2023 12:58 PM

    Ben,

    I guess I should just request access and test for myself.  But I did have some other questions.

    1. Is it a single key/value store (your screenshot suggests it is, since I don't see a field to specify which store to use), or can we have multiple stores with different names/IDs like we do with caches?
    2. Do we have the same controls as we do with a cache as far as the max entries, max age, max entry size, etc?  If not, how/when are items removed from the store?
    3. Any limitations on data types (Can it store multipart variables & messages)?  Size limits?
    4. Will we have the ability to preload/bulk load the key/value store, or just one entry at a time?
    5. Any chance of using listeners/triggers in the future?  For example, a policy that would execute when a value in the map changes could make for some interesting use cases; I know that Hazelcast supports listeners.

    The more I think about this, the more I can see some really powerful applications of it.  Especially with regard to clusterless implementations.   I'm excited to give it a try.




  • 7.  RE: [Experimental Feature] Distributed Key Value Storage

    Broadcom Employee
    Posted May 05, 2023 10:32 AM

    Hello Joe,

    In answer...

    • Is it a single key/value store (your screenshot suggests it is, since I don't see a field to specify which store to use), or can we have multiple stores with different names/id's like we do with caches?

      Currently we support a single k/v store (and, for the experimental release, embedded or external Hazelcast specifically). However, we will consider support for configuring and selecting among multiple stores down the road.

    • Do we have the same controls as we do with a cache as far as the max entries, max age, max entry size, etc? If not, how/when are items removed from the store?

      We currently support only the max age control in the Store to Key Value Storage assertion, with this behavior:

      - If the value is negative, the entry lives in the key store indefinitely (unless evicted or explicitly removed by some means).

      - If the value is zero, the entry is removed.

      - If the value is positive, the entry's max age is set to that value.

      There are no additional controls available for embedded Hazelcast, but for external Hazelcast, many such controls can be configured at the server level.
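    The max-age rules above can be modeled with a small in-memory sketch. The class and method names here are illustrative stand-ins, not the gateway's or Hazelcast's API; expiry is checked lazily on lookup:

    ```python
    import time

    # Minimal sketch of the max-age semantics described above.
    # KeyValueStore is an illustrative stand-in, not a real gateway class.

    class KeyValueStore:
        def __init__(self):
            self._data = {}  # key -> (value, expiry_time or None)

        def store(self, key, value, max_age_seconds):
            if max_age_seconds < 0:
                self._data[key] = (value, None)            # live indefinitely
            elif max_age_seconds == 0:
                self._data.pop(key, None)                  # remove the entry
            else:
                expiry = time.monotonic() + max_age_seconds
                self._data[key] = (value, expiry)          # set max age

        def lookup(self, key):
            entry = self._data.get(key)
            if entry is None:
                return None
            value, expiry = entry
            if expiry is not None and time.monotonic() >= expiry:
                del self._data[key]                        # lazily evict
                return None
            return value
    ```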

    • Any limitations on data types (Can it store multipart variables & messages)? Size limits?

      From a context variable perspective, the current assertions have been tested with these context variable types, but should probably also work with others:

      - String

      - Integer

      - Date

      - Message

      Different storage providers support different data types, but all of them appear to support strings. The current behavior of these assertions is therefore to convert context variable values to/from JSON strings understood by the gateway, and those JSON strings are what gets stored as the value in the key store.

      The current size limit for these JSON strings is 2^31-1 (or 2,147,483,647 bytes).
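    The round trip Ben describes — serialize on store, parse on lookup — might look something like this sketch. The `"type"`/`"value"` envelope is an assumption for illustration only; the gateway's actual JSON format is not documented in this thread:

    ```python
    import json

    # Sketch of the store/lookup round trip: context variable values are
    # serialized to JSON strings before storage and parsed back on lookup.
    # The envelope shape ("type"/"value") is a hypothetical example.

    def to_stored_string(value):
        return json.dumps({"type": type(value).__name__, "value": value})

    def from_stored_string(s):
        return json.loads(s)["value"]

    # A plain dict stands in for the key/value store.
    store = {}
    store["count"] = to_stored_string(42)
    store["name"] = to_stored_string("alice")
    ```

    Storing only strings keeps the assertions portable across backends (Hazelcast now, Redis and others later), at the cost of a serialization step on every store and lookup.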

    • Will we have the ability to preload/bulk load the key/value store, or just one entry at a time?

      While pre-loads/bulk loads can be performed outside of the gateway (and this feature would pick up entries loaded that way), the assertions themselves currently support only one entry at a time. We can consider adding direct pre-load/bulk load support in future versions of this feature.

    • Any chance of using listeners/triggers in the future? For example, a policy that would execute when a value in the map changes could make for some interesting use cases; I know that Hazelcast supports listeners.

      No current plans, but this is something we can definitely consider. However, we are trying to design and develop for the lowest-common-denominator capabilities across multiple key value store solutions (beginning with embedded and external Hazelcast, but also Redis soon, possibly some or all of the other solutions currently supported by the tactical Remote Cache assertion, and possibly others down the road).


    Regards,



    ------------------------------
    Ben Urbanski
    Product Manager, API Gateway
    Layer7 API Management
    ------------------------------