Layer7 API Management

 About c3p0DataSource.maxPoolSize and io.httpMaxConcurrency

MARUBUN SUPPORT posted Jul 12, 2024 03:13 AM

Hi Team,

Our customers have asked questions about c3p0DataSource.maxPoolSize and io.httpMaxConcurrency.
I know you are busy, but I would appreciate your answers or advice.

[Product]
API Gateway 11.0

[Questions]
Q1: About node.properties : c3p0DataSource.maxPoolSize

Q1-1:
If they set "node.properties : c3p0DataSource.maxPoolSize" to 1710, what is the maximum number of requests that can be accepted?

Q1-2:
Also, they assume that the value set in this parameter is the maximum number of simultaneous connections that can be made.
Is this assumption correct?

Q2: This question is related to Q1.

They set "Cluster Properties : io.httpMaxConcurrency" to 1610 and understand "io.httpMaxConcurrency" as the maximum number of HTTP(S) requests that can be accepted.
Does the value set for "io.httpMaxConcurrency" need to take into account the value set for "c3p0DataSource.maxPoolSize"?
If so, what are the conditions for setting it?
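
For reference, the two settings in question live in different places. A sketch using the values from this post (the values are the ones being asked about, not recommendations):

```properties
# node.properties (per Gateway node) -- ceiling of the database connection pool
c3p0DataSource.maxPoolSize=1710

# Cluster-wide property (set via Policy Manager, not a file) -- max concurrent HTTP(S) request threads
io.httpMaxConcurrency=1610
```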

Q3: This question is related to Q1 and Q2.
Their system runs multiple configurations, with separate policy settings for each communication destination.
They understand the value set in "c3p0DataSource.maxPoolSize" as a combined maximum across all of them, even when the policies use different ports.
Please let me know if this understanding is correct.

Q4:

The Layer7 API Gateway 11.0 manual, under Gateway System Properties > External Assertions, gives the following explanation for com.l7tech.external.assertions.http2.routing.clientPoolSize:
(https://techdocs.broadcom.com/us/en/ca-enterprise-software/layer7-api-management/api-gateway/11-0/reference/gateway-system-properties.html)

     Specifies the maximum client pool size. 
     If the pool size exceeds the maximum set value, then the least recently used client is replaced with the new HTTP client.

Could you please tell me if this explanation also applies to "c3p0DataSource.maxPoolSize"?

Q5: This question is related to Q4.

Please let me know if this also applies to console connections logged in to Policy Manager for monitoring purposes.

Q6: About "Apply Rate Limit"
They use "Apply Rate Limit" assertions to implement flow control, and understand the behavior as follows:

- An "Apply Rate Limit" assertion specifies the number of simultaneous connections allowed per second
- Requests that exceed that limit are held in a queue for a certain period of time
- When the number of connections falls below the limit, processing is handed over to the next assertion
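
The three bullet points above can be sketched as a toy fixed-window limiter. This is only an illustration of their understanding, not the Gateway's actual implementation, and FixedWindowLimiter is a made-up name:

```python
import collections

class FixedWindowLimiter:
    """Toy fixed-window rate limiter: at most `limit` requests are
    admitted per 1-second window; excess requests wait in a queue and
    are only drained when the NEXT window opens (never mid-window)."""

    def __init__(self, limit):
        self.limit = limit
        self.queue = collections.deque()

    def tick(self, arrivals):
        """Process one 1-second window. `arrivals` is the number of new
        requests arriving this second. Returns the number admitted."""
        # New arrivals join the queue behind any leftovers from
        # earlier windows; queued requests are served first.
        self.queue.extend(range(arrivals))
        admitted = 0
        while self.queue and admitted < self.limit:
            self.queue.popleft()
            admitted += 1
        return admitted

limiter = FixedWindowLimiter(limit=3)
print(limiter.tick(5))  # 3 admitted this window, 2 queued
print(limiter.tick(0))  # the 2 queued requests drain in the next window
```

In this sketch a queued request is never released within the window in which it was queued (Q6-1), and no window ever admits more than the limit (Q6-2).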

And they have the following questions:

Q6-1:
Even if an earlier request receives a response within the same 1-second window, freeing capacity under the flow limit, a queued request is not sent within that same second.
(Queued requests are actually sent in the next 1-second window.)
Is this correct?

Q6-2:
Even when queued requests are released, they are never sent beyond the flow control limit.
Is this correct?

Best Regards,
Marubun Support

Broadcom Knight Vince Baker

Hi,

Ref Q2:

Optimizing io.httpCoreConcurrency and io.httpMaxConcurrency

The io.httpCoreConcurrency and io.httpMaxConcurrency settings control the number of parallel requests (Java threads) the Gateway can handle. The "core" value is the initial thread count, while "max" is the upper limit. When changing these values, you must also evaluate the com.l7tech.common.http.prov.apache.CommonsHttpClient.maxConnectionsPerHost and com.l7tech.common.http.prov.apache.CommonsHttpClient.maxTotalConnections settings in system.properties, which control the maximum number of connections (inbound and outbound) and the maximum to any single host.
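
For example, the corresponding system.properties entries look like this (the values here are illustrative placeholders to show the format, not recommendations):

```properties
# system.properties -- illustrative values only; tune via load testing
com.l7tech.common.http.prov.apache.CommonsHttpClient.maxConnectionsPerHost=200
com.l7tech.common.http.prov.apache.CommonsHttpClient.maxTotalConnections=2000
```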

My Recommendation: I generally set both the Core and Max CWPs to the same value. This creates a static thread pool, meaning all threads are available from the start. Why?

  • Performance Under Load: Dynamically creating threads (the difference between core and max) consumes CPU resources. This is problematic when your gateway is already under heavy load. Pre-allocating threads helps avoid adding extra latency to an already stressed system.
  • Focus on Responsiveness: Remember, high request latency is what triggers the need for more threads. Instead of relying on more threads, prioritize optimizing your policies and services. Use tools like APM (Precision Monitoring) or OpenTelemetry to analyze request traces/timelines and pinpoint slowdowns at the assertion level.
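
As a concrete sketch of that recommendation (the figure is illustrative, not a sizing suggestion):

```properties
# Cluster-wide properties (set in Policy Manager)
# Equal core/max values give a static, pre-allocated thread pool
io.httpCoreConcurrency=500
io.httpMaxConcurrency=500
```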

Load Testing is Key: It's crucial to experiment with these settings in your own environment to find the optimal balance for your specific workload.

Regards

Vince