
Spring Boot GemFire client max-connections exceeded exception

Vaidhyanathan Pranatharthiharan posted Sep 08, 2020 05:29 PM

Hi, we are using a Spring Boot GemFire locator, server, and client. We have a lot of transactions that need to be inserted into GemFire, and we are getting the exception below:

refused connection: exceeded max-connections 800; nested exception is org.apache.geode.cache.client.NoAvailableServersException: org.apache.geode.cache.client.ServerRefusedConnectionException:

 

Where and how do we set the maximum number of connections a server will accept?

Juan Ramos

Hello @Vaidhyanathan Pranatharthiharan​ ,

 

Thanks for contacting the Support Community!

That said, the max-connections setting is configured at the server level; you can have a look at How Client/Server Connections Work and Fine-Tuning Your Client/Server Configuration for further details about how the parameter is used and how/when it should be changed.

If you're starting your servers using spring-boot-data-geode, on the other hand, you can have a look at Client/Server Applications In-Detail for details about how to configure this setting.
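For example, the limit can be set when starting a server with gfsh (the server name, locator address, and the value 1200 below are purely illustrative; the right limit should come from your own load testing, not from this example):

```
gfsh> start server --name=server1 --locators=localhost[10334] --max-connections=1200
```

The same attribute can also be declared on the `cache-server` element in the server's cache.xml.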

Hope this helps.

Best regards.

Vaidhyanathan Pranatharthiharan

Sure, I shall take a look. I haven't set any max connections on the client or the server, so I was trying to understand how the exception came to quote the number 800 ("exceeded max-connections 800"). Is this number the default maximum connection count per server, or for the entire cluster?

Juan Ramos

Hello @Vaidhyanathan Pranatharthiharan​ ,

 

Yes, 800 is the default value configured for max-connections, and it's counted per server, not at the cluster level (see here for details).

Best regards.

Vaidhyanathan Pranatharthiharan

In the Spring Boot GemFire client I have an @EnablePool annotation, and we have an idleTimeout set too. So ideally, even if there were 800 connections, the idle ones should have expired per the idleTimeout parameter. Is there any other parameter that needs to be set so that connections are returned to the pool?

Juan Ramos

Hello @Vaidhyanathan Pranatharthiharan​ ,

 

From the client-cache [1] reference:

idle-timeout: Maximum time, in milliseconds, a pool connection can stay open without being used when there are more than min-connections in the pool. Pings over the connection do not count as connection use. If set to -1, there is no idle timeout.

In other words, connections that are still in use will never be released, so this setting alone shouldn't have a considerable effect on the server-side max-connections limit.
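As a sketch of where this fits on the client (assuming Spring Data for Apache Geode's `@EnablePool` annotation; the pool name and values below are illustrative, not recommendations):

```java
// Illustrative client configuration; "transactionPool" and the values are examples.
// idleTimeout only reclaims surplus, unused connections above minConnections;
// it never closes a connection that a thread is actively using.
@ClientCacheApplication
@EnablePool(
    name = "transactionPool",
    minConnections = 10,
    idleTimeout = 30000L
)
class ClientPoolConfig {
}
```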

Best regards.

 

[1]: https://gemfire.docs.pivotal.io/910/geode/reference/topics/client-cache.html

Vaidhyanathan Pranatharthiharan

OK, got it, thank you. Is there a setting to release a connection once the operation using it is done, so that we don't run into the max-connections issue?

Juan Ramos

Hello @Vaidhyanathan Pranatharthiharan​ ,

 

When the cluster is correctly sized to handle the expected workload, you shouldn't even hit this exception, as the connections are automatically managed on both the client and server sides.

The clients use an internal connection pool, so the number of concurrent connections from a single client is proportional to the number of threads currently performing operations on that client. If you are unable to control the number of client threads, the pool itself has several parameters for tuning how connections are acquired and released; you might start with min-connections and max-connections.
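A hedged sketch using Apache Geode's `PoolFactory` API directly (the locator address and numbers are hypothetical; with Spring you would set the equivalent `@EnablePool` attributes or properties instead):

```java
// Illustrative pool tuning via Geode's client API; values are examples only.
PoolFactory factory = PoolManager.createFactory()
    .addLocator("localhost", 10334)  // hypothetical locator
    .setMinConnections(10)           // kept open even when idle
    .setMaxConnections(200)          // client-side cap; -1 (the default) means unbounded
    .setIdleTimeout(30000);          // ms before surplus idle connections are closed
Pool pool = factory.create("transactionPool");
```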

Best regards.

Vaidhyanathan Pranatharthiharan

I get that. We are also using partitioned regions, and I was reading in the documentation that if we cap max connections we might have to disable pr-single-hop-enabled, which might hamper performance, right?

Juan Ramos

Hello @Vaidhyanathan Pranatharthiharan​ ,

 

Yes, that's certainly true, which brings me back to one of my earlier replies: you basically need to plan in advance and make sure your cluster is correctly sized for the workload you expect. Maybe the simplest answer is the correct one here: increase the max-connections setting on the server side, and make sure to run several cycles of tuning and testing before applying the change in production.
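If you do decide to cap the client pool instead, this is roughly where the single-hop flag sits relative to the cap; a hedged sketch (assuming Spring Data for Apache Geode's `@EnablePool` attribute names; the values are illustrative):

```java
// Illustrative only: capping maxConnections on a client pool used with
// partitioned regions is the scenario the docs pair with disabling single-hop,
// at the cost of extra routing hops.
@ClientCacheApplication
@EnablePool(
    name = "transactionPool",
    maxConnections = 200,
    prSingleHopEnabled = false  // trades single-hop routing for fewer connections
)
class CappedPoolConfig {
}
```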

Best regards.