When you say evenly, do you mean all the policy servers will get an equal number of requests?
If that is what you mean, then the answer is NO.
The whole point of the cluster-based configuration is to move away from traditional round-robin load distribution and to have a better load distribution mechanism that is based on each server's response time. As a result, a policy server with better response times (e.g., one with a better hardware configuration) will serve more requests than a server with poorer response times.
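To make the idea concrete, here is a rough sketch of response-time-weighted distribution. This is an illustration only, not the Policy Server's actual algorithm: the function name and the inverse-weighting scheme are my assumptions.

```python
import random

def pick_server(avg_response_ms):
    # Weight each server inversely to its observed response time,
    # so faster servers receive a proportionally larger share of traffic.
    weights = {name: 1.0 / ms for name, ms in avg_response_ms.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

# ps1 responds twice as fast as ps2, so it should get roughly
# twice as many requests over a large sample.
servers = {"ps1": 20.0, "ps2": 40.0}
counts = {"ps1": 0, "ps2": 0}
for _ in range(10_000):
    counts[pick_server(servers)] += 1
```

With round-robin, both servers would get 5,000 requests each; with response-time weighting, the faster server ends up handling the majority.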
I suggest you go through the following guide for a better understanding of this:
Thanks for your response. Our question is: what do we need to do to have an equal number of requests?
Please advise. We have a total of 8 policy servers, all configured with the same hardware and memory, close to 10 consuming applications, and a total of 125 web agents talking to the policy servers.
Thanks for your help in advance.
Sorry for the delay in getting back on this.
I was doing more research on this.
It seems that we no longer have round-robin load balancing in either cluster or non-cluster Policy Server configurations.
I have created a community article on this topic:
Tech Tip : CA Single Sign-On :: PolicyServer::Cluster vs Non-Cluster Load balancing
Please let me know if you have any further questions on this.
One more question about traditional round-robin load balancing: do we have any timeout values set for failover to the next server in the load balancing, and if so, how do we set that up? Also, for how long do we not send any traffic once we have a timeout from a policy server? Appreciate your help.
As explained in my previous reply, there is no more round-robin load balancing.
However, the remaining questions you raised are still valid. Please find my answers below:
*) Do we have any timeout values set for failover to the next server in the load balancing, and if so, how do we set that up?
Ujwol => Yes, this timeout is set by configuring "Request Timeout" in the HCO configuration.
Server timeout. The maximum time an agent will wait for a response from a server. If the wait time exceeds the server timeout value, the server is considered inactive, and failover to the next server occurs.
If a server timeout occurs within a cluster, and the timeout causes the cluster’s failover threshold to be exceeded, failover to the next cluster occurs.
Set through: RequestTimeout.
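The cluster failover behavior described above could be sketched as follows. This is a simplified illustration; the function name and the interpretation of the threshold as a percentage of available servers are my assumptions.

```python
def should_failover_cluster(active_servers, total_servers, threshold_pct):
    """Return True when the cluster's availability has dropped below
    the configured failover threshold, so the agent should fail over
    to the next cluster."""
    available_pct = 100.0 * active_servers / total_servers
    return available_pct < threshold_pct

# Example: a 4-server cluster with a 50% threshold.
# With 1 of 4 servers active (25% available), failover is triggered;
# with 2 of 4 active (exactly 50%), the cluster is still used.
```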
*) For how long do we not send any traffic once we have a timeout from a policy server?
Ujwol => The request timeout configuration parameter specifies the maximum period to wait for a response from a single Policy Server. When a request times out, the server is considered failed (inactive). The Agent API management thread periodically tries to establish connections to failed servers. A failed server is recovered when the management thread succeeds in establishing a connection to it. The agent sends requests to a previously failed server as soon as it recovers.
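The timeout-and-recovery cycle above can be sketched as follows. This is illustrative only: the constant names, the 30-second retry interval, and the state class are my assumptions, not the agent's internals.

```python
REQUEST_TIMEOUT = 60  # seconds; corresponds to the HCO "Request Timeout"
RETRY_INTERVAL = 30   # assumed interval at which failed servers are re-probed

class ServerState:
    def __init__(self, name):
        self.name = name
        self.active = True      # active servers receive traffic
        self.last_probe = 0.0   # when the management thread last probed it

def on_request_timeout(server):
    # A request exceeded REQUEST_TIMEOUT: mark the server failed,
    # so it receives no further traffic.
    server.active = False

def management_thread_tick(server, connect, now):
    # Periodically try to reconnect to failed servers; a successful
    # connection recovers the server and traffic resumes immediately.
    if not server.active and now - server.last_probe >= RETRY_INTERVAL:
        server.last_probe = now
        if connect(server.name):
            server.active = True
```

So the answer to "how long" is: no fixed blackout window, but until the next successful health check by the management thread.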
Thanks for your response, and I appreciate your help on this.
I have a few more questions regarding the HCO configuration.
We have 3 sets of HCOs in our environment. These were configured a long time ago, and no one here knows who set these values or why they are different.
In these HCOs we are seeing different Maximum Sockets Per Port values and different Request Timeouts.
Can you please let us know what these values mean, and how it would impact us if we used one generic HCO for all our applications rather than three different ones?
Maximum Sockets Per Port
Minimum Sockets Per Port
New Socket Step
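For anyone following this thread, here is a rough sketch of how these three socket settings typically interact in a connection pool. This is a simplified illustration with assumed values, not the agent's actual pool code.

```python
MIN_SOCKETS = 2      # Minimum Sockets Per Port: opened at startup
MAX_SOCKETS = 20     # Maximum Sockets Per Port: hard cap on the pool
NEW_SOCKET_STEP = 2  # New Socket Step: sockets added when the pool is exhausted

def grow_pool(current_size, all_busy):
    """Return the new pool size after one demand check: grow by the
    step when all sockets are busy, never exceeding the maximum."""
    if all_busy and current_size < MAX_SOCKETS:
        return min(current_size + NEW_SOCKET_STEP, MAX_SOCKETS)
    return current_size

# Under sustained load the pool grows from the minimum in steps
# until it reaches the maximum, then stops growing.
size = MIN_SOCKETS
while True:
    new_size = grow_pool(size, all_busy=True)
    if new_size == size:
        break
    size = new_size
```

The practical implication: a larger Maximum Sockets Per Port lets one agent open more concurrent connections to a Policy Server port, which matters when consolidating many applications onto a single generic HCO.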
I appreciate your time and help.
Do you mind initiating a new thread on this new topic? It will be easier to manage threads for tracking purposes.
Also if you could mark this current thread as answered, that would be great.
I have now started a new discussion on this topic:
TCP IP connection settings in HCO