I thought it would be useful to show a real-world example of hunting for bottlenecks.
Let's consider the SiteMinder Policy Server trace log.
Because many threads log to the trace log simultaneously, it's best to convert the interleaved lines of multiple concurrent transactions into what I call a "transaction flow". A transaction flow is nothing more than gathering all of the lines of a transaction together, starting from the first line, and printing them as a group once the last line is encountered.
Transaction Flow Tool
The SM PS trace log might look something like this (depending on the configuration).
[Date][PreciseTime][Pid][Tid][SrcFile][Function][TransactionID][Message][CallDetail][ExecutionTime][Data]
[====][===========][===][===][=======][========][=============][=======][==========][=============][====]
...
[03/05/2026][20:01:32.432][3004336][139625888736832][CServer.cpp:1532][ThreadPool::Run][][Dequeuing a Normal Priority message, from IP ::ffff:10.252.37.120 with Port No 48710. Current count is 0][][00:00:00.000062][]
...
[03/05/2026][20:01:32.441][3004336][139625888736832][CServer.cpp:6557][CServer::ProcessRequest][][Leave function CServer::ProcessRequest][][00:00:00.008367][]
The SM PS trace log configuration on my system is:
$ cat /opt/CA/siteminder/config/smtracedefault.txt
components: AgentFunc, Server/Connection_Management, Server/Policy_Object, Server/Administration, Server/Audit_Logging, Server/Policy_Server_General, IsProtected, Login_Logout, IsAuthorized, Tunnel_Service, JavaAPI, Directory_Access, ODBC, LDAP, IdentityMinder, TXM, Fed_Server, Srca
data: Date, PreciseTime, Pid, Tid, SrcFile, Function, TransactionID, Message, CallDetail, ExecutionTime, Data
version: 1.1
We can use a tool like GNU AWK (available on most Linux systems) to transform the log into transaction flows. This involves the following:
- Break the input into fields: The field separators are "[" at the beginning of the line, "][" between fields, and "]" at the end of the line.
- Handle the beginning of a transaction: This occurs when "Dequeuing a Normal Priority message" is observed. Save this line into an array of transactions indexed by the process ID/thread ID (Pid,Tid) pair.
- Handle transaction lines: Get the process ID and thread ID for the log line and, if the (Pid,Tid) pair is found in the array of transactions, append a newline and this log line to the existing message in the array.
- Handle the end of a transaction: This occurs when "Leave function CServer::ProcessRequest" is observed. If the process ID and thread ID are found in the array of transactions, then print the "transaction flow" for this (Pid,Tid) pair and clear the entry in the transaction array.
While it sounds difficult, it's really quite easy using GAWK:
$ expand --tabs=4 ~/.local/bin/smtrace-to-transaction-flow.awk
#!/usr/bin/env -S gawk -f
BEGIN {
    FS = "^\\[|\\]\\[|\\]$"; # SiteMinder policy server trace log field separators...
    RS = "\r?\n";            # Handle Windows-format logs too...
    if (!F_PID) F_PID = 4;   # Use --assign F_PID=... to set from the command line...
    if (!F_TID) F_TID = 5;
    delete TXNS;             # Defines an array...
}
{
    PID = $F_PID;            # Get PID and TID from *EACH* line...
    TID = $F_TID;
}
/Dequeuing a Normal Priority message/ {
    TXNS[PID,TID] = $0;
    next;                    # Handle next line...
}
(PID,TID) in TXNS {
    TXNS[PID,TID] = TXNS[PID,TID] "\n" $0;
    if (/Leave function CServer::ProcessRequest/) {
        print TXNS[PID,TID];
        fflush();
        delete TXNS[PID,TID];
    }
}
Note: The field numbers are "off by one" since the first field separator is the "[" character at the beginning of the line so the first GAWK field is always the empty string.
Now, this very simple tool can form the basis for more detailed analysis. Once the script is marked as executable, it can be run against a trace log:
$ ~/.local/bin/smtrace-to-transaction-flow.awk /opt/CA/siteminder/log/smtracedefault.log | less
Or, logs can be tailed into this tool:
$ tail -F /opt/CA/siteminder/log/smtracedefault.log | ~/.local/bin/smtrace-to-transaction-flow.awk
Custom Tool Pipelines
Using the UNIX philosophy of having one tool do one thing very well, we can develop other simple tools that expect the output of this tool as their input. For example, we can write a tool that calculates the time difference between log lines and writes the time difference in milliseconds at the beginning of each line:
$ expand --tabs=4 ~/.local/bin/smtrace-log-diff-milliseconds.awk
#!/usr/bin/env -S gawk -f
BEGIN {
    FS = "^\\[|\\]\\[|\\]$";     # SiteMinder policy server trace log field separators...
    RS = "\r?\n";                # Handle Windows-format logs too...
    if (!F_DATE) F_DATE = 2;     # Use --assign F_DATE=... to set from the command line...
    if (!F_PRECISE_TIME) F_PRECISE_TIME = 3;
    LAST_TIME = 0;
}
function mktime_milliseconds(    DATE, PRECISE_TIME, T) {
    if (3 != split($F_DATE, DATE, /\//))
        return -1;
    if (4 != split($F_PRECISE_TIME, PRECISE_TIME, /[:.]/))
        return -1;
    T = mktime(DATE[3] " " DATE[1] " " DATE[2] " " PRECISE_TIME[1] " " PRECISE_TIME[2] " " PRECISE_TIME[3]);
    if (-1 == T)
        return -1;
    return T * 1000 + PRECISE_TIME[4];
}
{
    TIME = mktime_milliseconds();
    if (-1 == TIME)
        TIME = LAST_TIME;
}
/Dequeuing a Normal Priority message/ {
    LAST_TIME = TIME;            # Time difference is always 0 for the first line of a transaction...
}
{
    printf "%6d %s\n", TIME - LAST_TIME, $0;
    fflush();
    LAST_TIME = TIME;
}
Now we can pipe the "transaction flows" into this tool to get the time difference between log lines of a particular transaction:
$ ~/.local/bin/smtrace-to-transaction-flow.awk /opt/CA/siteminder/log/smtracedefault.log \
| ~/.local/bin/smtrace-log-diff-milliseconds.awk \
| head --lines=20
0 [03/05/2026][20:47:56.900][3004336][139626064885312][CServer.cpp:1532][ThreadPool::Run][][Dequeuing a Normal Priority message, from IP ::ffff:10.252.37.117 with Port No 40670. Current count is 0][][00:00:00.000043][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][CServer.cpp:6371][CServer::ProcessRequest][][Enter function CServer::ProcessRequest][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmAuthUser.cpp:1537][CSmAuthUser::CSmAuthUser][][Enter function CSmAuthUser::CSmAuthUser][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmAuthUser.cpp:1590][CSmAuthUser::CSmAuthUser][][Leave function CSmAuthUser::CSmAuthUser][][00:00:00.000009][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][Sm_Az_Message.cpp:155][CSm_Az_Message::ProcessMessage][][Enter function CSm_Az_Message::ProcessMessage][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 208, data size is 14][][][*10.252.37.123]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 221, data size is 46][][][ab50392f-923dd9d9-6c2171aa-e39af60d-48843da2-5]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 200, data size is 24][][][bd605339-dx-smag-0]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 217, data size is 68][][][http://bd605339-dx-smag.us-west1-b.c.lims001-solution-eng01.internal]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 201, data size is 136][][][/affwebservices/public/saml2sso?SMASSERTIONREF=QUERY&SPID=IAMShowcase&SAMLTRANSACTIONID=13640b2b-7c55fc56-f49b31f3-3a514506-7fff404f-a2f]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 202, data size is 3][][][GET]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmMessage.cpp:574][CSmMessage::ParseAgentMessage][s32158/r84520][Receive request attribute 134, data size is 5][][][FALSE]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][Sm_Az_Message.cpp:208][CSm_Az_Message::ProcessMessage][s32158/r84520][** Received agent request.][][][bd605339-dx-smag-0]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][Sm_Az_Message.cpp:393][CSm_Az_Message::AnalyzeAzMessage][][Enter function CSm_Az_Message::AnalyzeAzMessage][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][Sm_Az_Message.cpp:401][CSm_Az_Message::AnalyzeAzMessage][][Leave function CSm_Az_Message::AnalyzeAzMessage][][00:00:00.000006][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][IsProtected.cpp:52][CSm_Az_Message::IsProtected][][Enter function CSm_Az_Message::IsProtected][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][IsProtected.cpp:75][CSm_Az_Message::IsProtected][][Received request from agent, check agent api version.][][][1536]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][IsProtected.cpp:98][CSm_Az_Message::IsProtected][][Starting IsProtected processing.][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmAuthorization.cpp:544][CSmAz::IsProtected][][Enter function CSmAz::IsProtected][][][]
0 [03/05/2026][20:47:56.900][3004336][139626064885312][SmAuthorization.cpp:642][CSmAz::IsProtected][][Not Protected: No matching rules found for resource.][][][]
Above, all of the timestamps are the same, so the time differences are all zero.
Putting It All Together To Identify Bottlenecks
Now that we have these tools, we can either ignore lines with a zero time difference or search for lines with long time differences.
Let's look for time differences of 2 milliseconds or longer (the first digit is 2-9, or there are 2 or more digits):
$ tail -F /opt/CA/siteminder/log/smtracedefault.log \
| ~/.local/bin/smtrace-to-transaction-flow.awk \
| ~/.local/bin/smtrace-log-diff-milliseconds.awk \
| stdbuf -o L grep -E '^ *([2-9]|[0-9]{2,})' \
| head
2 [03/05/2026][20:57:50.307][3004336][139625880344128][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of uid=* took 0 seconds and 1736 microseconds][][][]
2 [03/05/2026][20:57:50.310][3004336][139625880344128][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of cn=* took 0 seconds and 1787 microseconds][][][]
9 [03/05/2026][20:57:50.321][3004336][139625880344128][AuthnRequestProtocol.java][closeupProcess][198e36db-62c9ea8f-18219978-dd186cf8-8588b25a-4d][No need to append updated session spec.][][][]
2 [03/05/2026][20:57:51.103][3004336][139625871951424][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of (uid=user0000) took 0 seconds and 2075 microseconds][][][]
3 [03/05/2026][20:57:51.109][3004336][139625871951424][SmSSProvider.cpp:514][CSmSSProvider::CreateSession][][Leave function CSmSSProvider::CreateSession][][][]
2 [03/05/2026][20:57:52.392][3004336][139625897129536][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of objectclass=* took 0 seconds and 2144 microseconds][][][]
3 [03/05/2026][20:57:52.422][3004336][139626056492608][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of (uid=user0000) took 0 seconds and 3194 microseconds][][][]
2 [03/05/2026][20:57:52.441][3004336][139625863558720][AssertionHandlerSAML20.java][preProcess][a4efd0b2-03c4cab0-bc932777-72a85e72-426ac4a8][Start to validate the SAML2.0 Authn request.][][][]
6 [03/05/2026][20:57:52.447][3004336][139625863558720][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of uid=* took 0 seconds and 6152 microseconds][][][]
2 [03/05/2026][20:57:52.450][3004336][139625863558720][SmDsLdapConnMgr.cpp:1226][CSmDsLdapConn::SearchExts][][LDAP search of cn=* took 0 seconds and 1662 microseconds][][][]
As you can see above, the most common log line is related to LDAP searches of the user store.
Hopefully these are useful examples of developing simple tools to gather the metrics necessary to identify bottlenecks.
Brian Dyson
Customer Engagement
-------------------------------------------
Original Message:
Sent: Mar 05, 2026 02:39 PM
From: Brian Dyson
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
As I stated previously, performance tuning is about identifying and addressing bottlenecks.
So, the first (and most difficult) step is to identify the bottlenecks in the system. This can be done by examining metrics of the operations. Some metrics may need to be enabled, for example adding "%D" (The time taken to serve the request, in microseconds) to the Apache HTTPD LogFormat so the access log includes the duration of each transaction from Apache HTTPD's perspective.
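For example, with a hypothetical LogFormat that appends %D as the last field, slow requests can be pulled out with a one-liner (the LogFormat name and the two access-log lines below are made-up samples):

```shell
# Hypothetical LogFormat placing %D (microseconds) in the last field:
#   LogFormat "%h %l %u %t \"%r\" %>s %b %D" combined_with_duration
# Print requests that took longer than 1 second (1,000,000 microseconds):
printf '%s\n' \
    '10.0.0.1 - - [05/Mar/2026:20:01:32 +0000] "GET /protected/ HTTP/1.1" 200 1234 2500000' \
    '10.0.0.2 - - [05/Mar/2026:20:01:33 +0000] "GET /protected/ HTTP/1.1" 200 1234 900' \
    | gawk '$NF + 0 > 1000000 { print $NF / 1000000, "sec:", $0 }'
```

Against a real access log, replace the printf with the log file and adjust the threshold to taste.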
Another metric is the time between log lines associated to a thread handling a transaction. For this, ensure that "PreciseTime" is configured in the SiteMinder trace log (Access Gateway trace, FWS trace, and Policy Server trace) configuration files.
If bottlenecks are not easily identified, then it's best to have a hypothesis for a potential bottleneck and ask "What would it look like, in terms of measurable metrics, if this hypothetical bottleneck was happening?" Even for these conjectures, it's useful to conduct a "thought experiment" at lower and upper bounds and again ask "What would the metrics look like if ...?".
The same can be done with the configurations to gain confidence that the configurations are actually tuning the system as expected.
As an example of configuration validation, let's take the HCO change for maximum connections (M) set to 800 and the policy server configured with 256 worker threads (W). Let's assume there is a single access gateway and that the maximum number of SM AG threads (4000 above) is greater than the maximum number of connections. Under full load, the single SM AG can only have M requests in flight since there are only that many connections to the policy server. On the policy server side, since there are W worker threads, we would expect to see W of those transactions being operated on by the 256 worker threads, and the remaining M - W = 800 - 256 = 544 requests in the request queue.
The request queue can be checked by configuring the SM PS to write periodic statistics to the smps.log file (see Log Policy Server Statistics Periodically). Alternatively, the SM PS trace logs the queue depth when a request is enqueued and dequeued:
[Dequeuing a Normal Priority message, from IP 10.11.12.13 with Port No 1234. Current count is 0]
So, under load, you would expect to see "Current count is 544", where the value 544 would fluctuate a bit but be close to that value.
Similarly, validate that there actually are 800 connections from the SM AG to the SM PS server using tools like "ss" or "netstat".
As a thought experiment example, let's consider what it would look like if too many threads were configured for a system's resources. In a thought experiment we can consider the "edge" or "limit" of a system that is clearly misconfigured. For example, what would it look like if the system had only 1 core but was configured for 1 million threads? We would expect very high memory utilization, since each thread requires stack memory, and also high latency due to context switching. The memory utilization can be observed with tools like "top" or "vmstat". A tool like "vmstat" can also show the number of context switches, which can suggest that context switching may be causing high latency.
I think an actual metric that infers high latency due to context switching would be the time difference between log lines (in milliseconds), especially between log lines that tend to have little-to-no time difference (that is, the thread was context switched off of the processor core during a part of the transaction that typically takes very little time).
A simple tool could be developed that analyzes, say, the SM PS trace log; tracks when a particular thread is in a transaction (from "Dequeuing a Normal Priority message" to "Leave function CServer::ProcessRequest"); extracts the source file and function; calculates the time difference between log lines (for this specific thread); and saves metrics in terms of [(source file1, function1) => (source file2, function2), TIME-BIN] = COUNT, where TIME-BIN is a set of logarithmic bins of time (like 0-1ms, 1-2ms, ..., 9-10ms, 10-19ms, 20-29ms, ..., 90-99ms, 100-199ms, 200-299ms, ...) used to construct a histogram. Then check if there is a wide range of time differences for some specific pairs of log lines (identified by source file and function).
The reality is that performance tuning requires deep thinking in terms of operating systems, performance analysis, and computer science. ;-D
Even with 256 SM PS worker threads, unless there are actually 256 processor cores assigned to the system, it may not scale with the number of worker threads due to context switching and limits to other resources like only having 32 connections for authentication to a backend user store. With only 32 connections, under load that means that there are 224 threads waiting for a connection. If the backend user store is even a little bit slow, then this can quickly cause poor performance. In this case, it's better to scale out the number of policy servers rather than scaling up the number of worker threads.
Another perspective is to think of the queueing of requests. In the example above, there may be 800 in-flight requests, but only 256 are being handled at one time, with 2x more in the queue. A new request needs to wait for the ones already being handled and the 2x already in the queue, for a total delay of about 3x the median response time. Similarly, when pushing 2500 load test requests and only 800 can be moved to the policy server at a time, a new request needs to wait for those 800 plus the 1700 waiting on the SM AG to be sent to the SM PS, which again is about 3x the median response time. Confirm that these numbers are actually observed by checking the SM AG and SM PS trace logs.
Confirm, too, whether 800 connections in the HCO is truly the right number, as we typically recommend starting with 100 and then increasing slowly. Remember that it takes time to check whether a connection has readable data (via the poll() system call, or similar), and these system calls may take more time based on the number of resources that must be checked. The lesson is that the "right" numbers heavily depend on the systems being tuned.
The question might be "What are the configuration changes needed to improve performance?" but that's not the right question. Instead ask, "What would the metrics look like if ... was a bottleneck, and how do I get those metrics?".
Hope this helps,
Brian Dyson
Customer Engagement
References
https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/siteminder/12-9/configuring/policy-server-configuration/Log-Policy-Server-Statistics-Periodically.html
Original Message:
Sent: Mar 05, 2026 01:52 AM
From: P RAMARAO
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
Dear Brian,
Thank you for the suggestion. It worked, and the response time has been reduced to around 4 seconds. Please find the screenshot of the 2k load test results attached.

Is there any possibility to further reduce the response time to around 1 second? Below are the parameter values that I have tuned.
SAG Server configuration
Path: \path to\secure-proxy\httpd\conf\extra\
Edit/ add in httpd-ssl.conf
Listen 443
ListenBacklog 2048
Mpm configuration
Path: \path to\secure-proxy\httpd\conf\extra\
Edit httpd-mpm.conf
<IfModule mpm_event_module>
ThreadsPerChild 25
MaxRequestWorkers 4000
ServerLimit 160
</IfModule>
server.conf
Path: \path to\secure-proxy\proxy-engine\conf
(AJP13 Tomcat)
ajp13.max_threads =4000
ajp13.accept_count=500
(HTTP Pool)
http_connection_pool_max_size=4000 (Must match MaxRequestWorkers) 8000
Policy server:
In smconsole:
MaxThreads 256
Max Connections=16384
Az cache size=1024
Admin UI:
Host Config Object (HCO)
Maximum Sockets Per Port =800
User Directory
Pool Size=32
Original Message:
Sent: Feb 20, 2026 01:53 PM
From: Brian Dyson
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
The default configuration for traditional SiteMinder Web Agents should be sufficient.
Remember, performance tuning is evidence-based. It's best to find specific evidence that suggests a specific tuning to improve performance. Measure before and after tuning to confirm the change is having the intended effect.
Also, it's best to work from the front to the back in the order that a transaction flows through the system, moving bottlenecks from the front-end web servers to the policy servers and finally to backend servers (user stores, session store, etc).
For traditional web agents, my previous suggestions are still mostly the same. Check Apache HTTPD listen backlog, enable mod_status, tune MPM configuration, etc.
------------------------------
Brian Dyson
Broadcom IMS Customer Engagement
Original Message:
Sent: Feb 20, 2026 05:27 AM
From: P RAMARAO
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
Hi Brian,
Thank you for the update. The information you provided is very helpful.
Could you please share the recommended tuning parameters for the Web Agent? We have installed the Web Agent on top of the Apache web server and would like to fine-tune the configuration.
Additionally, could you provide the web agent configuration files where these parameters can be modified?
Environment Details:
Your guidance on the recommended settings and best practices for this setup would be greatly appreciated.
Best Regards,
Ramarao P
Original Message:
Sent: Feb 18, 2026 01:03 PM
From: Brian Dyson
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
It's likely that the SiteMinder infrastructure needs to be tuned to support this load.
When performance tuning, it's best to perform the following process:
- measure specific metrics during a load test
- use the metrics to identify the bottleneck(s)
- adjust tuning parameters related to bottleneck resources
- perform another load test and gather new metrics
- compare metrics before and after tuning to see if the configuration changes are having the intended effect
It's also useful to imagine the flow of data to help identify areas that may require tuning.
Here are some suggestions based on the flow of data:
- Configure the SiteMinder Access Gateway (SM AG) Apache HTTPD listen backlog so the Linux kernel negotiates and establishes more TCP connections, placing them into a backlog queue for the httpd process to accept.
  Monitor the backlog using iproute2's "ss" command, e.g. "ss -lnt '( sport == 443 )'", and compare the send queue (the configured backlog passed to the listen() system call) to the receive queue (the number of connections currently in the backlog).
- Tune the SM AG Apache HTTPD Multi-Processing Module (MPM). These settings include the maximum number of threads, threads per child process, etc. The default is 400 maximum request workers (total number of threads). Pushing 2000 requests to only 400 worker threads is already going to limit the throughput. Tune the MPM settings (SM AG uses the "worker" MPM, IIRC) to increase the total number of workers. The increase will need to balance against the number of processor cores, and be aware that more threads leads to more context switching. Consider increasing the number of cores if necessary.
  Monitor by enabling mod_status (configured for local-only access) and observe busy workers compared to max workers.
- The SM AG architecture has Apache HTTPD "front ending" an Apache Tomcat backend. The MPM tuning MUST match the AJP tuning on Apache Tomcat (in the SM AG server.conf file). Configure "ajp13.max_threads" above the HTTPD max request workers. By default ajp13.max_threads is 410, which is above the default of 400 HTTPD max request workers. Consider increasing ajp13.accept_count too, which sets Tomcat's listen backlog.
- Tune the SiteMinder Policy Server (SM PS) host configuration object (HCO). The default max connections in an HCO is 20, but this is far too low for the SM AG (while it's a good default for traditional SiteMinder Web Agents). We typically recommend setting HCO max connections to 100 as part of initial tuning. Otherwise, when the SM AG needs to send a request to the SM PS, it must contend with a small connection pool of agent connections.
  Monitor connection pool latency by comparing the transaction time in the SM AG trace log to the time taken on the SM PS. These times can be calculated by "diffing" the timestamps in each component's trace logs.
- Tune the SM PS user directory "Pool Size" parameter. This increases the number of LDAP connections used by the SM PS.
  Monitor by checking the latency to acquire a connection from the connection pool in the SM PS trace logs (with LDAP tracing enabled, diff the log timestamps between lines), or infer the latency by diffing the timestamps for a high-level LDAP operation in the logs compared to the time to actually perform the operation.
- Tune the user directory itself. We usually see that the backend systems that the SM PS depends on contribute the most latency in a SiteMinder infrastructure. Check if the backend user store and session store systems can be tuned for better performance.
- Increase SM PS worker threads if CPU utilization is low and there is a "large" backup of messages (more than 100) in the normal priority queue. If using a Symantec Directory LDAP session store, then also increase the session store's maximum number of connections to at least 50% of the number of SM PS worker threads.
  Monitor by configuring periodic statistics to be written to the smps.log file every 60 seconds.
- Increase SM PS priority worker threads. These threads negotiate new agent connections. The default is 5 and the maximum is 20; large customers need to set this to the maximum.
  Monitor the high priority queue in the smps.log file statistics.
These items should be a good start to tuning the environment.
Remember that performance tuning is not only "turning knobs up" - it's important to actually measure each tuning change so you are confident that the adjustments are having the intended effect.
The order of items above is from front end systems toward back end systems - effectively shifting transactions and operations further into the infrastructure by removing bottlenecks near the front of the system to reveal bottlenecks near the back of the system.
Hope this helps give you ideas for tuning a SiteMinder Access Gateway system.
Here is an image that helps to visualize the different tuning options:
Finally, here are the links referenced in the image above:
------------------------------
Brian Dyson
Broadcom IMS Customer Engagement
Original Message:
Sent: Feb 17, 2026 01:15 AM
From: P RAMARAO
Subject: SiteMinder Performance Issue – High Response Time (~28s) During 2K Load Test
Hello everyone,
I was performing a 2K concurrent user load test for SiteMinder using JMeter.
During the test, I observed that the response time for the authentication FCC pages is quite high (around 28,000 ms).
Could you please help identify the possible root cause and suggest how to tune the relevant parameters?
Please find the reference statistics below.

Environment Details:
Env 1:
Env 2:
Appreciate your help and recommendations in advance.
Best Regards,
Ramarao P
-------------------------------------------