The other day I was thinking about the launch cycle used in our streaming product and realized it would be worth posting on Connect. That way, if anyone has concerns about this process, this should hopefully provide some reassurance. The entire launch cycle, from the endpoint requesting a package to receiving it, happens over multiple phases across all components, which puts the request through not only the initial authentication but also a validation of that authentication. Take a look at the chart below for a visual representation, with an explanation directly following it.
As you can see, there are multiple levels of validation before an endpoint is allowed to stream an application package. And not to worry, the entire process is very quick... subject, of course, to the bandwidth and latency on the pipe.
So, to summarize, there are actually two tokens: one generated by the Appstream backend and returned to the client by the launch server as an "application" response, in order to uniquely identify a future streaming session. This token has nothing to do with LB management.
Then we have another one (the "infrastructure" token), placed inside the HTTP response from the streaming server to the client during the pre-streaming handshake, in order to ensure that the whole streaming session will be properly managed by the LB mechanisms (and carried by the same STS).
Maybe a schematic drill-down (following Gene's approach) of what happens, in terms of application and infrastructure tokens, at each step could better clarify how the whole process works in an LB environment and highlight possible points of failure.
I think the KB could be very useful, but I cannot access it. Could you please attach it to this forum? I hope it will help me reconstruct what's going on behind the scenes.
Thank you
Let me try to explain this better. As you may know from the diagram above, the streaming front end is composed of the launch server and the streaming server. The launch server, at a high level, takes care of application launch requests and the user portal. The streaming server handles block streaming requests, session handling, client communication, etc. This should give some background for what I am about to explain.
As Gene commented, the client is assigned or associated with a front-end server (typically by way of source IP stickiness). This is what allows subsequent requests that are part of the same HTTP session to go back to the same server. A few things happen before the real "streaming" session starts, i.e. between step 6 and step 7. The "infrastructure" token, as you referred to it, comes into play when a "streaming" session is started, which typically happens during session negotiation. Basically, the application token (the session id created in the backend) is passed back to the client, and the client sends it as part of the negotiation with the streaming server. When this request reaches a streaming server, the "infrastructure" token is acquired and attached to the HTTP response that comes back to the client. This exchange is the handshake between the client and the server, and it is when the "infrastructure" token association between the client and a server is established.
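The handshake described above can be sketched in a few lines of Python. This is a minimal illustration, not product code: the header names (`X-App-Token`, `AppstreamKey`) and the helper functions are assumptions made for the example, standing in for the real application and infrastructure tokens.

```python
import secrets

def backend_create_application_token():
    """Backend generates a one-time application token (session id)
    that uniquely identifies the upcoming streaming session."""
    return secrets.token_hex(8)

def streaming_server_handshake(request_headers, server_token):
    """Streaming server checks that the client presented an application
    token, then attaches its own 'infrastructure' token to the HTTP
    response so the load balancer can pin later requests to this server."""
    if "X-App-Token" not in request_headers:
        raise ValueError("no application token: reject the session")
    return {"AppstreamKey": server_token}  # goes back in the response

# Client side: send the application token, keep the infrastructure
# token for all subsequent requests in the same session.
app_token = backend_create_application_token()
response_headers = streaming_server_handshake(
    {"X-App-Token": app_token}, server_token="100")
assert response_headers["AppstreamKey"] == "100"
```

The key point the sketch captures is that the two tokens travel in opposite directions: the application token goes client-to-server during negotiation, while the infrastructure token comes back server-to-client in the response.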
About the iRule - there is a good KB article @ http://www.symantec.com/business/support/resources/sites/BUSINESS/content/staging/HOW_TO/10000/HOWTO10701/en_US/1.0/Best%20practice%20for%20configuring%20a%20BIG%20IP%209%200%20with%20SWS%206%201_V1.0.pdf?__gda__=1285978584_93581989301de487ab759ae8a3438705
If you don't have access to this let me know I can attach it to this forum.
Hope this helps.
I'm confused because, according to Gene, the client in step 7 has already received the token from the LS (in step 6), while Nirmal states that in step 7 "a new client would not have received a token yet".
Also, I'm puzzled by a token question: is the TOKEN that Gene was talking about in his scheme the same TOKEN that Nirmal refers to when discussing the LB HTTP inspection mechanism?
Personally, I understand that the former is an APPLICATION TOKEN, (randomly?) created by the DB node and assigned each time a client asks for a package, independent of the underlying server architecture (load balanced or not).
The latter token (the one involved in LB inspection), instead, is an "infrastructure" TOKEN, statically configured on every streaming server (from the Basic Configuration page of the Appstream Console) in order to uniquely identify the server, and only needed when a load balancing mechanism is implemented.
For example, we implemented an LB configuration following the guidelines I found in a Symantec best-practice document. On the Basic Configuration page of each of our two streaming servers, I set the TOKEN parameter to 100 and 200 respectively. For example, here you'll find the server labelled "100".
We consequently implemented the "iRule" in our LB system (Foundry) to match those tokens, for example:
Server1 (HTTP::Header AppstreamKey=100)
stasvpbp01 (10.68.1.136)
Are we wrong?
In the example you pointed out, steps 7 and 10 - the endpoint does not receive a token until the response from the server arrives. Session stickiness is based on an HTTP header sent as part of the response. The way it works is: when the endpoint makes a request for a package, there is a handshake. If the client had ever communicated with this streaming infrastructure before, it would already have a token from the server. The load balancer inspects the HTTP request, and if a token is found it directs the request to the appropriate server. In the case you pointed out (step 7), a new client would not have received a token yet, so the load balancer uses the routing algorithm (round robin by default) to send it to the next server in the rotation. When the response comes back, the server will have filled in its token. This is how communication is established between the client and the server. Going forward, any request from that endpoint, as long as a session is active, will be directed to the same server based on token inspection by the load balancer. This is why the iRule used in the BigIP load balancer (or the equivalent in other load balancers) is critical.
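The routing behavior just described (token inspection first, round robin only as a fallback for new clients) can be modeled in a short Python sketch. This is illustrative, not how any load balancer actually implements it; the second server name (`stasvpbp02`) is invented for the example, and the token values 100/200 follow the configuration mentioned earlier in the thread.

```python
from itertools import cycle

# Hypothetical token -> server mapping, matching the iRule idea:
# each streaming server is identified by its statically configured token.
SERVERS = {"100": "stasvpbp01", "200": "stasvpbp02"}
_round_robin = cycle(SERVERS.values())

def route(request_headers):
    """Inspect the request for an infrastructure token; if present,
    stick to that server, otherwise pick the next server round-robin."""
    token = request_headers.get("AppstreamKey")
    if token in SERVERS:          # sticky: token inspection wins
        return SERVERS[token]
    return next(_round_robin)     # new client: routing algorithm

first = route({})                        # no token yet -> round robin
later = route({"AppstreamKey": "200"})   # token present -> pinned server
assert later == "stasvpbp02"
```

The sketch makes the step 7 question concrete: a brand-new client hits the `route({})` path, and only after the handshake does it carry the header that keeps it pinned to one server.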
Great question! There should not be any impact to streaming when you have an LB set up. Session stickiness allows your session to reconnect to the same server you were connected to; moreover, if you have your launch server backup configured, a failure of the server you are streaming from should not impact you, since the client will automatically try to establish a connection to the backup. The only other thing you need to do is make sure that any streaming server configured as a backup to another has the same packages available to stream.
Thank you, Gene, for the contribution, but I was wondering whether the LB "VIP-aware" SWS configuration alters (if at all) the Launch Cycle Diagram that you depicted in your previous article.
Do the "stickiness" and "recovery" rules that stay behind the scenes of an LB configuration interfere with an ongoing launch cycle?
In other words, do you think that a node failure (i.e. one STS/LS going down) or a standard LB operation (round-robin assignment) occurring during or after some specific point of the diagram could break the whole process and generate a problem that could not be self-recovered and managed by the streaming architecture?
For example, step 7 (when the client requests streaming after obtaining the TOKEN) and step 10 (when the client begins streaming) seem like two good moments to investigate this issue.
Maybe there's no real issue at all, since I lack the more specific knowledge on these subjects needed to address it properly, but any comment would be appreciated.
@achojwal I wrote an article describing the difference in architecture between using a load balancer and not using one. Please follow this link. https://www-secure.symantec.com/connect/articles/symantec-workspace-streaming-load-balancer-impact-architecture
Excellent idea to have this available. This would be a great reference point for customers (and for those of us who are new to the product) when asked to provide a high-level overview.
Hi Sarah. Thank you for the feedback. Maybe the best way to describe it would be as a checks-and-balances kind of process: we are validating that the requester receiving the package is the same one that requested it. To do this you need to generate a token of some kind, pass it to the requester, then verify that it's still the same client talking to the server.
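That checks-and-balances round trip can be sketched very simply. All names here are illustrative, not the product's actual API; the point is just the issue-then-verify pattern Gene describes.

```python
import secrets

issued = {}  # token -> client identity it was issued to

def issue_token(client_id):
    """Generate a token and remember which client it was issued to."""
    token = secrets.token_hex(8)
    issued[token] = client_id
    return token

def validate(token, client_id):
    """Verify the requester receiving the package is the same one
    that requested it."""
    return issued.get(token) == client_id

t = issue_token("client-A")
assert validate(t, "client-A")       # same client: allowed to stream
assert not validate(t, "client-B")   # different client: rejected
```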