DX Unified Infrastructure Management

  • 1.  Tunnel with very high latency

    Posted Sep 11, 2017 09:59 PM

    Friends,

     

    I have some secondary hubs that communicate over a radio (satellite) link, and the minimum latency is 500 ms.

     

    We have made several adjustments (a rough sketch of these settings follows the list):

    - Increased the timeout on the tunnel hub, following the guidelines in the documentation;
    - Enabled the option to ignore IP verification, because traffic from these hubs all arrives through a single NAT address (via the Internet);
    - Changed the firewall rule to use a higher timeout.
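
    For reference, a rough sketch of what we changed in hub.cfg on the tunnel server. The key names below are illustrative and should be checked against the hub documentation for your version; ignore_ip and disable_ip are the options I mention again further down:

        <hub>
           postroute_reply_timeout = 300
        </hub>
        <tunnel>
           <server>
              active = yes
              port = 48003
              ignore_ip = yes
              disable_ip = yes
           </server>
        </tunnel>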

    None of this stopped the tunnel drops.

     

    During queue failures I completely lose access to the secondary hub through UIM Manager, but I can still access the server using the RDS connection over the Internet.

     

    Note: we tried using POST queues instead of GET/ATTACH, but with POST we lose the queue alarm, so if the hub is unavailable we will not know whether the client is up or down (a sketch of the two queue styles follows).
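
    For context, a sketch of how the two queue styles differ in hub.cfg (the queue names, keys, and address path here are illustrative):

        On the secondary hub (ATTACH queue; a matching GET queue on the tunnel hub pulls from it):

        <queues>
           <alarms_out>
              active = yes
              type = attach
              subject = alarm
           </alarms_out>
        </queues>

        On the tunnel/proxy hub (GET queue):

        <queues>
           <alarms_in>
              active = yes
              type = get
              remote_queue_name = alarms_out
              address = /DOMAIN/SECONDARY_HUB/secondary-robot/hub
           </alarms_in>
        </queues>

    With GET/ATTACH, the pulling hub raises a queue alarm when the remote queue is unreachable; with a POST queue the secondary pushes on its own, so that alarm is never raised.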

     

    I should point out that the tunnel client connects to the tunnel server over the Internet, not over a VPN.

     

    During this process, I should mention that there is VPN communication from another location, so we deployed a net_connect probe to monitor ping (latency) and telnet on port 48002 (the Nimsoft hub port).

     

    Even though the latency is high (500 ms), there are no drops; the pings are continuous, and the same is true for telnet. In other words, there is no communication failure.
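
    For anyone who wants to reproduce this check without the net_connect probe, a minimal sketch in Python (the host name is a placeholder; 48002 is the default hub port):

        import socket
        import time

        HOST = "secondary-hub.example.com"  # placeholder for the secondary hub address
        PORT = 48002                        # default Nimsoft hub port

        def tcp_rtt(host, port, timeout=5.0):
            """Time a full TCP handshake as a rough latency/reachability check."""
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start

        # Repeated connects approximate what the probe showed: latency around
        # 500 ms on every attempt, but no connection failures.
        for _ in range(10):
            try:
                print("connected in %.0f ms" % (tcp_rtt(HOST, PORT) * 1000))
            except OSError as exc:
                print("connection failed: %s" % exc)
            time.sleep(1)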

     

    The conclusion I draw is that the tunnel is not actually breaking; rather, because of the very high latency, some failure must be occurring while alarms and QoS messages are being sent.

     

    I want to know whether there is a maximum supported network latency, so I can confirm that this is the root cause.

     

    The interesting thing is that I have used hubs over 3G modems before and they worked perfectly.

     

    But this scenario is causing me many problems.

     

    Thank you.



  • 2.  Re: Tunnel with very high latency

    Broadcom Employee
    Posted Sep 12, 2017 12:14 AM

    Hi

     

    If there is latency in the path, note that a data compression feature was introduced in hub 7.92HF6.

    You can try enabling this feature after upgrading every hub in the path from source to destination to the latest hub 7.92HF9 / robot_update 7.92HF9, to see if this helps.

    ====

    7.92HF6: To improve the throughput performance of tunnel communication between hubs in a high latency network, data compression can now be configured.

     

    https://support.nimsoft.com//downloads/hub792HF9/CA_UIM_Hub_and_Robot_792HF9_Release_Notes.pdf
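
    Enabling it is a raw-configure change on the hubs at both ends, followed by a hub restart, roughly along the lines of the sketch below. The key name here is an assumption on my part; the exact key and where it goes are in the release notes linked above. The compression level runs from 0 (off) to 9 (maximum):

        <hub>
           tunnel_data_compression = 9
        </hub>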



  • 3.  Re: Tunnel with very high latency

    Posted Sep 12, 2017 06:24 AM

    Hi,

     

    This is interesting information. Thank you.

     

    I am currently using version 7.91.

     

    I can update just the tunnel hub and the secondary hubs to this version and then evaluate whether it helps with the queue failures, right?

     

    Can this hub version be used with version 8.40 (primary hub and web portal)?

     

    Thank you.



  • 4.  Re: Tunnel with very high latency

    Broadcom Employee
    Posted Sep 12, 2017 10:50 PM

    You need to deploy the hotfix to both ends (the hub that sends data and the hub that receives data).



  • 5.  Re: Tunnel with very high latency

    Posted Sep 12, 2017 11:45 PM

    Hi,

     

    Yes, I did. Following the procedure, I enabled the setting on the secondary hubs where I have the problem, using compression level nine.

     

    In the log I can see the result of the compression.

     

    However, my tunnels continue to turn red. If I turn a tunnel off and on, it goes green for a while and then red again shortly after. From time to time it returns to normal, then goes red again.

     

    The strangest thing is that the tunnel connection status shows as active, but the response time is abnormally high.

    NOTE: All secondary hubs reach the tunnel (proxy hub) from the same NAT address, as they pass through the central office, so I set "ignore_ip" and "disable_ip" on the tunnel server.

     

    I will add the images below.

     

    Hub Secondary 1 with Datacenter (Tunnel - Proxy HUB):

    [Image: Tunnel 1]

    Hub Secondary 2...5 with Datacenter (Tunnel - Proxy HUB):

    [Image: Tunnel 2]

    Central Office with Datacenter (Tunnel - Proxy HUB):

    [Image: Tunnel 3]

    Connection status for all tunnels:

    [Image]

    Since this occurs only for this group of five secondary hubs, and the other secondary hubs are working perfectly, I ask you to validate whether my strategy can help reduce the problem.

     

    I currently have a primary hub and a tunnel server performing the proxy hub function. That is, for these high-latency secondary hubs we dedicated an exclusive tunnel server that acts as the proxy hub and delivers the alarms and QoS to the primary.

     

    But the tunnel (proxy) is in a datacenter, and these secondary hubs connect to it over the Internet from the central office. That is, the secondary hubs communicate by radio (backhaul network) with the central office and use the Internet to reach the datacenter where the tunnel server is.

     

    I am thinking of creating a tunnel server at the client's central office, so that the secondary hubs communicate directly with the central office and this tunnel hub concentrates the alarms/QoS and sends them on to the datacenter tunnel.

     

    That way I would eliminate one of these hops and consolidate all the alarms and QoS at the client's office.
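
    In short, the message path in each scenario (a rough sketch based on the description above):

        Current:  secondary hubs --radio/backhaul--> central office --Internet--> tunnel server (datacenter, proxy hub) --> primary hub

        Proposed: secondary hubs --radio/backhaul--> tunnel hub at central office (concentrator) --Internet--> tunnel server (datacenter) --> primary hub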

     

    I'm adding the current and proposed scenario:

     

    Current scenario:

    [Image]

    Proposed scenario:

    [Image]


    Thanks for the opinions.