I've checked the documentation and there doesn't seem to be much info on the sizing requirements for a secondary hub. I've posted a suggested improvement to the documentation thread, but separately, can anyone advise me:
What are the minimum CPU/memory/disk space recommendations for a secondary hub server? If I have a secondary hub server in my DMZ that will be relaying data back to my primary over a tunnel, what size should that secondary be?
I would like to work out whether I can share that server and use it for multiple tasks, not just as a dedicated secondary. Does anyone know what the overhead of a secondary is, and when it's safe or not safe to share the system?
Any help greatly appreciated.
Sorry, a secondary hub in my terms is a hub which takes over the function of the primary in case it goes down, but that's perhaps just my own terminology :->, so I was a bit confused about the sizing question.
But as far as I understand, you simply want communication between the robots in the DMZ and your main hub. I have all hubs (apart from the main hub + backup) running on Linux, with 2 CPUs and 2 GB RAM, plus enough disk space for the archive, because I always export our archive via distsrv to the hubs (but that's some sort of personal tic ;-)), and approx. 30 robots behind that one... the machine only does the tunnel, nothing more.
I mean a "relay hub" that acts as a tunnel client to get data out of an on-premise system into the cloud or something similar.
So it is the robot + the tunnel client and the hub components. The robots and probes talk to this server, which relays the information out to the primary servers.
Matthias says Linux with 2 GB RAM and 2 x CPUs is enough. Anyone else using smaller or different specs for this purpose? Anyone know of an official recommendation?
I use roughly the same sort of setup in some cases and haven't bumped into any issues. I assume you can probably use less too. Don't know about any official specs.
I think even 1 CPU should work, to be honest; it's really just simple I/O with some tunneling.
2 GB + 2 CPUs is our default minimum set on virtual machines, and if I take a look at the dashboards of the VM (not monitored with NMS ;->), it doesn't even use the full 2 CPUs or the 2 GB either ;->
But that's solely speaking of Linux, not Windows!
Well, it all depends on the amount of traffic and messages that will come through the tunnel. If you have a VM host with 200 guests with 20 metrics each plus QoS, and NetApp filers with a NetApp probe, and lots of robots with QoS and monitoring, then this will possibly not fit on "my machine" :->
Thanks for the replies.
The smallest VM I can get is a 1 x 1 GHz CPU with 1.7 GB of RAM and 10 GB disk space running 64-bit CentOS, so I will give that a try. It will only be acting as a tunnel for 3 other servers.
Disk size is mainly related to how long you want to be able to spool messages if the destination is down, based on message flow rate. If things are down, do you want to be able to spool messages for 1 hour? 1 day? 1 week? And so on.
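To put a number on that, here is a quick back-of-the-envelope sketch. The message rate, message size, and outage window are purely illustrative assumptions, not measured figures; plug in your own flow rate.

```python
# Rough estimate of spool disk needed while the tunnel/destination is down.
# All three input figures are assumptions for illustration only.
msgs_per_sec = 50      # average message rate through this hub (assumed)
avg_msg_bytes = 1024   # average serialized message size (assumed)
hours_down = 24        # how long you want to be able to spool (assumed)

spool_bytes = msgs_per_sec * avg_msg_bytes * 3600 * hours_down
print(f"{spool_bytes / 1024**3:.1f} GiB")  # ~4.1 GiB for these numbers
```

So even a modest 10 GB disk covers roughly two days of spooling at that assumed rate; scale the inputs to match what you actually see flowing through the tunnel.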
This needs a bit of updating due to the undocumented requirement of running ppm on hubs. Tweaking the Java parameters gets it running on lower-memory servers, with unknown ramifications, but the default is -Xmx1024m, i.e. a maximum heap of 1 GB for the ppm probe, which it will keep growing towards before garbage collecting aggressively.
My current thoughts are that nothing should go out with less memory than a high-end smartphone, so 2 GB. I think programmers adjust their coding practices to the rapid deflation of hardware cost, and trying to run super lean tends to bite you. If you are doing a lot of VMware or network monitoring, or possibly even rsp, you may need more. Ditto if you are running baseline_engine at the edge.
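That 2 GB floor can be sanity-checked with a simple budget. The component figures below (other than the ppm default heap mentioned above) are my own assumptions, not documented numbers:

```python
# Back-of-the-envelope memory budget for an edge hub, in MB.
# Only the ppm heap comes from the default -Xmx1024m; the rest are guesses.
ppm_heap = 1024        # default Java max heap for the ppm probe
hub_and_robot = 256    # hub, robot, tunnel components (assumed)
os_overhead = 512      # base OS and filesystem cache headroom (assumed)

total = ppm_heap + hub_and_robot + os_overhead
print(total)  # 1792 MB, i.e. round up to 2 GB
```

The point is simply that once ppm alone can claim 1 GB, a 1 GB or 1.7 GB VM leaves little or no headroom, which is why 2 GB feels like the sensible minimum.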
Of course if they can't even tell you the software required for an edge hub, I wouldn't trust any documented numbers on resources to allocate to an edge hub.