So we're updating our ntservices probe to make use of the new features. For boxes with the probe already running, I created a super package that has the following 3 packages in the "Dependencies" tab, but when I deploy this package after I've already deployed it onto my test box, nothing is actually deployed because it thinks the package is already there and skips it.
How do you force a super package to re-deploy the packages in the bundle?
[Dependencies] - Tab
But on a box that already has these packages, any change I make to 2_ntservices_restart never actually gets deployed, as the cfg doesn't change. Nothing changes.
The Dependency Properties dialog:
I'm guessing what's happening is that it checks whether the package is present and, if so, skips it. So how do you force a re-deploy?
Use the version number in the dependency.
Looks like you have "ge <blank>" which just tests for existence.
Make it "ge 1.23" (or whatever the version is).
Also note that it's not recursive (or depth-first, depending on your perspective): it just checks the version requirement and, if that's met, moves on to the next dependency. It doesn't matter whether something in one of the subordinate packages needs to be updated too.
So, if you have a super package that depends on a custom CDM package, that package is dependent on the CDM probe, and you update the CDM probe, the super package won't install the new CDM probe. You need to increment the versions and dependencies through the whole hierarchy.
I wish there were some automation there, or even a simple check on modification date, but there isn't.
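The skip behavior described above can be sketched roughly like this. This is a hypothetical illustration in Python, not UIM's actual code; the package names and versions are invented. The key point is that the check walks only the top-level dependency list and never descends into a dependency's own dependencies:

```python
# Hypothetical sketch of the non-recursive dependency check described
# above -- not CA UIM's actual implementation.

def version_satisfied(installed, op, required):
    """Return True if the installed version meets the requirement.
    An empty requirement (the "ge <blank>" case) only tests existence."""
    if installed is None:
        return False          # package missing -> must deploy
    if not required:
        return True           # blank requirement: presence is enough
    if op == "ge":
        return installed >= required
    if op == "eq":
        return installed == required
    if op == "gt":
        return installed > required
    raise ValueError("unknown operator: " + op)

def packages_to_deploy(dependencies, installed):
    """dependencies: list of (name, op, required_version).
    installed: dict of name -> installed version.
    Walks only the top-level list -- it never descends into a
    dependency's own dependencies, which is why updating a subordinate
    package (e.g. the CDM probe under a custom CDM package) goes
    unnoticed unless every version in the hierarchy is bumped."""
    todo = []
    for name, op, required in dependencies:
        if not version_satisfied(installed.get(name), op, required):
            todo.append(name)
    return todo

# A box that already has ntservices 1.22: with "ge <blank>" nothing is
# redeployed; with "ge 1.23" the updated package is.
have = {"ntservices": "1.22"}
print(packages_to_deploy([("ntservices", "ge", "")], have))      # []
print(packages_to_deploy([("ntservices", "ge", "1.23")], have))  # ['ntservices']
```

This is why bumping the version number in the dependency forces a re-deploy: an empty requirement is satisfied by mere presence, while a higher required version is not.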
Thanks Garin. Yeah, I had the version # in there first and it didn't re-deploy, so I tried removing the version #, but no good. I guess I have to bump the version # of the package every time I want to re-deploy after making updates/changes to it. Kind of annoying, but thanks for clarifying.
Good morning All....
I kind of have the same issue, but with the actual robot version. With the release of 7.97HF3, I'm finding that if I specify the actual version AND build number in the super package dependencies area, it still will not deploy the HF3 version of the robot, only the 7.97. Is this a limitation of the super package, that it will only deploy a package with a status of "OK" and not "Local"? I've tried setting the Type to both GE and EQ; neither works.
Well, version is supposed to be just a number, but when it's compared, it's generally compared as a string. That leads to some weirdness.
Still, I think that if you specified "gt 7.97" it would work for you. I wouldn't use the build criteria; I've never seen that work correctly.
Or delete the 7.97 package from your archive so that when it goes looking it can only find the HF3 one.
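The string-comparison weirdness mentioned above is easy to demonstrate. The snippet below is a generic illustration of lexicographic versus numeric version comparison, not UIM's actual comparison code; the `numeric_version` helper and its handling of suffixes like "HF3" are assumptions for the sake of the example:

```python
# Versions compared as strings vs. numerically -- a sketch of the
# "weirdness" described above, not UIM's actual comparison logic.
import re

print("7.97" > "7.100")   # True  -- lexicographic: '9' > '1'
print("1.9" > "1.10")     # True  -- but as versions, 1.10 is the newer one

def numeric_version(v):
    """Hypothetical numeric comparison key: take the leading digits of
    each dot-separated part, ignoring suffixes like 'HF3'."""
    parts = []
    for part in v.split("."):
        m = re.match(r"\d+", part)
        parts.append(int(m.group()) if m else 0)
    return tuple(parts)

print(numeric_version("7.100") > numeric_version("7.97"))   # True
# If a suffix like HF3 is ignored, 7.97HF3 compares equal to 7.97 --
# which could explain (speculatively) why "eq 7.97" matched the HF3
# package while "gt 7.97" did not.
print(numeric_version("7.97HF3") == numeric_version("7.97"))  # True
```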
I tried your recommendation of setting it to 7.97 with gt (greater than) selected, and that works. The packages are now being pulled down along with the HF3 package. We also found that the servers running the request.cfg file were having issues with the image that had been downloaded to them. Thanks for the advice.
The only issue we are having now is that although the request.cfg file is being processed and the correct probes are being deployed, they are NOT pulling down the configured .cfx files, something that has worked for the last 2 years.
The question I have is: since the original files were created using a much earlier version of the probe, should I re-create them with the current version and then copy them into the super package that is deploying them? I wouldn't think so, but I can't figure out why I can manually update the probes from the archive with the same config while processing the super package will not update the cfg files.
I wouldn't think that would help; nothing has changed in the package format, as far as I'm aware, in forever. I still have packages created in 2012 that work without issue.
One thing I have seen is that the controller apparently records the successful install of a tab when it successfully downloads the files, not when it succeeds in deploying them. So if there's an issue with the deployment (a file-access issue, for instance), the section will report a failure during install, but the controller marks it as complete. When the retry happens as a result of the error, it doesn't retry the failed section but moves on.
Otherwise, the only other thing I've seen interfere with the successful install of sections in a super package is running a controller version on an OS version that isn't supported by that controller version.
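The retry symptom described above can be modeled as a toy state machine. This is purely an illustration of the reported behavior (completion recorded on download, not on deployment), not the controller's actual implementation; all names are invented:

```python
# Toy model of the retry behavior described above -- an illustration of
# the reported symptom, not the controller's actual code.

def install(sections, state, deploy_ok):
    """sections: ordered section/tab names. state: dict name -> status,
    shared across retries. deploy_ok: dict name -> whether deployment
    would succeed. Completion is recorded at download time, so a
    section whose deployment fails is still marked complete and is
    skipped on the next retry."""
    errors = []
    for name in sections:
        if state.get(name) == "complete":
            continue                  # retry skips "complete" sections
        state[name] = "complete"      # recorded once the download succeeds...
        if not deploy_ok[name]:       # ...even if deployment then fails
            errors.append(name)
    return errors

state = {}
ok = {"win32": True, "cdm-cfg": False}   # cfg copy hits a file-access error
print(install(["win32", "cdm-cfg"], state, ok))  # ['cdm-cfg'] - failure reported
print(install(["win32", "cdm-cfg"], state, ok))  # [] - retry skips the broken section
```

Under this model, the retry "succeeds" without ever redoing the failed cfg deployment, which matches the symptom of probes deploying while the .cfx files never arrive.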
Well... I stand corrected. The deployment of the probes worked, but it did NOT copy over the cfg (cfx) files with the custom settings. I was actually able to reproduce the error by manually deploying the super package to the same robots. It fails with the following screens:
This is with 7.97 defined as the version number and the gt box checked.
I changed the selection to eq, attempted another manual re-deployment of the super package, and it worked great.
It appears there may be an issue with how it checks for a newer version (maybe just with the HF3 package). I'll be interested to see whether it behaves the same way when an actual release comes out. I guess I could test using 7.93 and remove the HF3 from the archive to see if it would successfully install 7.97.
We have an environment of 230+ Citrix servers that get re-provisioned every other night. They have a cloud copy of the robot on them, set to initialize after two reboots, at which point they use the request.cfg to install the super package we are using (robot update, cdm, ntevl, and ntservices). This way we are not having to manually deploy robots/packages every day. It had been working great for 2+ years until last week. It seems I had to tweak the package a little because of HF3 (and some issues with the new image on the servers).
Thanks for your input!
Update: I have just duplicated the issue with another super package and a whole other set of servers (regular Intel-based Windows, not Citrix). If you have 7.97 set as the version and the gt box checked (to install 7.97HF3), it fails with the same errors that I posted earlier.
Something to keep in mind: you should never put robot_update inside your super package as part of the default install, especially once you go to ADE and no longer use distsrv, as there is no guarantee of order. I have seen this first-hand, where a robot update may run right in the middle of the whole package setup.
Did you open a case for that? It sounds like a serious defect, as the whole concept of a "package" is that there's some semblance of predictable order to it.
On the other hand, in the specific screenshot above, I've never seen anything in the documentation indicating that the cdm package would be checked before the robot_update package. All of them are dependencies of the current section/tab, but the implication is that there's no dependency among the individual items in the list. Certainly, though, everything in the win32 tab would be satisfied before hitting the cdm-cfg configuration tab: the fix for the ordering of items in a list is to separate them into their own tabs so that the left-to-right order is correct.
Packages like this are complicated by restart timing. When updating the hub and controller, I've also found that sometimes you need an artificial delay to get past the restart. For example: the first tab updates robot_update; the second tab has nothing but a miscellaneous post-install command such as "ping -n 60 localhost" on Windows or "ping -c 60 localhost" on Linux to get a roughly 60-second delay; then the third tab has the next set of dependencies/files to install.
UIM superpackage - Is it OK to add robot_update to - CA Knowledge
I agree there should be a set order, but in my experience there isn't. Similarly, ADE has horrible string comparison: if you choose a package to send and it has a sub-dependency (in this example, the vs2017 package, with its reboot issue), there is no guarantee that it picks the latest version. As an example, I have seen a push to 100 devices where 80 of them got vs2017 package 1.01 and 20 servers got vs2017 package version 1.0, which could have caused reboots. We shouldn't have to delete older package versions to ensure things get pushed correctly; if a version is not specified, it should be the latest version, not a randomly picked one.
Disclaimer: the number of servers is scaled down to make the math easier.
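The thread doesn't say exactly how ADE picks a version when none is specified, but lexicographic sorting of version strings is one plausible way "latest" selection can go wrong. A hypothetical sketch (the version numbers here are chosen to expose the problem, not taken from the thread):

```python
# One plausible mechanism for picking the wrong "latest" package when no
# version is specified: taking the maximum of the raw version strings.
# Hypothetical illustration -- not what ADE is documented to do.

available = ["1.9", "1.10"]

# String max: '9' > '1' at the third character, so "1.9" wins -- wrong
# if 1.10 is the newer release.
print(max(available))  # '1.9'

# Comparing tuples of integers gives the expected result.
latest = max(available, key=lambda v: tuple(int(p) for p in v.split(".")))
print(latest)  # '1.10'
```

Whatever the actual cause, the observed fix in the thread stands: either pin the version explicitly or remove stale packages from the archive so only one candidate exists.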