The behaviour of the process engine and job scheduler wouldn't be any different in that scenario than if they were running in a background service; they'd just be carrying the overhead of the unused app/web/xog servlets as well. I would suggest keeping job schedulers and process engines within the bg services.
Both the process engine and the job scheduler have caps on how much concurrency they will take on: by default, 10 concurrent jobs for the scheduler, while the process engine is more granular, with multiple queues ('pipelines') for processes in different states (preconditions, execution actions, post-conditions) that any active process instance transitions around between. Both are greedy self-balancers, taking on any and all work that comes their way as capacity allows. The process engine additionally watches for process instances held by another service/PE that appears to be non-functional because it hasn't updated their state in a long while; those instances then become free for the (re)taking by any other process engine that is running/active.
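To illustrate the takeover behaviour described above, here is a minimal sketch of that kind of stale-owner reclaim logic. All names, fields, and the staleness threshold are hypothetical, for illustration only; Clarity's actual internals are not exposed like this.

```python
# Hypothetical sketch of greedy self-balancing takeover: an engine reclaims
# process instances whose owning engine has stopped updating their state.
# Names and the threshold are illustrative, not Clarity's actual code.
import time

STALE_AFTER = 300  # seconds without a state update before the owner is presumed dead

class ProcessInstance:
    def __init__(self, pid, owner, last_heartbeat):
        self.pid = pid
        self.owner = owner              # id of the engine currently holding it
        self.last_heartbeat = last_heartbeat

def reclaim_stale_instances(instances, my_engine_id, now=None):
    """Take over instances whose owning engine looks non-functional."""
    now = now if now is not None else time.time()
    reclaimed = []
    for inst in instances:
        if inst.owner != my_engine_id and now - inst.last_heartbeat > STALE_AFTER:
            inst.owner = my_engine_id   # free for the (re)taking
            inst.last_heartbeat = now
            reclaimed.append(inst.pid)
    return reclaimed
```

In the real product this check runs against shared state in the database, so every active engine sees the same picture and whichever one looks first picks the orphaned instances up.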
The idea is that they will provide as much continuity as possible in the event that any other service running jobs/processes goes offline.
What exactly (and why) do you need to control/restrict in order to impose a different method of work balancing, and how do you intend to handle service outages? A lot of the communication between these services to notify them of the need to start new processes happens over multicast, and it uses the same multicast ports and addresses the apps need in order to remain synchronised, so you can't really segregate app+bg pairs off from the rest of the cluster to direct traffic to specific engines/schedulers.
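For a sense of why that traffic can't be segregated per pair, here is a rough sketch of the general pattern of UDP multicast notification. The group address, port, and message format here are invented for illustration; Clarity's actual values come from its own configuration, and every member must join the same group to stay in sync.

```python
# Minimal sketch of multicast-style notification between services.
# GROUP/PORT and the JSON payload shape are hypothetical, not Clarity's.
import json
import socket
import struct

GROUP, PORT = "239.255.0.1", 9090  # illustrative group address and port

def make_notification(event, process_id):
    """Encode a 'start this process' style notification as a UDP payload."""
    return json.dumps({"event": event, "process_id": process_id}).encode()

def parse_notification(payload):
    return json.loads(payload.decode())

def send_notification(payload, group=GROUP, port=PORT):
    """Fire the payload at the multicast group; every joined member sees it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (group, port))
    sock.close()

def open_listener(group=GROUP, port=PORT):
    """Join the multicast group so notifications from any sender arrive here."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

The point is that a datagram sent to the group reaches every subscriber on that group/port, so any attempt to carve off an app+bg pair onto its own addresses would also cut it off from the synchronisation traffic the apps depend on.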