Hi Alex
Hands up - I have some experience. First, give some thought to where you want to deploy the LLM. If you have internet access, OpenAI would be interesting; they have their API (and pricing) documented on their website. AWS Bedrock, Azure, or GCP might be options too. Last but not least, you could use a locally deployed Hugging Face model. However, if you run it on common server hardware, the speed might not be satisfactory.
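For the local option, a minimal sketch with the Hugging Face transformers library could look like this (the model name is just an example, pick whatever fits your hardware):

```python
# Minimal sketch: local inference with Hugging Face transformers.
# The model name below is only an example; smaller models run faster
# on plain CPU server hardware, larger ones usually need a GPU.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

result = generator("Summarize this Automic job failure: ...", max_new_tokens=200)
print(result[0]["generated_text"])
```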
I'm mostly experienced with OpenAI / Bedrock models; both are really easy to use. OpenAI offers a REST API, and Bedrock can easily be accessed using the Boto3 Python client. It depends on what you plan to use to call the models - REST JOBS objects or OS JOBS with some scripting (my favorite is the latter: you have more possibilities and you're just faster in implementation).
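For example, calling a Bedrock model from an OS JOBS script could look roughly like this (the model ID and region are placeholders, use whatever is enabled in your AWS account):

```python
# Rough sketch: calling an AWS Bedrock model via the Boto3 client.
# Model ID and region are examples; adapt to your account setup.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Analyze this job failure: ..."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```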
Usually, when you create automations, you implement some kind of escalation workflow that is triggered in case of issues. You can use this workflow to extend the escalation with LLM capabilities. What I did was gather the object definitions and the failed job's report via the Automic REST API and inject them into the prompt for error analysis. There are many more possibilities to extend this, like RAG with a knowledge database or agents (many LLMs feature tool support / agents).
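As a rough sketch of that error-analysis step - the endpoint paths, client number, and credentials below are assumptions, so check the Automation Engine REST API docs for your version:

```python
# Rough sketch: fetch job info via the Automic (AE) REST API and build an
# analysis prompt. Base URL, endpoints, and auth are assumptions - adapt
# them to your AE version and environment.
import requests

AE_BASE = "https://automic.example.com/ae/api/v1/0001"  # hypothetical base URL
AUTH = ("user/department", "password")                   # hypothetical credentials

def build_error_prompt(object_name: str, run_id: int) -> str:
    # Object definition of the failed job (assumed endpoint)
    definition = requests.get(f"{AE_BASE}/objects/{object_name}", auth=AUTH).json()
    # Report of the failed execution (assumed endpoint)
    report = requests.get(f"{AE_BASE}/executions/{run_id}/reports", auth=AUTH).json()
    return (
        "Analyze why this Automic job failed and suggest a fix.\n\n"
        f"Object definition:\n{definition}\n\nJob report:\n{report}"
    )

# The returned prompt can then be sent to OpenAI / Bedrock as shown above.
```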
If you're into the topic, check out my YouTube video where I demonstrate using the ChatGPT plugin to interact with Automic (https://www.youtube.com/watch?v=Oj3hI7iQiBc&t). It's in German, but YouTube has automated English subtitles. The video is from June 2023; in the meantime, you would use agents instead of plugins.
Regards
Joel