Automic Workload Automation


Automic Automation and Serverless Computing

By Tobias Stanzel posted Jan 19, 2021 10:22 AM

  

What is Serverless Computing?


Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on-demand, managing the servers on behalf of their customers—allowing developers to focus their time and efforts on the business logic for their applications and processes.


Some of the core attributes that distinguish the serverless model are:

  • Requires no management or operation of infrastructure.
  • Executes code on demand, on a per-request basis.
  • Scales seamlessly with the number of requests.
  • Bills only for consumed resources, never for idle capacity.
  • Enables a polyglot platform where you can use the best-of-breed language for each function.

Fundamentally, this model is about focusing more on code and less on infrastructure.

Serverless versus Function-as-a-Service (FaaS)

The two terms are often used interchangeably, but FaaS is better thought of as a subset of serverless.


The serverless model applies to many service categories, such as compute, storage, and databases, where the management, configuration, and billing of backend servers are abstracted from the end user. FaaS, on the other hand, while perhaps the service most commonly associated with serverless architectures, focuses on the event-driven computing pattern where code executes only in response to events or requests.


The leading cloud providers all offer multiple services that fit the serverless model, e.g. AWS Lambda and Amazon API Gateway, Azure Functions, and Google Cloud Functions and App Engine.

Using Automic Automation in a Serverless Environment

Where does Automic Automation fit in the landscape of a serverless system? 


For the remainder of this discussion, I will focus on AWS Lambda as the serverless technology, but the concepts apply equally to any provider.


AWS Lambda can run your code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table, or a REST call from Amazon API Gateway. It can also easily trigger or use any other AWS service.

 

What if my business process requires me to integrate AWS cloud services with my on-premises mainframe system? Automic Automation is perfect for automating workloads on mainframes and across distributed systems, both on-premises and in the cloud. What we need is a reliable integration between those systems.


Imagine the following use case: your mobile trading application generates and stores transaction data on S3 cloud storage. To be verified and processed, these transactions must be handled by your on-premises application running on a mainframe.



How can we trigger a mainframe job every time a file is uploaded by the application to an S3 Bucket? 


A polling mechanism would be one option: an Automic job that checks the S3 bucket for new files every minute or so. This semaphore pattern is as old as computing itself, but it is inefficient and always incurs a delay tied to the polling frequency. Today's modern applications need a proper event-driven approach.
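For contrast, here is what the polling approach looks like as a minimal sketch. An Automic job could run a script like this on a schedule; the bucket name, the stored "last run" timestamp, and the injectable `s3` client are illustrative, not part of the original materials:

```python
from datetime import datetime, timezone


def find_new_files(bucket, since, s3=None):
    """Return keys of objects uploaded to `bucket` after `since` (a datetime)."""
    if s3 is None:
        import boto3  # assumed available in the job's runtime
        s3 = boto3.client("s3")
    new_keys = []
    # Page through the bucket listing and keep anything newer than the cutoff.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if obj["LastModified"] > since:
                new_keys.append(obj["Key"])
    return new_keys
```

Note the weakness: a file uploaded one second after a poll waits a full polling interval before anything happens, and every empty poll still costs a request.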


A more robust solution is an AWS Lambda function that listens to the S3 bucket and fires every time a new file is uploaded. This function then runs an Automic job by calling the Automic REST API. Automic downloads the file from S3 cloud storage, transfers it to the mainframe, and completes the required processing.
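A minimal sketch of such a Lambda handler follows. It assumes the Automic REST endpoint POST /ae/api/v1/{client}/executions; the job name JOBS.MF.PROCESS_TRX, the "inputs" field for passing PromptSet variables, and the environment variable names are placeholders you would adapt to your installation. Only the standard library is used, so no deployment package is needed:

```python
import json
import os
import urllib.request


def build_payload(record):
    """Map one S3 event record to an Automic execution request body."""
    return {
        "object_name": "JOBS.MF.PROCESS_TRX",  # hypothetical Automic job name
        "inputs": {                            # assumed field for PromptSet variables
            "&BUCKET#": record["s3"]["bucket"]["name"],
            "&KEY#": record["s3"]["object"]["key"],
        },
    }


def lambda_handler(event, context):
    base = os.environ["AUTOMIC_URL"]       # e.g. https://automic.example.com
    client = os.environ["AUTOMIC_CLIENT"]  # AE client number, e.g. "100"
    auth = os.environ["AUTOMIC_AUTH"]      # pre-encoded Basic credentials
    for record in event["Records"]:
        req = urllib.request.Request(
            f"{base}/ae/api/v1/{client}/executions",
            data=json.dumps(build_payload(record)).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Basic {auth}"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print("Started execution:", resp.read().decode())
```

Passing the bucket and key to the job means Automic knows exactly which file to fetch, rather than having to rediscover it.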



This gives us real-time responses to events occurring in the serverless environment, allows complex workflows to complete processing across our environments, and brings visibility to operations, including the ability to apply SLAs to the downstream processing.


How to Implement in Automic Automation


I have provided detailed implementation steps and all the materials you will need to run this in your own environment. You can find the materials here; start with the implementation guide.


The implementation milestones are:


  1. Prepare Automic to receive the event via REST
  2. Create or choose an S3 bucket
  3. Create the Lambda function based on the template and configure it
    • Make sure the password is encrypted

I hope you find the use case and materials interesting. Comment on the blog or message me directly with any questions or ideas for improving the guide.


Tobias
