Design Solutions for Serverless Computing


Use Azure Functions to implement event-driven actions; design for serverless computing using Azure Container Instances; design application solutions by using Azure Logic Apps, Azure Functions, or both; determine when to use API management service

Use Azure Functions to implement event-driven actions; design application solutions by using Azure Logic Apps, Azure Functions, or both

Serverless computing in Azure has three primary service offerings: Azure Functions, Logic Apps, and Event Grid. We don’t necessarily have to use all of them in a system, but most Azure serverless systems I’ve seen combine these elements to provide a fully functional whole. Let’s talk about them in that order.

If we need to write a small chunk of code to process data, integrate disparate systems, work with the Internet of Things (IoT), or build simple APIs and microservices, Azure Functions could be a fitting choice. The idea behind Azure Functions is that we only bring our code, which performs a particular function, to Azure and need not be concerned about the infrastructure at all. At the time of writing, Azure Functions supports several languages, including C#, JavaScript, F#, Java, and Python.
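To make the "small chunk of code" idea concrete, here is a minimal, framework-free sketch of the kind of handler we might deploy as a function. The real Azure Functions programming model wraps a handler much like this one in a trigger binding; the function name, event fields, and threshold below are illustrative assumptions, not Azure APIs.

```python
# A stand-in for the body of an event-driven Azure Function.
# The event shape and the 75-degree threshold are hypothetical.
import json

def process_telemetry(event: dict) -> dict:
    """Flag IoT readings that exceed a threshold (illustrative logic)."""
    reading = event.get("temperature", 0)
    return {
        "deviceId": event.get("deviceId", "unknown"),
        "temperature": reading,
        "alert": reading > 75,
    }

if __name__ == "__main__":
    # Simulate an incoming event payload, e.g. from an IoT hub trigger.
    event = json.loads('{"deviceId": "sensor-01", "temperature": 80}')
    print(process_telemetry(event))
```

In the hosted model, the platform invokes the handler per event; the code itself stays focused on one task, which is what makes it a good fit for per-execution billing.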

Azure will allocate and scale the required infrastructure to run our function. Unlike IaaS, where we have to pre-allocate compute resources before we even put together the software pieces, and Azure App Service, where we have to specify instance counts and other container details to cater for the anticipated workload, Azure Functions automatically allocates resources to support our function’s needs. If our function receives 1 million requests, Azure Functions will allocate resources accordingly, with no pre-allocation or manual intervention from us.

We only pay for the time spent running our code, billed on per-second resource consumption and per execution. However, keep in mind that this code has to be stored somewhere (the same goes for any data our function processes that we want to keep), and a storage account is created for each function app. So on top of the execution charges, we also pay standard storage and networking rates. The minimum execution time and memory for a single function execution are 100 ms and 128 MB, respectively. Note that Azure Functions can be used with Azure IoT for no charge.
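Those two minimums drive the consumption-plan arithmetic. A quick sketch, using only the rounding rules stated above (the actual per-GB-second and per-execution rates are deliberately omitted; check the Azure Pricing Calculator for current numbers):

```python
# Back-of-the-envelope consumption-plan metering: executions are
# billed at a minimum of 100 ms and memory at a minimum of 128 MB,
# measured in GB-seconds.
def billed_gb_seconds(duration_ms: float, memory_mb: float) -> float:
    duration_s = max(duration_ms, 100) / 1000.0   # 100 ms minimum
    memory_gb = max(memory_mb, 128) / 1024.0      # 128 MB minimum
    return duration_s * memory_gb

# Example: 1 million executions of a 50 ms / 100 MB function are
# metered as 1,000,000 * 0.1 s * 0.125 GB = 12,500 GB-s.
total = 1_000_000 * billed_gb_seconds(50, 100)
print(total)  # 12500.0
```

The point to notice is that a very fast, very small function is still metered at the floor values, so shaving a function below 100 ms does not reduce the bill further.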

Azure Logic Apps provides a visual designer to put together processes from on-premises or cloud systems as workflows. It integrates with many connectors, both on-premises and cloud, and easily links them to other APIs. A Logic App instance starts with a trigger (e.g., new data is added, or a specific criterion is met) and can then run many combinations of actions, conversions, and conditional logic. The Azure documentation has more examples, along with limits and configuration details we should be aware of.
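Behind the visual designer, a Logic App is a JSON workflow definition with a trigger and a set of actions. A minimal sketch of that shape, assuming a simple recurrence trigger and an HTTP action (the action name and URI are illustrative, not a real endpoint):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "Every_hour": {
        "type": "Recurrence",
        "recurrence": { "frequency": "Hour", "interval": 1 }
      }
    },
    "actions": {
      "Call_backend_api": {
        "type": "Http",
        "inputs": {
          "method": "GET",
          "uri": "https://example.com/api/orders"
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  }
}
```

Every entry under `triggers` and `actions` is a billable action in the sense described below, which is why trimming unnecessary steps out of a workflow matters for cost.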

Every step in a Logic App definition is an action, which includes triggers, control-flow steps, and calls to built-in connectors. We are billed for every action run, plus connector executions (Standard or Enterprise) and integration accounts (Basic or Standard). Refer to the Azure Logic Apps pricing documentation and the Azure Pricing Calculator. Successful and unsuccessful executions are both metered.

In case you are also wondering, Microsoft publishes a comparison between Microsoft Flow, Azure Functions, Azure Logic Apps, and Azure App Service WebJobs. Additionally, several MVPs have shared their views on Microsoft BizTalk versus Logic Apps, systems that appear similar in purpose. Both Azure Functions and Logic Apps can be developed using VSTS or in the Azure portal.

Customers can choose to implement custom authentication in Logic Apps, or leverage Azure AD to secure them, on top of encryption at rest and in transit.

Essentially, Azure Event Grid relays events from source applications and services (event sources) to one or more event handlers. For example, when a VM is created in Azure IaaS, Event Grid can pick up that event and pass it to Azure Automation so that organization-specific hardening can be applied to the new VM.

In my opinion, here are Event Grid’s main differentiators compared to other applications’ built-in event notification features:

  • Advanced filtering - Filter on event type or event publish path to ensure event handlers only receive relevant events.
  • Fan-out - Subscribe multiple endpoints to the same event to send copies of the event to as many places as needed.
  • Reliability - Utilize 24-hour retry with exponential back-off to ensure events are delivered.
  • Custom events - Use Event Grid to route, filter, and reliably deliver custom events in your app.
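The filtering and fan-out behaviors above can be sketched locally. The event fields below mirror the Event Grid event schema (`id`, `eventType`, `subject`, `data`), but the matching logic and subscription names are a simplified illustration of what the service does, not Azure code:

```python
# Simplified model of Event Grid subscription matching and fan-out.
def matches(subscription: dict, event: dict) -> bool:
    # Event-type filter: if present, the event's type must be listed.
    types = subscription.get("includedEventTypes")
    if types and event["eventType"] not in types:
        return False
    # Subject-prefix filter (Event Grid's "publish path" filtering).
    prefix = subscription.get("subjectBeginsWith", "")
    return event["subject"].startswith(prefix)

event = {
    "id": "1",
    "eventType": "Microsoft.Resources.ResourceWriteSuccess",
    "subject": "/subscriptions/xxx/resourceGroups/prod-rg",
    "data": {},
}

# Fan-out: every matching subscription receives its own copy.
subscriptions = [
    {"name": "harden-vms",
     "includedEventTypes": ["Microsoft.Resources.ResourceWriteSuccess"]},
    {"name": "prod-only",
     "subjectBeginsWith": "/subscriptions/xxx/resourceGroups/prod"},
    {"name": "deletes-only",
     "includedEventTypes": ["Microsoft.Resources.ResourceDeleteSuccess"]},
]
delivered_to = [s["name"] for s in subscriptions if matches(s, event)]
print(delivered_to)  # ['harden-vms', 'prod-only']
```

Note that the event carries only metadata about what happened; handlers that need the underlying resource must fetch it themselves, which is the point made below.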

This service is on a pay-per-operation model; customers pay per 1 million operations run. Operations include all ingress events, advanced matches, delivery attempts, and management calls. Pricing includes a monthly free grant of 100,000 operations.

Compared to other eventing and messaging services, Event Grid is event-driven and reactive in nature, using a publish-subscribe model. It does everything mentioned above without holding the actual data itself; only the discrete event information is passed along to event handlers.


Determine when to use API management service

The main idea behind Azure API Management is having a hub where organizations can collate their APIs. Also, instead of baking everything into each microservice, such as API publishing, rate limiting, authentication, and quotas, we can centralize these concerns in one place and de-bloat our microservices. From another perspective, API Management does for APIs what cloud providers did for IT: it makes a pool of resources (a pool of APIs, in this case) available for external parties or internal users to take advantage of.

We can do a slew of things with that collection of APIs, such as:

  • Publish our APIs and provide documentation so that external/internal users or partners can use our APIs.
  • Serve as an API gateway that accepts API calls and routes them to backend services, verifies credentials, enforces quotas and rate limits, caches backend responses, and logs call metadata for analytics.

There are more things we can do with API Management, as covered in its documentation, and it also helps address these items:

  • Discoverability - What APIs are available, and what do they do?
  • Onboarding - How can consumers request access to APIs and learn how to use them?
  • Security - How do I secure and control access to the data and services provided by the backend systems?
  • Monitoring - Who is using my APIs, and how much? Are my APIs online and working as expected?
  • Lifecycle - How do I evolve my APIs without adversely impacting consumers?
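Gateway behaviors like rate limiting, quotas, and response caching are configured declaratively as API Management policies, which are XML documents applied to the request/response pipeline. A minimal sketch, with illustrative limit values:

```xml
<!-- Illustrative API Management inbound/outbound policy; the limits
     and durations here are example values, not recommendations. -->
<policies>
  <inbound>
    <base />
    <!-- Throttle each subscription to 10 calls per 60 seconds. -->
    <rate-limit calls="10" renewal-period="60" />
    <!-- Cap usage at 10,000 calls per week (604,800 seconds). -->
    <quota calls="10000" renewal-period="604800" />
    <!-- Check the cache before hitting the backend. -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- Store backend responses in the cache for 5 minutes. -->
    <cache-store duration="300" />
  </outbound>
</policies>
```

Because these concerns live in policy rather than in each service, tightening a quota or changing cache behavior does not require redeploying any backend microservice.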



By elaguni
