Determine when a container-based solution is appropriate; determine when container-orchestration is appropriate; determine when Azure Service Fabric (ASF) is appropriate; determine when Azure Functions is appropriate; determine when to use API management service; determine when Web API is appropriate; determine which platform is appropriate for container orchestration; consider migrating existing assets versus cloud native deployment; design lifecycle management strategies
Determine when a container-based solution is appropriate
In general, organizations would opt for a container-based solution for the following reasons:
- Provide isolation for applications deployed in a single resource pool, which can be as small as a single server running one OS.
- Use highly dynamic and fully orchestrated microservices to build applications.
- Achieve application agility, flexibility, portability, and scalability on any infrastructure – cloud or on-premises.
Here’s a feature comparison between Azure App Service, Cloud Services, Virtual Machines, and Service Fabric.
Determine when container orchestration is appropriate
Like other infrastructure resources, containers benefit from orchestration to efficiently provision, deploy, operate, scale, connect, secure, and eventually decommission them. When dealing with a massive number of containers/microservices, we want as many processes as possible to be automated, while keeping flexibility for the tasks that fall outside the automation scope. More specifically, here are some functions an orchestrator can help with:
- Service Discovery
- Load Balancing
- Secrets / configuration / storage management / API management
- Health checks / logging / monitoring
- Auto-scaling, auto-restart, and self-healing of containers and nodes
- CI / CD
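To make the auto-restart/self-healing item concrete, here is a minimal Python sketch of the reconciliation idea behind it. The `Container` class and `reconcile` function are hypothetical illustrations; real orchestrators such as Kubernetes implement this with health probes and controllers that continuously reconcile desired versus actual state.

```python
# Hypothetical sketch of an orchestrator's self-healing loop.
# Real orchestrators use health probes and reconciliation controllers;
# this only illustrates the decision logic.
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    healthy: bool
    restarts: int = 0

def reconcile(containers, max_restarts=3):
    """Restart unhealthy containers; past max_restarts, replace them
    (e.g., reschedule on another node)."""
    actions = []
    for c in containers:
        if not c.healthy:
            if c.restarts < max_restarts:
                c.restarts += 1
                c.healthy = True  # assume the restart succeeds
                actions.append(("restart", c.name))
            else:
                actions.append(("replace", c.name))
    return actions

fleet = [Container("web", True), Container("api", False)]
print(reconcile(fleet))  # [('restart', 'api')]
```

A real orchestrator runs a loop like this continuously, which is why manual intervention (restarting a crashed service by hand) largely disappears once orchestration is in place.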
Determine when Azure Service Fabric (ASF) is appropriate
To oversimplify, Azure Service Fabric is a distributed systems platform that helps orchestrate the packaging, deployment, and management of the containerized microservices that make up an application, across infrastructures – cloud and/or on-premises.
Here are some key capabilities of Service Fabric:
- Deploy to Azure or to an on-premises datacenter running Windows or Linux.
- Develop scalable applications that are composed of microservices (stateless or stateful) by using Service Fabric programming models, containers, or any code.
- Deploy different versions of the same application side by side, upgrade each independently, and manage their lifecycles.
- Scale out or scale in the number of nodes in a cluster, while doing the same thing for your applications. A resource balancer can orchestrate the redistribution of applications across the cluster.
- Monitor and diagnose the health of your applications and set policies for performing automatic repairs.
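The resource balancer mentioned above can be pictured as a placement algorithm. Here is a hypothetical greedy sketch in Python – not Service Fabric's actual Cluster Resource Manager, which weighs metrics, constraints, and movement cost – just an illustration of spreading replicas across nodes:

```python
# Hypothetical greedy balancer: place each service replica on the
# currently least-loaded node. Service Fabric's Cluster Resource Manager
# is far more sophisticated; this only illustrates redistribution.
def balance(nodes, services):
    """nodes: list of node names; services: {service_name: replica_count}.
    Returns {node: [replica names]} with load spread greedily."""
    load = {n: [] for n in nodes}
    for svc, replicas in services.items():
        for i in range(replicas):
            # pick the node with the fewest replicas so far
            target = min(load, key=lambda n: len(load[n]))
            load[target].append(f"{svc}-{i}")
    return load

placement = balance(["node1", "node2"], {"web": 2, "api": 1})
print(placement)
```

When a node is added or removed, rerunning such a placement over the surviving nodes is, in spirit, what "orchestrating the redistribution of applications across the cluster" means.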
In some parts of the official documentation, Service Fabric is recommended for applications that will have stateful microservices; otherwise, consider other Azure serverless solutions.
Determine when Azure Functions is appropriate; Determine when to use API management service
Please refer to the contents of this page for these topics: https://wp.me/p9cCkE-2Wd
Determine when Web API is appropriate
From what I’ve gathered from forums such as this post, there used to be a time when Web API was distinct from Web App Service: it handled authentication, CORS, and API metadata, all of which can now be deployed in a Web App as well. Today, clicking the link to Web API’s documentation redirects to Azure Web App Service.
Determine which platform is appropriate for container orchestration
Microsoft Azure offers several container orchestration services, namely Azure Service Fabric, Azure Container Service for DC/OS, and Azure Container Service for Kubernetes, which is known today as Azure Kubernetes Service. These services are very similar in that they all deal with containers/microservices and aim to bring the same set of benefits to applications built with that approach. Their differences become apparent when we start to orchestrate containers/microservices. As the names suggest, Azure Container Service for DC/OS uses Mesosphere’s Datacenter Operating System (built on Apache Mesos), and Azure Container Service for Kubernetes uses Kubernetes – yes, those non-Microsoft orchestration tools are offered in Azure Container Service. On another note, Azure Service Fabric provides guidance on how applications should be written, using either Reliable Services or the Actor model.
Here’s a comprehensive post on a simple application made up of containers, put together in two different ways – Service Fabric and Azure Container Service for Kubernetes.
Consider migrating existing assets versus cloud native deployment
It’s crucial that organizations are crystal clear about their objectives, because those objectives shape the resources they will invest in and build upon. Choosing between migrating existing assets and cloud-native deployment is no different: each approach has obvious pros and cons, which may vary depending on the type of resources being moved to the cloud. For this particular exam objective, we’ll discuss migrating an existing monolithic application versus cloud-native application deployment, excluding the general benefits of cloud services.
For starters, let’s take a gander at this diagram taken from “What Does Shared Responsibility in the Cloud Mean?”, which will be useful in this discussion.
Lift and Shift (migrate existing monolith applications)
- The learning curve of the resources to use in the cloud is almost flat. If the existing application is installed on a dual-core, 8GB RAM, 250GB HDD physical server, we’ll simply get a similarly specced VM in the cloud.
- Isolation is traditional, albeit familiar – per VM, with no sharing of the kernel. The same traditional concepts apply to load balancing, HA, data protection, and resiliency, which mostly depend on the infrastructure/VM.
- Application code rarely requires refactoring, at least in the nascent days of the cloud consumption journey.
- May cost more to run in comparison to the cloud-native counterpart.
- A large area of the stack has to be managed by the customer.
- Traditional approach to isolation, scaling, HA, resiliency, etc., could mean significant dependency on infrastructure, automation skill set, or both.
Cloud-native Application Deployment
- Organizations could simply start coding their applications and leverage readily available, highly integrated cloud platforms.
- Flexible, more granular, and efficient cloud-scaling as opposed to scaling by the VM.
- Possibly faster and more effective approach to failures (i.e., restarting an instance of a service on another node vs restarting an entire VM).
- Pay-as-you-use consumption model for lower TCO and cost-effective use of resources.
- Could be simpler to adopt more flexible and agile application deployment models because of the lack of infrastructure dependency.
- The learning curve of the resources to use in the cloud could be steep, especially for those who are used to monolithic application deployment.
- Significantly lower degree of control over the underlying infrastructure, and kernels may be shared with other customers.
- The most application refactoring required, especially for non-containerized applications.
- Potential cloud vendor lock-in.
Design lifecycle management strategies
There are several “lifecycles” that need to be managed in an enterprise datacenter, which makes me a little unsure which one this sub-section refers to. To take a stab at it, and in light of the previous topics, let’s assume that we are being asked to design lifecycle management strategies for application development.
First off, Microsoft published an article that suggests a way organizations could modernize their existing applications so that they could fully take advantage of the cloud. Here’s a simple diagram that depicts the concept:
While organizations take strides to modernize their applications, they also need to transform the inner cogs – the people, the processes, and the tools – to automate and make every step efficient. Hence the “DevOps” buzzword in today’s application development: unifying people, processes, and tools at every stage of application development to efficiently and continuously provide value to end users. Here are some ideas that could help us with this:
- Sprinkle seamless opportunities to provide feedback (and ways to address it) at every stage and in every department.
- Re-evaluate, analyze, and define business functions/domains.
- Re-assess the tools in use and make sure they are still up to the new tasks at hand.
- Automate processes everywhere (e.g., manual hand-offs, task switching, waiting for replies, testing, release, and so forth).
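As a toy illustration of automating hand-offs, here is a Python sketch of a pipeline runner that executes stages in order and stops at the first failure. The stage names and functions are hypothetical; in practice a CI/CD service such as Azure DevOps would run these stages for you.

```python
# Toy CI/CD pipeline runner. Each stage is a (name, callable) pair where
# the callable returns True on success. The runner stops at the first
# failure instead of relying on a manual hand-off between teams.
def run_pipeline(stages):
    """Returns (names_of_completed_stages, failed_stage_name_or_None)."""
    completed = []
    for name, step in stages:
        if step():
            completed.append(name)
        else:
            return completed, name  # stop the line at the first failure
    return completed, None

stages = [
    ("build",   lambda: True),  # e.g., compile / build container image
    ("test",    lambda: True),  # e.g., run the automated test suite
    ("release", lambda: True),  # e.g., deploy to the target environment
]
print(run_pipeline(stages))  # (['build', 'test', 'release'], None)
```

The point is the shape, not the code: every stage is scripted, ordered, and gated, so feedback about a failure arrives immediately rather than after a wait for replies.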