Posts in this series:
- Evaluating the Landscape
- A Generic Host
- Azure WebJobs
- Azure Container Instances
- Azure Functions
- Azure Container Apps
In the last post, we looked at Azure WebJobs as a means of deploying messaging endpoints. And while that may work for smaller loads and simpler systems, as the number of message endpoints grows, dealing with a "sidecar" in a WebJob starts to become untenable.
Once we graduate from WebJobs, what's next? What can balance the ease of deployment of WebJobs with the flexibility to scale only the endpoint as needed?
The closest we come to this is Azure Container Instances (ACI). Unlike Kubernetes in Azure Kubernetes Service (AKS), you don't have to manage a cluster yourself. This might change in the future as Kubernetes becomes more widespread, but for now ACIs are a much simpler step than full-on Kubernetes.
With ACIs, I can decide how large each individual instance is, and how many instances to run. As load increases or decreases, I can (manually) spin up or down services. Initially, we might keep things simple and create relatively small instance sizes, and then provision larger ones as we need to.
But first, we need a container!
## Deploying Into a Container
From an application perspective, nothing much changes. In fact, we can use the exact same application from our WebJobs instance, except we don't need the `ConfigureWebJobs` part. It's just a console application!

What's different from WebJobs is the instructions to "run" the endpoint. With WebJobs, we needed that `run.cmd` file. With a container, we'll need a `Dockerfile` to describe how to build and run our container instance:
```dockerfile
FROM microsoft/dotnet:2.2-runtime-alpine AS base

FROM microsoft/dotnet:2.2-sdk-alpine AS build
WORKDIR /src
COPY . .
WORKDIR /src/DockerReceiver
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DockerReceiver.dll"]
```
In .NET Core 3.0, this can be simplified to a self-running executable, but with this project still on 2.2, we have to define the entrypoint as `dotnet DockerReceiver.dll`.
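For comparison, on .NET Core 3.0 `dotnet publish` produces a native apphost executable alongside the DLL, so the final stage could look something like this (a sketch, assuming the project and base images were upgraded to 3.0 - the image tags shown are the 3.0-era `mcr.microsoft.com` names, not the 2.2 ones used above):

```dockerfile
# Hypothetical final stage after a .NET Core 3.0 upgrade
FROM mcr.microsoft.com/dotnet/core/runtime:3.0-alpine AS final
WORKDIR /app
COPY --from=build /app .
# The publish output now includes an apphost executable,
# so we can invoke it directly instead of going through "dotnet"
ENTRYPOINT ["./DockerReceiver"]
```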
With this in place, I can test locally by using Docker directly, or Docker Compose. I went with Docker Compose since it's got out-of-the-box support in Visual Studio, and I can define environment variables more easily:
```yaml
version: '3.4'

services:
  dockerreceiver:
    image: nsbazurehosting-dockerreceiver
    build:
      context: .
      dockerfile: DockerReceiver/Dockerfile
```
Then in my local overrides:
```yaml
version: '3.4'

services:
  dockerreceiver:
    environment:
      - USER_SECRETS_ID=<user secrets guid here>
    volumes:
      - $APPDATA/Microsoft/UserSecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID
```
With this in place, I can easily run my application inside or outside of a container. I don't really need to configure any networks or anything like that - the containers don't communicate with each other, only with Azure Service Bus (and ASB doesn't come in a containerized format for developers).
I can run with Docker Compose locally, and send a message to make sure everything is connected:
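Locally, that's just a single command from the directory containing the compose file (assuming Docker Desktop is running):

```shell
# Build the image and start the endpoint in the foreground,
# streaming its console output
docker-compose up --build
```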
```
dockerreceiver_1  |
dockerreceiver_1  | info: DockerReceiver.SaySomethingAlsoHandler
dockerreceiver_1  |       Received message: Hello World
```
Now that we have our image up and running, it's time to deploy it into Azure.
## Building and Running in Azure
We first need a place to store our container, and for that, we can use Azure Container Registry, which will contain the built container images from our build process. We can do this from the Azure CLI:
```shell
az acr create --resource-group NsbAzureHosting --name nsbazurehosting --sku Basic
```
With the container registry up, we can add a step to our Azure Build Pipeline to build our container:
```yaml
steps:
- task: Docker@1
  displayName: 'Build DockerReceiver'
  inputs:
    azureSubscriptionEndpoint: # subscription #
    azureContainerRegistry: nsbazurehosting.azurecr.io
    dockerFile: DockerReceiver/Dockerfile
    imageName: 'dockerreceiver:$(Build.BuildId)'
    useDefaultContext: false
    buildContext: ./
```
And deploy our container image:
```yaml
steps:
- task: Docker@1
  displayName: 'Push DockerReceiver'
  inputs:
    azureSubscriptionEndpoint: # subscription #
    azureContainerRegistry: nsbazurehosting.azurecr.io
    command: 'Push an image'
    imageName: 'dockerreceiver:$(Build.BuildId)'
```
And now our image is pushed! The final piece is to deploy the container instance from our build pipeline. Unfortunately, there's no built-in step for deploying an Azure Container Instance, but we can use the Azure CLI task to do so:
```shell
az container create \
  --resource-group nsbazurehosting \
  --name nsbazurehosting \
  --image nsbazurehosting.azurecr.io/dockerreceiver:latest \
  --cpu 1 --memory 1.5 \
  --registry-login-server nsbazurehosting.azurecr.io \
  --registry-username $servicePrincipalId \
  --registry-password $servicePrincipalKey
```
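Once the deployment runs, we can sanity-check the container group from the CLI (a sketch, assuming the same resource group and container group names as above):

```shell
# Check the current state of the container group (e.g. "Running")
az container show --resource-group nsbazurehosting --name nsbazurehosting \
  --query instanceView.state

# Pull the endpoint's console output to confirm it started
# and is receiving messages
az container logs --resource-group nsbazurehosting --name nsbazurehosting
```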
Not too horrible, but we do need to make sure the service principal used in the script has access to the container registry so it can authenticate properly. You'll also notice that I was lazy and just picked the latest image, instead of the one based on the build ID.
In terms of the container size, I've just kept things small (I'm cheap), but we can adjust the size as necessary. With this in place, we can push code, our container builds, and gets deployed to Azure.
## Using Azure Container Instances
Unfortunately, ACIs are a bit of a red-headed stepchild in Azure. There's not a lot of documentation, and it seems like Azure is pushing us towards AKS and Kubernetes instead of individual instances.
For larger teams or ones that want to completely own their infrastructure, we can build complex topologies, but I don't have time for that.
ACIs also don't have any kind of dynamic scale-out; they were originally designed to spin up and then back down, and the billing reflects this. Recently, the price came down enough to be just about equal to running a same-size App Service instance.
However, we can't dynamically increase instance counts or sizes, so if you want something like that, Kubernetes will be the way to go. Still, it's worth starting with just ACIs, since they won't require a k8s expert on staff.
Up next, I'll be looking at #serverless with Azure Functions.