In this post, I’m going to tackle staying safe and up-to-date with containers. Doing that can be challenging and not always intuitive. This post describes our approach to helping you with that, largely via our container image publishing system, and offers associated guidance for the images we publish. The post has a strong bias toward Linux, because there is more to know and more nuance on Linux. It replaces a similar 2018 post, Staying up-to-date with .NET Container Images.

I decided to start 2021 with an update on .NET containers, and answer common questions we hear. I posted similar content in past years: 2017, 2018, and 2019. This year, I’m planning on publishing a series of posts, each dedicated to a thematic slice of the container experience. I’m hoping to get some of my colleagues to post, too. These posts will cover how we’ve made .NET into a great container platform, and also offer suggestions on how you can be a great container user.

I’ll start by telling you a little about the team that works on the image publishing slice of our container experience. Knowing more about what we do helps you better understand the images you are using.

The container publishing team is made up of three developers (Dan, Matt, and Michael) and one Program Manager, Rich (me). You can follow what we’re doing in our two primary repos: dotnet/dotnet-docker and microsoft/dotnet-framework-docker. We triage issues in those repos every week, and try to address everything reported or asked for in issues, discussions, or pull requests. You’ll also find Dockerfiles for all .NET images there, along with samples that demonstrate common ways of using them.

On the face of it, our job is easy. We produce new container images for .NET servicing and preview releases. We are not responsible for building .NET (a larger team takes care of that). We only need to write Dockerfiles that unpack and copy .NET builds into a container image. As is often the case, though, theory doesn’t track closely with reality.

Container pulls are hard to count (layer pulls vs. manifest-only pulls), but it is safe to say there are ten million .NET image pulls a month. Two things are ever-present in our minds as fundamental requirements at that scale. The first is that a lot of people are counting on us to deliver software that is high-quality and safe. The second is that the developers and DevOps professionals driving all those image pulls have an inherent diversity of needs. The pull rate has grown to that level, in part, because we satisfy a lot of those needs, and those needs are what we continue to focus on as we consider what to do next. For example, we publish images for three Linux distros, as opposed to just one.

Much of that will come as no surprise. Less obvious is how we manage updates for the Linux distro base images we support (Alpine, Debian, and Ubuntu), which we (and by extension you) rely on. It was obvious from our early container days that managing base image updates was a task for a cloud service and not people. In response, we built significant infrastructure that watches for base image updates and then re-builds and re-publishes .NET images in response. This happens multiple times a month, and in rare cases, multiple times a day.
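
That service is more sophisticated than I can cover here, but the core idea can be sketched in a few lines of shell. This is a hypothetical sketch, not our actual infrastructure; the recorded digest is a placeholder:

    #!/bin/sh
    # Hypothetical sketch: detect whether a base image has been updated.
    # LAST_BUILT_DIGEST is the digest recorded at the last build (placeholder value).
    LAST_BUILT_DIGEST="debian@sha256:<digest-from-last-build>"
    docker pull debian:buster-slim > /dev/null
    CURRENT_DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' debian:buster-slim)
    if [ "$CURRENT_DIGEST" != "$LAST_BUILT_DIGEST" ]; then
        echo "Base image updated; rebuild and republish the .NET images that depend on it"
    fi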

Dockerfiles

.NET Dockerfiles rely on versioned URLs that reference public and immutable SHA2-validated .NET builds and other resources via HTTPS.
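
As an illustration of that pattern, here is a hedged sketch of how a Dockerfile can acquire a .NET build this way; it is modeled on our published Dockerfiles, but the URL, version, and checksum are placeholders:

    FROM debian:buster-slim

    RUN apt-get update \
        && apt-get install -y --no-install-recommends ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*

    # Fetch a versioned, immutable .NET build over HTTPS and validate its SHA-512
    # checksum before unpacking; the build fails if the hash does not match.
    RUN curl -fSL --output dotnet.tar.gz "https://example.com/dotnet/dotnet-runtime-<version>-linux-x64.tar.gz" \
        && dotnet_sha512='<expected-sha512>' \
        && echo "$dotnet_sha512  dotnet.tar.gz" | sha512sum -c - \
        && mkdir -p /usr/share/dotnet \
        && tar -xzf dotnet.tar.gz -C /usr/share/dotnet \
        && rm dotnet.tar.gz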

Dockerfiles are a text-based recipe format for defining container images: part shell script, part declarative format, part (arguably) functional programming language. There are many positive aspects to Dockerfiles. Perhaps the most compelling part is the concept of layers, their hash-based identity, and the caching system that is built on top of those characteristics.
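
Layer caching is easiest to see with an example. Here is a minimal multi-stage sketch for a hypothetical ASP.NET Core project named app; it copies the project file and restores packages before copying the rest of the source, so the expensive restore layer is rebuilt only when dependencies change:

    FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
    WORKDIR /source

    # Copy only the project file and restore; this layer is cached and reused
    # until the project file (and therefore the dependency graph) changes.
    COPY app.csproj .
    RUN dotnet restore

    # Copying the rest of the source invalidates only the layers from here down.
    COPY . .
    RUN dotnet publish -c Release -o /app --no-restore

    FROM mcr.microsoft.com/dotnet/aspnet:5.0
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "app.dll"]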

We know many people use our Dockerfiles to produce their own images or as a starting point for producing images that differ in some way. We endeavor to make our Dockerfiles reflect best practices, and to keep them self-consistent and easy to use. We’ve always thought of the Dockerfiles and the resulting images as equally important deliverables of our team.
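
For example, you can clone one of those repos and build a Dockerfile directly; the path below is illustrative, so check the repo for the current layout:

    git clone https://github.com/dotnet/dotnet-docker
    cd dotnet-docker
    docker build -t my-dotnet-runtime ./src/runtime/5.0/buster-slim/amd64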
