Dependency Pinning with Infrastructure as Code

In an earlier post entitled Terraform Plans, Modules, and Remote State, I pointed out the need for a provider, such as Amazon Web Services (AWS), to translate configuration values into cloud resources. The provider is key for translating API requests to the target environment. Providers are versioned independently from Terraform – they use their own version numbers on a separate release schedule.

This raises an interesting design consideration: determining the ideal process for dependency pinning when writing declarative code. Pinning sets boundaries on the versions of dependencies used to deliver infrastructure as code. The primary objective is to avoid breaking changes introduced when existing behavior is deprecated or removed. This makes pinning even more important when using automation to deliver resources!

In this post, I review the nuts and bolts of dependency pinning, showcase design patterns for the Terraform CLI version, and finish with design patterns for provider versions.

The Nuts and Bolts of Dependency Pinning

Pinning is built from version constraints, and each constraint has two parts: an operator and a version number. In the example below, I declare that my code must use Terraform CLI version 0.12 or higher in order to run.

terraform {
  required_version = ">= 0.12"
}

The operator >= programmatically states “greater than or equal to” in code. Other operators include:

  • != meaning “not equal to”
  • = meaning “exactly equal to”
  • < meaning “less than”
  • … and so forth
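Operators can also be combined in a single constraint string, separated by commas, when one bound is not enough. For example, to accept any 0.12 or 0.13 release while rejecting anything outside that window:

```hcl
terraform {
  # Both conditions must hold: at least 0.12, but below 0.14.
  required_version = ">= 0.12, < 0.14"
}
```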

Additionally, the expression ~> is the “pessimistic constraint operator.” It means “greater than or equal to this version,” but only allows the rightmost specified version component to increase – so with a two-part version such as 2.15, upgrades stay within the current major version. Understanding the pessimistic constraint operator requires understanding semantic versioning. In a nutshell, releases that follow semantic versioning adhere to a major.minor.patch numbering schema.

Thus, with a two-part version, this operator keeps the dependency within the current major version while still allowing upgrades to newer minor releases and patch (bug fix) releases. Both change types are “non-breaking” changes, meaning the existing Terraform configuration continues to function properly after upgrading.

The example below declares that my Azure provider is at least version 2.15, while also allowing any higher version, so long as the major release stays at 2.

provider "azurerm" {
  version = "~> 2.15"
  features {}
}

Thus, 2.99.99 is an acceptable version, while 3.0.0 is not. Breaking changes may be introduced in 3.0.0. This configuration mitigates that risk.
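One subtlety worth noting: the behavior of ~> depends on how many components the version has, since only the rightmost specified component is allowed to grow. A quick sketch of the two forms:

```hcl
provider "azurerm" {
  # "~> 2.15"   allows >= 2.15.0 and < 3.0.0  (minor and patch upgrades)
  # "~> 2.15.0" allows >= 2.15.0 and < 2.16.0 (patch upgrades only)
  version  = "~> 2.15"
  features {}
}
```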

CLI Pinning Design Patterns

I use pinning whenever I express infrastructure as code. At a minimum, pinning with >= documents which version was used to build and test the code. Declaring the minimum acceptable Terraform CLI version in a separate file maintains modularity and follows the convention used by the terraform 0.12upgrade command. Execution fails if the Terraform CLI version is not equal to or greater than the expressed value.

The example below requires Terraform CLI version 0.13 or later.

terraform {
  required_version = ">= 0.13"
}

Because Terraform has not yet reached a non-zero major version, it does not adhere to semantic versioning in the traditional sense – 0.x releases can introduce breaking changes. Despite this, I still avoid using a pessimistic constraint operator for the CLI. Instead, I control the Terraform version used via the containers running my code within continuous integration.

  • Production configurations use a stable release in the containerized runner, which is 0.12 at the time of this post. In fact, my runners are often pinned to a specific release, such as 0.12.28, instead of using latest or stable. This avoids upgrade surprises. 🙂
  • Test configurations use version 0.13betaX.

When version 0.13 is made Generally Available, I plan to switch my runners over to version 0.13. This continues to satisfy the required_version parameter.

Provider Pinning Design Patterns

Most of my providers, such as AWS and Azure, are pulled from the Terraform registry on-demand when my Continuous Integration workflow spins up a new containerized runner. This improves operational excellence as the process is simple, lightweight, and requires no administrative effort. Provider version control is handled by the required_providers block in the root module.

For example, one of my Terraform configurations uses both the aws and random providers. I pin both providers using a pessimistic constraint operator. This keeps aws within major version 2 and random within the 2.2.x patch series, while still allowing the provider authors to improve, update, and bug fix the code.

terraform {
  required_providers {
    aws    = "~> 2.67"
    random = "~> 2.2.1"
  }
}

Child modules that use these providers inherit the required_providers constraints automatically. I typically avoid putting dependency statements in child modules that are loaded locally. The exception is a child module that is loaded remotely / externally. In that case, I add a version pin to ensure the consumer knows which versions I feel are supported by my code.
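For a remotely consumed child module, such a pin could look like the sketch below (the constraint values are illustrative – they would match whatever versions the module was actually tested against):

```hcl
# Declared inside the remote child module itself, so consumers
# fail fast if their Terraform CLI or provider versions fall
# outside what the module author has tested.
terraform {
  required_version = ">= 0.12"
  required_providers {
    aws = ">= 2.67"
  }
}
```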

Next Steps

Please accept a crisp high five for reaching this point in the post!

If you’d like to learn more about Infrastructure as Code, or other modern technology approaches, head over to the Guided Learning page.