Martin Rylko

Senior Cloud Architect & DevOps Engineer. Specializing in Microsoft Azure, IaC, Cloud Security and AI.

Terraform Azure Modules: Private Registry and Testing

August 15, 2025 · 5 min read
#Terraform #Azure #IaC #DevOps #Testing

Here is a pattern I see in almost every organization that has been using Terraform for more than a year: five projects, five copies of the same VNet module, each subtly different because someone "fixed" something in one copy and never propagated it. Six months later, project A has NSG rules that project B does not, and nobody can explain why.

The fix is not discipline. The fix is infrastructure: a module registry with versioned, tested modules that teams consume like any other dependency.

Effort: 3-5 days to modularize existing Terraform, 1 day per module for testing
Cost: Terraform Cloud free tier (5 users) or ~$20/user/month for Teams; Azure Container Registry Basic ~$5/month for private module hosting
Prerequisites: Existing Terraform Azure project, Go 1.21+ for Terratest, CI/CD pipeline (GitHub Actions or Azure DevOps)

What Changed in 2025

The Terraform module ecosystem shifted significantly this year:

  • Terraform's native testing framework matured. The terraform test command (generally available since Terraform 1.6, with provider mocking added in 1.7) replaces the need for Terratest in many scenarios. You write .tftest.hcl files alongside your module and run assertions without Go, without a test harness, without a separate CI step. For unit-level validation, this is a game changer.
  • OpenTofu 1.8 as a credible alternative. After the BSL license change, OpenTofu matured into a production-ready fork. Module compatibility is nearly 100% -- most teams can swap the binary and their modules work unchanged. Worth evaluating, but not something you need to decide today.
  • Azure Verified Modules (AVM) became the Microsoft-endorsed pattern for Terraform modules targeting Azure. These are community-maintained modules that follow a strict interface contract. The older terraform-azurerm-caf-enterprise-scale module is in extended support and will be archived in August 2026.
  • Module distribution via Azure Container Registry. You can publish Terraform modules to ACR as OCI artifacts and pull them from there, giving teams a private registry without needing Terraform Cloud.

Why This Matters

Without a module registry, Terraform codebases develop a specific kind of rot. It is not that the code breaks -- it is that it diverges.

Project A creates a VNet module with three subnets and a default NSG. Project B copies it and adds a fourth subnet for AKS. Project C copies from Project A (not B) and adds a service endpoint for Key Vault. Now you have three VNet "modules" with different feature sets, none of which are tested, none of which have a version number.

When a security team says "all VNets must have a Network Watcher flow log enabled," someone has to find every copy, understand each variant, and patch them individually. In a 10-project organization, this turns a 30-minute change into a multi-day effort.

Module versioning prevents this. Version 1.x of your VNet module creates a VNet with subnets. Version 2.0 adds mandatory flow logs. Every project that uses ~> 1.0 keeps working. Projects upgrade to ~> 2.0 on their own schedule. One module, one source of truth.
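
The upgrade-on-your-own-schedule workflow looks roughly like this on the consuming side. The registry path and resource names here are illustrative, not taken from a real registry:

```hcl
# Project A stays on the 1.x line until its team schedules the upgrade.
module "network_a" {
  source  = "app.terraform.io/example-org/vnet/azurerm"  # illustrative path
  version = "~> 1.0" # tracks 1.x releases; 2.0.0 is never auto-selected

  name                = "vnet-project-a"
  location            = "westeurope"
  resource_group_name = "rg-project-a"
}

# Project B has opted in to v2 and its mandatory flow logs.
module "network_b" {
  source  = "app.terraform.io/example-org/vnet/azurerm"
  version = "~> 2.0"

  name                = "vnet-project-b"
  location            = "westeurope"
  resource_group_name = "rg-project-b"
}
```

Both projects consume the same source of truth; only the pinned version line differs.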

Implementation: Module Design Patterns

Module Structure

A well-structured Terraform module follows a predictable file layout:

terraform-azurerm-vnet/
  main.tf          # Resource definitions
  variables.tf     # Input variables with descriptions and defaults
  outputs.tf       # Output values for consumers
  versions.tf      # Required provider versions
  README.md        # Usage examples
  tests/
    main.tftest.hcl   # Native Terraform tests
  examples/
    basic/
      main.tf      # Minimal working example
    complete/
      main.tf      # All options exercised
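
The layout lists versions.tf without showing its contents. A minimal sketch of that file, assuming recent azurerm provider versions (the exact lower bound is an assumption, chosen because default_outbound_access_enabled is a relatively new subnet attribute):

```hcl
# terraform-azurerm-vnet/versions.tf (sketch)
terraform {
  required_version = ">= 1.6.0" # native `terraform test` support

  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Assumed range: keep module constraints wide, let consumers pin tighter
      version = ">= 3.104, < 5.0"
    }
  }
}
```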

A Real Azure VNet Module

Here is a VNet module with sensible defaults that we actually use across client projects. It creates a virtual network with configurable subnets, optional NSG association, and diagnostic logging:

# terraform-azurerm-vnet/main.tf
 
resource "azurerm_virtual_network" "this" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
  dns_servers         = var.dns_servers
 
  tags = merge(var.tags, {
    managed_by = "terraform"
    module     = "terraform-azurerm-vnet"
  })
}
 
resource "azurerm_subnet" "this" {
  for_each = var.subnets

  name                            = each.key
  resource_group_name             = var.resource_group_name
  virtual_network_name            = azurerm_virtual_network.this.name
  address_prefixes                = [each.value.address_prefix]
  # optional() defaults in variables.tf guarantee these attributes exist,
  # so direct access is preferred over lookup() (which is meant for maps)
  service_endpoints               = each.value.service_endpoints
  default_outbound_access_enabled = each.value.default_outbound_access
}

resource "azurerm_network_security_group" "this" {
  # Only create an NSG for subnets that opt in (default true)
  for_each = { for k, v in var.subnets : k => v if v.create_nsg }

  name                = "nsg-${each.key}"
  location            = var.location
  resource_group_name = var.resource_group_name

  tags = var.tags
}
 
resource "azurerm_subnet_network_security_group_association" "this" {
  for_each = azurerm_network_security_group.this
 
  subnet_id                 = azurerm_subnet.this[each.key].id
  network_security_group_id = each.value.id
}
# terraform-azurerm-vnet/variables.tf
 
variable "name" {
  type        = string
  description = "Name of the virtual network"
}
 
variable "location" {
  type        = string
  description = "Azure region for the virtual network"
  default     = "westeurope"
}
 
variable "resource_group_name" {
  type        = string
  description = "Name of the resource group"
}
 
variable "address_space" {
  type        = list(string)
  description = "Address space for the virtual network"
  default     = ["10.0.0.0/16"]
}
 
variable "dns_servers" {
  type        = list(string)
  description = "Custom DNS servers. Empty list uses Azure-provided DNS"
  default     = []
}
 
variable "subnets" {
  type = map(object({
    address_prefix          = string
    service_endpoints       = optional(list(string), [])
    create_nsg              = optional(bool, true)
    default_outbound_access = optional(bool, true)
  }))
  description = "Map of subnet configurations"
  default     = {}
}
 
variable "tags" {
  type        = map(string)
  description = "Tags applied to all resources"
  default     = {}
}
# terraform-azurerm-vnet/outputs.tf
 
output "vnet_id" {
  value       = azurerm_virtual_network.this.id
  description = "The ID of the virtual network"
}
 
output "vnet_name" {
  value       = azurerm_virtual_network.this.name
  description = "The name of the virtual network"
}
 
output "subnet_ids" {
  value       = { for k, v in azurerm_subnet.this : k => v.id }
  description = "Map of subnet names to their IDs"
}
 
output "nsg_ids" {
  value       = { for k, v in azurerm_network_security_group.this : k => v.id }
  description = "Map of NSG names to their IDs"
}

Native Terraform Testing (1.6+)

Instead of writing Go code with Terratest, you can now validate module behavior with .tftest.hcl files. Here is a test for the VNet module:

# terraform-azurerm-vnet/tests/main.tftest.hcl
 
variables {
  name                = "vnet-test-module"
  location            = "westeurope"
  resource_group_name = "rg-test-modules"
  address_space       = ["10.100.0.0/16"]
  subnets = {
    "snet-app" = {
      address_prefix    = "10.100.1.0/24"
      service_endpoints = ["Microsoft.KeyVault"]
    }
    "snet-data" = {
      address_prefix = "10.100.2.0/24"
      create_nsg     = true
    }
  }
  tags = {
    environment = "test"
    purpose     = "module-validation"
  }
}
 
run "vnet_creates_with_correct_address_space" {
  command = plan
 
  assert {
    condition     = azurerm_virtual_network.this.address_space[0] == "10.100.0.0/16"
    error_message = "VNet address space does not match expected CIDR"
  }
}
 
run "subnets_create_with_correct_prefixes" {
  command = plan
 
  assert {
    condition     = azurerm_subnet.this["snet-app"].address_prefixes[0] == "10.100.1.0/24"
    error_message = "App subnet prefix does not match"
  }
 
  assert {
    condition     = azurerm_subnet.this["snet-data"].address_prefixes[0] == "10.100.2.0/24"
    error_message = "Data subnet prefix does not match"
  }
}
 
run "nsg_created_for_subnets_by_default" {
  command = plan
 
  assert {
    condition     = length(azurerm_network_security_group.this) == 2
    error_message = "Expected 2 NSGs (one per subnet with create_nsg=true)"
  }
}
 
run "tags_include_managed_by" {
  command = plan
 
  assert {
    condition     = azurerm_virtual_network.this.tags["managed_by"] == "terraform"
    error_message = "managed_by tag missing from VNet"
  }
}

Running terraform test produces output like this:

$ terraform test
tests/main.tftest.hcl... in progress
  run "vnet_creates_with_correct_address_space"... pass
  run "subnets_create_with_correct_prefixes"... pass
  run "nsg_created_for_subnets_by_default"... pass
  run "tags_include_managed_by"... pass
tests/main.tftest.hcl... tearing down
tests/main.tftest.hcl... pass

Success! 4 passed, 0 failed.

No Go installation. No test harness. Just HCL assertions. For unit-level validation (does the plan produce the expected resources with the expected attributes?), this covers 80% of what teams previously needed Terratest for.
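One caveat: even command = plan runs require the azurerm provider to initialize, which normally means Azure credentials. Since Terraform 1.7 you can sidestep that in unit tests with a mock provider. A sketch of what adding one to the same test file might look like:

```hcl
# tests/main.tftest.hcl (addition) -- mock the provider so `terraform test`
# runs without Azure credentials; computed attributes become placeholder values
mock_provider "azurerm" {}

run "plan_succeeds_without_credentials" {
  command = plan

  assert {
    condition     = azurerm_virtual_network.this.name == var.name
    error_message = "VNet name should match the input variable"
  }
}
```

With the provider mocked, only input-derived attributes are meaningful to assert on; anything computed by Azure (IDs, GUIDs) is a placeholder.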

Publishing to Azure Container Registry

If you prefer not to use Terraform Cloud, Azure Container Registry can host your modules as OCI artifacts:

# Create a Basic-tier ACR (~$5/month)
az acr create \
  --resource-group rg-platform-shared \
  --name acrterraformmodules \
  --sku Basic
 
# Login to ACR
az acr login --name acrterraformmodules
 
# Package and push the module
cd terraform-azurerm-vnet
tar -czf ../terraform-azurerm-vnet-2.0.0.tar.gz .
oras push acrterraformmodules.azurecr.io/terraform/azurerm/vnet:2.0.0 \
  --artifact-type application/vnd.hashicorp.terraform.module \
  ../terraform-azurerm-vnet-2.0.0.tar.gz

Consumers reference the module with a version constraint:

module "network" {
  source  = "acrterraformmodules.azurecr.io/terraform/azurerm/vnet"
  version = "~> 2.0"
 
  name                = "vnet-platform-prod"
  location            = "westeurope"
  resource_group_name = azurerm_resource_group.network.name
  address_space       = ["10.0.0.0/16"]
 
  subnets = {
    "snet-app-prod" = {
      address_prefix    = "10.0.1.0/24"
      service_endpoints = ["Microsoft.KeyVault", "Microsoft.Sql"]
    }
    "snet-aks-prod" = {
      address_prefix = "10.0.4.0/22"
    }
  }
}

The ~> 2.0 constraint allows minor and patch updates (2.0.1, 2.1.0) but blocks breaking changes (3.0.0). Teams upgrade major versions explicitly.
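
In CI, the test-then-publish flow can be wired together roughly as follows. The workflow name, secret name, and tag trigger are assumptions, and the az login step presumes an Azure service principal is available:

```yaml
# .github/workflows/publish-module.yml (sketch)
name: publish-module
on:
  push:
    tags: ["v*"] # publish on version tags like v2.0.0

jobs:
  test-and-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Unit tests
        run: |
          terraform fmt -check -recursive
          terraform init -backend=false
          terraform test

      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }} # assumed secret name

      - name: Push to ACR
        run: |
          az acr login --name acrterraformmodules
          VERSION="${GITHUB_REF_NAME#v}"
          tar -czf module.tar.gz --exclude=.git .
          oras push "acrterraformmodules.azurecr.io/terraform/azurerm/vnet:${VERSION}" \
            --artifact-type application/vnd.hashicorp.terraform.module \
            module.tar.gz
```

Gating the push on the test job means a module version only ever appears in the registry after its assertions pass.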

Real-World Results

The most instructive moment came during a v1.x to v2.0 migration. Version 2.0 of our VNet module added mandatory NSG creation (previously optional). Three projects used ~> 1.0 and were unaffected. When teams were ready to upgrade, they added moved blocks to prevent Terraform from destroying and recreating NSGs:

# In the consuming project's main.tf, during v2.0 migration
moved {
  from = azurerm_network_security_group.legacy["snet-app"]
  to   = module.network.azurerm_network_security_group.this["snet-app"]
}

Without module versioning, the "fix" would have been applied to every copy simultaneously, with no rollback path. Instead, the three projects upgraded over two weeks, each validating with terraform plan before applying.

Module adoption metrics across one client organization (8 projects, 6 months):

Metric                             | Before modules                          | After modules
VNet configurations                | 8 unique copies                         | 1 module, 8 consumers
Time to add flow logs to all VNets | ~3 days (find, patch, test each copy)   | 2 hours (update module, consumers auto-pull)
Drift between environments         | Regular (weekly portal changes)         | Rare (pipeline enforces state)
New project network setup          | 4+ hours (copy-paste, customize)        | 20 minutes (reference module, set variables)

Key Takeaways

  • Start with structure, not features. A module with main.tf, variables.tf, outputs.tf, and a test file is already better than 200 lines of inline Terraform. Add features incrementally.
  • Use terraform test for new modules. Native testing covers unit validation without Go. Reserve Terratest for integration tests that need to create real Azure resources.
  • Version everything. Even internal modules. The ~> 2.0 constraint pattern prevents surprise breaking changes. Semver is cheap insurance.
  • Consider Azure Verified Modules (AVM) before writing from scratch. Microsoft maintains these as reference implementations. If one exists for your resource type, start there and customize rather than reinventing.
  • Evaluate OpenTofu, but do not migrate on a deadline. Module compatibility is high but not 100%. Test your specific modules before committing to the switch.

If you are looking for the foundational Terraform patterns that this module design builds on -- remote state, drift detection, naming conventions -- see our Terraform Azure best practices guide. The module patterns here are the natural next step after those basics are in place.

Need help modularizing your Terraform codebase or setting up a private module registry? Our infrastructure consulting services include module design, testing, and CI/CD pipeline setup for Azure-focused teams.


About the author

Martin Rylko

Senior Cloud Architect & DevOps Engineer

14+ years in IT – from on-premises datacenters and Hyper-V clustering to cloud infrastructure on Microsoft Azure. I specialize in Landing Zones, IaC automation, Kubernetes and security compliance.


Frequently Asked Questions

Should I use Azure Container Registry or Terraform Cloud as my private module registry?
Azure Container Registry (ACR) works well for Azure-centric teams -- it supports OCI artifacts natively, integrates with Entra ID RBAC, and costs about $5/month for Basic tier. Terraform Cloud private registry is better if you already use TFC for state management and want built-in module documentation and version browsing. For most Azure-only shops, ACR is simpler and cheaper.
How should I test Terraform modules before publishing?
Use a three-layer testing approach: terraform validate and terraform fmt for syntax checks (seconds), terraform plan with mock variable files for logical validation (minutes), and terraform test (native since v1.6) or Terratest for integration tests that deploy real resources and verify behavior (10-20 minutes). Run integration tests in a dedicated sandbox subscription with auto-cleanup.
Is OpenTofu compatible with my existing Terraform modules?
OpenTofu 1.6+ is compatible with Terraform 1.5.x module syntax and state format. Most modules work without changes. The divergence starts with features added after the fork -- OpenTofu has state encryption and some different provider lock behavior. If you are evaluating OpenTofu, test your module suite against both runtimes before committing to a switch.
How do I handle breaking changes when updating a shared module?
Follow semantic versioning strictly: breaking changes get a major version bump (v1.x to v2.0). Publish the new version alongside the old one so consumers can migrate at their own pace. Document a migration guide in the module README. In CI/CD, pin exact module versions and use Dependabot or Renovate to create PRs when new versions are available.

You might also like

Terraform Azure Best Practices: Modules & CI/CD

Terraform Azure best practices for production projects. Covers remote state locking, module structure, drift detection, naming conventions, and testing.


Bicep CI/CD: GitHub Actions Pipeline for Azure

Build a production Bicep deployment pipeline with GitHub Actions. Covers what-if previews, environment approvals, OIDC authentication, and rollback strategies.


Kubernetes AKS Production Checklist for Architects

Kubernetes AKS production readiness checklist covering Azure CNI networking, Workload Identity RBAC, cluster autoscaling, monitoring, and DR strategy.
