Working Around a Long-Standing Terraform AWS Provider Bug

November 3, 2025

Our CloudFront distribution kept showing up in every Terraform plan even when nothing changed. The culprit was origin_shield.
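The distribution declared the block statically, along these lines (a reconstruction from the dynamic version later in the post, not the exact original):

```hcl
origin_shield {
  enabled              = var.enable_cloudfront_origin_shield
  origin_shield_region = data.aws_region.current.name
}
```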

When origin_shield is disabled, AWS does not include it in the refreshed state. Terraform sees it missing and tries to re-add it with enabled = false on every plan. The obvious fix was to make the block dynamic so it only appears when actually enabled.

dynamic "origin_shield" {
  for_each = var.enable_cloudfront_origin_shield ? [1] : []
  content {
    enabled              = var.enable_cloudfront_origin_shield
    origin_shield_region = data.aws_region.current.name
  }
}

That fixed the drift. But it introduced a different problem.

When enable_cloudfront_origin_shield changes from true to false, the dynamic block stops emitting the block. Terraform plans to remove origin_shield. The plan looks correct. But after apply, origin shield is still enabled in the AWS Console. The provider silently does nothing. This is a known bug filed in April 2022 with no movement in over three years.

A static block causes repeated drift. A dynamic block fixes the drift but breaks disabling. Neither works on its own.

The workaround is a null_resource that only exists when origin shield is disabled. It runs a script that calls the AWS API directly to set OriginShield.Enabled = false.

resource "null_resource" "cloudfront_origin_shield_disable" {
  count = var.create_cloudfront == "yes" && !var.enable_cloudfront_origin_shield ? 1 : 0

  triggers = {
    enable_origin_shield = var.enable_cloudfront_origin_shield
    distribution_id      = aws_cloudfront_distribution.default[0].id
    script_hash          = filemd5("${path.module}/bin/disable-cloudfront-origin-shield.sh")
  }

  provisioner "local-exec" {
    command = "${path.module}/bin/disable-cloudfront-origin-shield.sh ${aws_cloudfront_distribution.default[0].id} wordpress"
  }

  depends_on = [aws_cloudfront_distribution.default]
}
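The count expression assumes two input variables. Their declarations would look roughly like this (illustrative sketch with assumed types and defaults; the module's actual declarations may differ):

```hcl
variable "create_cloudfront" {
  description = "Whether to create the CloudFront distribution (\"yes\" or \"no\")."
  type        = string
  default     = "yes"
}

variable "enable_cloudfront_origin_shield" {
  description = "Whether Origin Shield is enabled on the origin."
  type        = bool
  default     = false
}
```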

The script fetches the current distribution config, checks if origin shield is already disabled, and exits early if so. The core of it patches the config with jq and pushes it back via the AWS CLI.

CLOUDFRONT_CONFIG=$(aws cloudfront get-distribution-config --id "$DISTRIBUTION_ID" --output json)
ETAG=$(echo "$CLOUDFRONT_CONFIG" | jq -r '.ETag')
DISTRIBUTION_CONFIG=$(echo "$CLOUDFRONT_CONFIG" | jq '.DistributionConfig')

jq --arg origin_id "$ORIGIN_ID" '
  .Origins.Items = [
    .Origins.Items[] |
    if .Id == $origin_id then
      .OriginShield.Enabled = false
    else
      .
    end
  ]
' <<< "$DISTRIBUTION_CONFIG" | aws cloudfront update-distribution \
    --id "$DISTRIBUTION_ID" \
    --distribution-config file:///dev/stdin \
    --if-match "$ETAG"
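The early-exit check can be sketched as a small jq predicate over the same DistributionConfig shape (a sketch, assuming jq is installed; the `origin_shield_enabled` function and the sample config are illustrative, not taken from the actual script):

```shell
# Return 0 (true) if OriginShield is enabled for the given origin ID
# in a DistributionConfig JSON document read from stdin.
origin_shield_enabled() {
  jq -e --arg origin_id "$1" '
    .Origins.Items[]
    | select(.Id == $origin_id)
    | .OriginShield.Enabled == true
  ' > /dev/null
}

# Illustrative config resembling what get-distribution-config returns.
SAMPLE='{"Origins":{"Items":[{"Id":"wordpress","OriginShield":{"Enabled":true,"OriginShieldRegion":"us-east-1"}}]}}'

if printf '%s' "$SAMPLE" | origin_shield_enabled "wordpress"; then
  echo "origin shield enabled; proceeding to disable"
else
  echo "origin shield already disabled; nothing to do"
fi
```

jq's `-e` flag maps the filter result to an exit status, so the function composes cleanly with `if` in shell.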

The null_resource only runs on the transition to disabled. If origin shield gets re-enabled and then disabled again, the change in triggers forces a re-run.

Calling the AWS API directly in a provisioner is not ideal. But the bug has been open for years with no fix in sight. Noisy plans erode trust in the plan output. This keeps things clean until the upstream fix lands.
