Checkov for Terraform and Kubernetes IaC Security Testing

Checkov is an open-source static analysis tool for Infrastructure as Code (IaC). It scans Terraform, Kubernetes, CloudFormation, Helm, Dockerfiles, and more for security misconfigurations and compliance violations. With 1000+ built-in checks and the ability to write custom checks in Python or YAML, Checkov catches misconfigurations before they're deployed to production.

Key Takeaways

Checkov scans IaC files without deploying them. It reads your Terraform files, Kubernetes manifests, or CloudFormation templates and checks them against security policies before any cloud resources are created.

1000+ built-in checks cover CIS Benchmarks, NIST, and HIPAA. Individual checks map to compliance frameworks, so you can report against specific standards.

Custom checks can be written in Python or YAML. YAML-based checks are simpler for attribute-level checks. Python checks are necessary for complex cross-resource logic.

Checkov understands Terraform modules and variable references. It resolves var. references and evaluates module inputs, catching misconfigurations that simple text matching would miss.

Use --soft-fail to report without blocking CI during onboarding. This surfaces all findings without failing the pipeline; then gradually enable --hard-fail-on for specific check IDs as you fix them.
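For example, variable resolution means a default that disables encryption is still caught, even though the resource block itself never mentions `false` (a minimal sketch; the variable and resource names are illustrative):

```hcl
# variables.tf (illustrative)
variable "encrypt_storage" {
  type    = bool
  default = false
}

# main.tf — Checkov resolves var.encrypt_storage to false and
# flags the instance as unencrypted
resource "aws_db_instance" "db" {
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  storage_encrypted = var.encrypt_storage
}
```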

Installation

# pip
pip install checkov

# Homebrew
brew install checkov

# Docker
docker pull bridgecrew/checkov

Basic Usage

Scan Terraform

# Scan the current directory
checkov -d .

# Scan a specific Terraform directory
checkov -d ./terraform/prod/

# Scan a specific file
checkov -f main.tf

Scan Kubernetes Manifests

checkov -d k8s/ --framework kubernetes

Scan CloudFormation

checkov -d cloudformation/ --framework cloudformation

Scan Helm Charts

checkov -d helm/ --framework helm

Scan Dockerfiles

checkov -f Dockerfile --framework dockerfile

Reading Checkov Output

Checkov's table output shows:

Check: CKV_AWS_18: "Ensure the S3 bucket has access logging enabled"
    FAILED for resource: aws_s3_bucket.data_bucket
    File: /terraform/s3.tf:1-20
    Guide: https://docs.bridgecrew.io/docs/s3_13-enable-logging

        1 | resource "aws_s3_bucket" "data_bucket" {
        2 |   bucket = "my-data-bucket"
        3 | }

Each finding shows:

  • Check ID: the specific policy that failed (CKV_AWS_18)
  • Resource: which Terraform resource failed
  • File and line numbers: where to find it
  • Documentation link: why this matters and how to fix it

Common Terraform Findings and Fixes

S3 Bucket Security

# ❌ Fails multiple checks
resource "aws_s3_bucket" "data" {
  bucket = "my-data-bucket"
}

# ✅ Passes S3 security checks
resource "aws_s3_bucket" "data" {
  bucket = "my-data-bucket"
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "data" {
  bucket                  = aws_s3_bucket.data.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_logging" "data" {
  bucket        = aws_s3_bucket.data.id
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "s3-access-logs/"
}

Security Groups

# ❌ Overly permissive
resource "aws_security_group" "web" {
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # CKV_AWS_25: SSH open to world
  }
  
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]  # CKV_AWS_277: unrestricted egress
  }
}

# ✅ Restrictive
resource "aws_security_group" "web" {
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # HTTPS only
  }
  
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]  # Internal only
  }
}

RDS Security

# ❌ Insecure RDS
resource "aws_db_instance" "db" {
  engine         = "postgres"
  instance_class = "db.t3.micro"
  publicly_accessible = true      # CKV_AWS_17: RDS publicly accessible
  storage_encrypted   = false     # CKV_AWS_16: encryption disabled
  deletion_protection = false     # CKV_AWS_293: no deletion protection
  backup_retention_period = 0     # CKV_AWS_133: no backups
}

# ✅ Secure RDS
resource "aws_db_instance" "db" {
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  publicly_accessible  = false
  storage_encrypted    = true
  deletion_protection  = true
  backup_retention_period = 7
  multi_az             = true
  
  enabled_cloudwatch_logs_exports = ["postgresql", "upgrade"]
}

Kubernetes Security Checks

# ❌ Insecure pod spec
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:latest
          # No securityContext — running as root
          # No resource limits

# ✅ Secure pod spec
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true        # CKV_K8S_6: don't run as root
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: app
          image: myapp:1.2.3    # CKV_K8S_14: pin image tag
          securityContext:
            allowPrivilegeEscalation: false  # CKV_K8S_20
            readOnlyRootFilesystem: true     # CKV_K8S_22
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"

Filtering Checks

Skip Specific Checks

# Skip checks you've intentionally accepted
checkov -d . --skip-check CKV_AWS_7,CKV_AWS_18
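Checks can also be suppressed inline, next to the resource they apply to, with a `checkov:skip` comment (the text after the second colon is a free-form justification):

```hcl
resource "aws_s3_bucket" "data" {
  #checkov:skip=CKV_AWS_18:Access logs are collected by a centralized logging account
  bucket = "my-data-bucket"
}
```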

Run Only Specific Checks

# Run only CIS AWS Benchmark checks
checkov -d . --check CKV_AWS_1,CKV_AWS_2,CKV_AWS_3

# Run only a framework
checkov -d . --framework terraform

Check Severity Filtering

# Run only checks rated HIGH or above (severity metadata requires a platform API key)
checkov -d . --check HIGH --bc-api-key $BC_API_KEY
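The flags above can also live in a config file so local runs and CI stay in sync. Checkov picks up `.checkov.yaml` from the scanned directory (or a file passed via `--config-file`); keys mirror the CLI flag names. A sketch:

```yaml
# .checkov.yaml
directory:
  - terraform/
framework:
  - terraform
skip-check:
  - CKV_AWS_7
  - CKV_AWS_18
quiet: true
compact: true
```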

Writing Custom Checks

YAML Custom Check (Simple)

YAML checks work for attribute-level assertions:

# custom-checks/enforce_tags.yaml
metadata:
  name: "Enforce required tags"
  id: "CKV_CUSTOM_1"
  category: "GENERAL_SECURITY"

definition:
  and:
    - cond_type: attribute
      resource_types:
        - aws_instance
        - aws_s3_bucket
        - aws_rds_cluster
      attribute: tags.Environment
      operator: exists
    - cond_type: attribute
      resource_types:
        - aws_instance
        - aws_s3_bucket
        - aws_rds_cluster
      attribute: tags.Owner
      operator: exists

Run with custom checks:

checkov -d . --external-checks-dir ./custom-checks/

Python Custom Check

Python checks support complex cross-resource logic:

# custom-checks/check_s3_has_logging.py
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck

class S3BucketHasLogging(BaseResourceCheck):
    def __init__(self):
        name = "Ensure S3 bucket logging is configured"
        id = "CKV_CUSTOM_S3_1"
        supported_resources = ['aws_s3_bucket_logging']
        categories = [CheckCategories.LOGGING]
        super().__init__(name=name, id=id, categories=categories, 
                        supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        target_bucket = conf.get('bucket', [None])[0]
        target_prefix = conf.get('target_prefix', [None])[0]
        
        if not target_bucket:
            return CheckResult.FAILED
        if not target_prefix:
            return CheckResult.FAILED
            
        return CheckResult.PASSED

scanner = S3BucketHasLogging()

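The list indexing in `conf.get('bucket', [None])[0]` reflects how Checkov hands parsed HCL to `scan_resource_conf`: each attribute value arrives wrapped in a list. A standalone sketch of the same logic (no checkov import; the sample dicts are made up to mimic that shape):

```python
# Stand-in for the parsed HCL dict that Checkov passes to
# scan_resource_conf: each attribute value is wrapped in a list.
def scan_resource_conf(conf):
    """Return 'PASSED' only when both bucket and target_prefix are set."""
    target_bucket = conf.get("bucket", [None])[0]
    target_prefix = conf.get("target_prefix", [None])[0]
    if not target_bucket or not target_prefix:
        return "FAILED"
    return "PASSED"

complete = {"bucket": ["logs-bucket-id"], "target_prefix": ["s3-access-logs/"]}
missing_prefix = {"bucket": ["logs-bucket-id"]}

print(scan_resource_conf(complete))        # PASSED
print(scan_resource_conf(missing_prefix))  # FAILED
```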
Output Formats

# CLI table (default)
checkov -d .

# JSON
checkov -d . --output json > checkov-results.json

# JUnit XML for CI systems
checkov -d . --output junitxml > checkov-results.xml

# SARIF for GitHub Code Scanning
checkov -d . --output sarif > checkov.sarif

# GitHub annotations format
checkov -d . --output github_failed_only

CI/CD Integration

GitHub Actions

# .github/workflows/checkov.yml
name: IaC Security Scan
on: [push, pull_request]

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Run Checkov IaC Scan
        uses: bridgecrewio/checkov-action@master
        with:
          directory: terraform/
          framework: terraform
          output_format: sarif
          output_file_path: checkov.sarif
          soft_fail: false
          quiet: true
      
      - name: Upload SARIF to GitHub
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: checkov.sarif
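Checkov also ships a pre-commit hook, which catches findings before they ever reach CI. A sketch of the hook configuration (pin `rev` to whatever release tag you actually use):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/bridgecrewio/checkov
    rev: 3.2.0   # pin to a released tag
    hooks:
      - id: checkov
        args: ["--quiet", "-d", "terraform/"]
```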

Onboarding with soft-fail

When introducing Checkov to an existing codebase, start with --soft-fail to see all findings without blocking CI:

# See what you have
checkov -d terraform/ --soft-fail --output json > findings.json

# Count findings by check ID
jq '.results.failed_checks | group_by(.check_id) | map({check_id: .[0].check_id, count: length}) | sort_by(.count) | reverse' findings.json

# Fix high-volume findings first, then enable hard failure for those checks
checkov -d terraform/ --hard-fail-on CKV_AWS_18,CKV_AWS_16

Connecting to Prisma Cloud / Bridgecrew

Checkov integrates with Prisma Cloud (formerly Bridgecrew) for centralized findings, drift detection, and compliance reporting:

# Run with Bridgecrew platform reporting
checkov -d . --bc-api-key $BC_API_KEY --repo-id $GITHUB_REPOSITORY

# View results in the Prisma Cloud dashboard

This enables:

  • Historical trending of security findings
  • Compliance framework reporting (CIS, NIST, PCI-DSS)
  • Policy-as-code with automatic PR comments

Summary

Checkov provides comprehensive IaC security scanning:

  • Terraform: 700+ checks covering AWS, GCP, Azure security best practices
  • Kubernetes: 100+ checks covering pod security, RBAC, network policies
  • CloudFormation: 200+ checks
  • Dockerfiles: best practices for minimal attack surface
  • Custom checks: YAML for attribute checks, Python for complex logic

Integration path:

  1. Run checkov -d . --soft-fail to get a baseline
  2. Fix CRITICAL findings first
  3. Enable --hard-fail-on for fixed check IDs in CI
  4. Gradually expand hard-fail coverage as your team addresses findings
