Writing Custom Checkov Policies: Python and YAML for IaC Security Rules
Checkov ships with over 1,000 built-in checks — but every organization has requirements those built-ins don't cover. Naming conventions, internal tagging standards, specific CIDR allowlists, cross-resource validation — these need custom policies.
Checkov supports two custom policy formats: Python classes for complex logic, and YAML for declarative attribute checks. Both integrate with the same runner and produce the same output format.
When to Write Custom Policies
Write a custom Checkov policy when:
- A built-in check is close but not quite right (e.g., you need an `Environment` tag from an approved list, not just any `Environment` value)
- You have internal naming conventions (`team-*-prod-*` resource naming)
- You need cross-resource validation (every S3 bucket must have a corresponding CloudTrail log)
- Compliance frameworks specific to your organization require rules not in public frameworks
YAML Custom Checks (No Python Required)
YAML checks are the easiest way to add custom rules. They work for attribute-level assertions and support most common patterns.
Basic YAML Check Structure
```yaml
# checkov/custom_policies/s3_require_environment_tag.yaml
metadata:
  name: "S3 buckets must have an approved Environment tag"
  id: "CKV2_CUSTOM_1"
  category: "GENERAL_SECURITY"
scope:
  provider: aws
definition:
  and:
    - cond_type: attribute
      resource_types:
        - aws_s3_bucket
      attribute: "tags.Environment"
      operator: within
      value:
        - production
        - staging
        - development
        - testing
```

Run with your custom policy directory:
```bash
checkov -d ./terraform --external-checks-dir ./checkov/custom_policies
```

Operators Available in YAML Checks
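The operator set below covers string matching, existence, numeric comparison, and set membership. As one worked sketch beyond simple equality (the check ID here is hypothetical; `backup_retention_period` is the standard `aws_db_instance` attribute), a numeric comparison check:

```yaml
metadata:
  name: "RDS backup retention must be at least 7 days"
  id: "CKV2_CUSTOM_50"
  category: "BACKUP_AND_RECOVERY"
scope:
  provider: aws
definition:
  cond_type: attribute
  resource_types:
    - aws_db_instance
  attribute: backup_retention_period
  operator: greater_than_or_equal
  value: 7
```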
```yaml
# String operators
operator: equals              # exact match
operator: not_equals
operator: contains            # substring
operator: not_contains
operator: starting_with
operator: ending_with
operator: regex_match         # Python re.match semantics

# Existence operators
operator: exists              # attribute is present and non-null
operator: not_exists

# Comparison operators
operator: greater_than
operator: less_than
operator: greater_than_or_equal
operator: less_than_or_equal

# Set operators
operator: within              # value is in the supplied list
operator: not_within
```

YAML Check with and / or Logic
```yaml
metadata:
  name: "RDS must be encrypted with customer key or Multi-AZ"
  id: "CKV2_CUSTOM_2"
  category: "ENCRYPTION"
scope:
  provider: aws
definition:
  or:
    - cond_type: attribute
      resource_types:
        - aws_db_instance
      attribute: kms_key_id
      operator: exists
    - cond_type: attribute
      resource_types:
        - aws_db_instance
      attribute: multi_az
      operator: equals
      value: "true"
```

Enforcing Naming Conventions
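Regex-based conventions are easiest to get right if you exercise the pattern before encoding it in a policy. A quick Python sanity check of the pattern used in the check below (the sample resource names are invented):

```python
import re

# The naming pattern enforced by the YAML check: approved team prefix,
# kebab-case service name, numeric suffix
PATTERN = re.compile(r"^(platform|data|infra|security)-[a-z0-9-]+-[0-9]+$")

valid = ["platform-api-gateway-1", "data-etl-runner-42"]
invalid = ["Platform-api-1", "platform_api_1", "qa-api-1"]

assert all(PATTERN.match(name) for name in valid)
assert not any(PATTERN.match(name) for name in invalid)
print("pattern behaves as expected")
```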
```yaml
metadata:
  name: "EC2 instances must follow team naming convention"
  id: "CKV2_CUSTOM_3"
  category: "CONVENTION"
scope:
  provider: aws
definition:
  cond_type: attribute
  resource_types:
    - aws_instance
  attribute: tags.Name
  operator: regex_match
  value: "^(platform|data|infra|security)-[a-z0-9-]+-[0-9]+$"
```

Python Custom Checks
For logic that YAML can't express — computed values, cross-resource lookups, conditional rules based on other attributes — write Python classes.
Check Structure
All Checkov Python checks inherit from one of:
- `BaseResourceCheck` — evaluates a single resource
- `BaseResourcePairCheck` — evaluates pairs of resources (e.g., S3 bucket + bucket policy)
- `BaseGraphCheck` — access to the full resource graph
```python
# checkov/custom_checks/check_s3_logging_enabled.py
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck


class S3LoggingCheck(BaseResourceCheck):
    def __init__(self):
        name = "Ensure S3 bucket has server access logging enabled"
        id = "CKV2_AWS_CUSTOM_10"
        categories = [CheckCategories.LOGGING]
        supported_resources = ['aws_s3_bucket']
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        """
        conf is a dict of the resource's Terraform attributes.
        Return CheckResult.PASSED or CheckResult.FAILED.
        """
        logging = conf.get("logging")
        if not logging:
            return CheckResult.FAILED
        # logging is a list with one element (Terraform block representation)
        if isinstance(logging, list):
            logging = logging[0] if logging else None
        if not logging or not logging.get("target_bucket"):
            return CheckResult.FAILED
        return CheckResult.PASSED


# Register the check
scanner = S3LoggingCheck()
```

Testing Custom Python Checks
Write unit tests for your checks with pytest, calling `scan_resource_conf` directly on simulated resource configs:
```python
# tests/test_s3_logging_check.py
from checkov.common.models.enums import CheckResult

# Import to register the check
from checkov.custom_checks.check_s3_logging_enabled import scanner


class TestS3LoggingCheck:
    def test_passes_with_logging(self):
        # Simulate a resource config with logging enabled
        resource_conf = {
            "bucket": ["my-bucket"],
            "logging": [{
                "target_bucket": ["my-logs-bucket"],
                "target_prefix": ["s3-access-logs/"],
            }],
        }
        result = scanner.scan_resource_conf(conf=resource_conf)
        assert result == CheckResult.PASSED

    def test_fails_without_logging(self):
        resource_conf = {
            "bucket": ["my-bucket"],
        }
        result = scanner.scan_resource_conf(conf=resource_conf)
        assert result == CheckResult.FAILED

    def test_fails_with_empty_logging_block(self):
        resource_conf = {
            "bucket": ["my-bucket"],
            "logging": [{}],
        }
        result = scanner.scan_resource_conf(conf=resource_conf)
        assert result == CheckResult.FAILED
```

Run tests:

```bash
pytest tests/test_s3_logging_check.py -v
```

Cross-Resource Pair Check
Check that every resource of type A has a corresponding resource of type B:
```python
# Ensure every aws_s3_bucket has a corresponding aws_s3_bucket_versioning
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_pair_check import BaseResourcePairCheck


class S3VersioningPairCheck(BaseResourcePairCheck):
    def __init__(self):
        name = "Ensure S3 bucket has versioning enabled via aws_s3_bucket_versioning"
        id = "CKV2_AWS_CUSTOM_11"
        categories = [CheckCategories.BACKUP_AND_RECOVERY]
        supported_resources = ['aws_s3_bucket']
        resource_type_configs = ['aws_s3_bucket_versioning']
        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_resources=supported_resources,
            resource_type_configs=resource_type_configs,
        )

    def scan_resource_conf(self, conf, entity_type):
        return CheckResult.UNKNOWN

    def scan_pair_conf(self, conf, entity_configuration1, entity_configuration2):
        # entity_configuration1: the aws_s3_bucket
        # entity_configuration2: list of aws_s3_bucket_versioning resources for this bucket
        if not entity_configuration2:
            return CheckResult.FAILED
        for versioning in entity_configuration2:
            versioning_config = versioning.get("versioning_configuration", [{}])
            if isinstance(versioning_config, list):
                versioning_config = versioning_config[0] if versioning_config else {}
            status = versioning_config.get("status", ["Suspended"])
            if isinstance(status, list):
                status = status[0]
            if status.lower() == "enabled":
                return CheckResult.PASSED
        return CheckResult.FAILED


scanner = S3VersioningPairCheck()
```

Graph Check for Complex Cross-Resource Rules
Graph checks access the full resource dependency graph. Use them when you need to traverse relationships:
```python
import json

from checkov.terraform.checks.graph_checks.base_graph_check import BaseGraphCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class ECSTaskDefinitionEncryptedSecrets(BaseGraphCheck):
    def __init__(self):
        name = "ECS task definitions must not use hardcoded secrets in environment variables"
        id = "CKV2_AWS_CUSTOM_12"
        categories = [CheckCategories.ENCRYPTION]
        supported_entities = ['aws_ecs_task_definition']
        block_type = "resource"
        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_entities=supported_entities,
            block_type=block_type,
        )

    def scan_resource_conf(self, conf):
        container_defs = conf.get("container_definitions", ["[]"])
        if isinstance(container_defs, list):
            container_defs = container_defs[0]
        try:
            containers = json.loads(container_defs)
        except (json.JSONDecodeError, TypeError):
            return CheckResult.UNKNOWN
        secret_patterns = [
            'password', 'secret', 'key', 'token', 'credential', 'api_key'
        ]
        for container in containers:
            env_vars = container.get('environment', [])
            for env in env_vars:
                name_lower = env.get('name', '').lower()
                value = env.get('value', '')
                for pattern in secret_patterns:
                    # Flag literal values; references like secret ARNs are allowed
                    if pattern in name_lower and value and not value.startswith('arn:'):
                        return CheckResult.FAILED
        return CheckResult.PASSED


scanner = ECSTaskDefinitionEncryptedSecrets()
```

Organizing Custom Policies
Structure your custom policy directory to match Checkov's conventions:
```
checkov/
  custom_policies/
    yaml/
      s3_tagging.yaml
      ec2_naming.yaml
      rds_backup.yaml
    python/
      __init__.py
      s3_logging.py
      ecs_secrets.py
      rds_parameter_group.py
    tests/
      test_s3_logging.py
      test_ecs_secrets.py
checkov.yaml          # Project-wide Checkov config
```

Project-wide Checkov configuration:
```yaml
# checkov.yaml
soft-fail: false
external-checks-dir:
  - ./checkov/custom_policies/yaml
  - ./checkov/custom_policies/python
framework:
  - terraform
check:
  - CKV_AWS_*
  - CKV2_AWS_*
  - CKV2_CUSTOM_*
skip-check:
  - CKV_AWS_144  # Skip cross-region replication check for dev environments
output: cli
compact: true
```

Running Custom Checks in CI
```yaml
# .github/workflows/checkov-custom.yml
name: IaC Security + Custom Policies

on:
  pull_request:
    paths:
      - 'terraform/**'
      - 'checkov/**'

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Checkov with custom policies
        uses: bridgecrewio/checkov-action@master
        with:
          directory: terraform/
          config_file: checkov.yaml
          soft_fail: false
          output_format: sarif
          output_file_path: results.sarif

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: results.sarif

      - name: Test custom Python checks
        run: |
          pip install checkov pytest
          pytest checkov/custom_policies/tests/ -v
```

The SARIF upload integrates violations directly into the GitHub PR UI as code annotations.
Metadata Fields for Custom Checks
When writing Python checks, set metadata fields that appear in Checkov's output:
```python
from checkov.common.models.enums import CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck


class MyCheck(BaseResourceCheck):
    def __init__(self):
        name = "Human-readable check name"
        id = "CKV2_CUSTOM_20"
        categories = [CheckCategories.GENERAL_SECURITY]
        supported_resources = ['aws_security_group']
        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_resources=supported_resources,
            # Optional: link surfaced alongside failures in Checkov output
            guideline="https://docs.internal/security/sg-rules",
        )
        # Optional informational metadata (set after super().__init__
        # so the base class doesn't overwrite it)
        self.benchmark = "CIS AWS 1.4"
```

ID naming convention:
- `CKV_` — original Checkov check format
- `CKV2_` — updated Checkov format (preferred for new checks)
- `CKV2_CUSTOM_` — your organization's custom checks
Use a consistent prefix so your custom checks are easy to identify in output and easy to skip when needed.
Versioning and Distributing Custom Policies
Share custom policies across teams by publishing them as a package:
```bash
# Install from a private git repo
pip install git+https://github.com/myorg/checkov-policies.git

# Or as a local package
pip install -e ./checkov-policies/
```

The installed package auto-registers its checks when imported — Checkov picks them up without any additional configuration.
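A minimal packaging sketch for such a repo (the package and project names here are hypothetical, and it assumes a setuptools-based build):

```toml
# checkov-policies/pyproject.toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "myorg-checkov-policies"
version = "0.1.0"
dependencies = ["checkov>=3.0"]

[tool.setuptools.packages.find]
include = ["checkov_policies*"]
```

Teams then pin a version of the policy package the same way they pin any other dependency.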
Summary
Custom Checkov policies fill the gap between generic security rules and your organization's specific requirements:
- YAML checks for attribute validation — fast to write, no Python needed
- Python `BaseResourceCheck` for computed logic, conditional rules
- Python `BaseResourcePairCheck` for two-resource relationships
- Python graph checks for full resource graph traversal
- Unit test every check with pytest — policies without tests drift
- SARIF output integrates violations into GitHub PR annotations
Start with YAML for simple attribute rules. Graduate to Python when you need conditionals, regex on computed values, or multi-resource validation. The test pattern for custom checks is identical to application code: write the test first, then the policy.