It’s now easy to run custom Rego policies against your live AWS account(s) with Trivy, as of v0.33.0.
In this post I’ll run through several example policies to demonstrate how it works and hopefully give you the foundations to write your own policies.
What is Trivy?
Trivy is a multifunctional, open-source security scanner. It can scan various targets (filesystems, containers, git repositories and more) in order to discover security issues (vulnerabilities, misconfigurations, and secrets). In short, Trivy can find a bunch of different types of security issue in pretty much anything you point it at, for free.
What is Rego?
Rego is a purpose-built declarative language designed purely for the definition of policy. Rego processes and transforms structured input documents like JSON using simple, human-readable assertions, making it a flexible and powerful tool for defining and applying policy.
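As a quick illustration (standalone, not Trivy-specific, and using a hypothetical input shape), a minimal Rego rule over a JSON document might look like this:

```rego
package example

# Hypothetical input: {"users": [{"name": "alice", "role": "admin", "mfa_enabled": false}]}
# Deny any admin user without multi-factor authentication enabled.
deny[msg] {
	user := input.users[_]
	user.role == "admin"
	not user.mfa_enabled
	msg := sprintf("admin %q has no MFA enabled", [user.name])
}
```

Each expression in the rule body is an assertion; the rule only produces a denial message when all of them hold for some user in the input.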
Custom Policies
Trivy has been able to scan an AWS account for some time now, using the `trivy aws` command. This applies a collection of built-in policies, documented on the AVD website. These policies are designed to find common misconfigurations in AWS accounts and ensure best practice is followed. However, they are not designed to be a complete security solution - they are a starting point. It is often desirable to embellish these policies with your own custom rules, to ensure your organisation’s specific security requirements are met.
Let’s look at an example scenario:
The security department at Unreliable Systems Ltd. has decided that S3 buckets should not have a name containing `13`, as it is unlucky, and therefore more likely to lead to data loss.
This could be achieved with the following custom policy:
```rego
# METADATA
# title: No unlucky buckets
# description: Buckets should not be named with "13" in the name
# scope: package
# schemas:
# - input: schema["input"]
# custom:
#   severity: CRITICAL
#   input:
#     selector:
#     - type: cloud
package user.unreliable.rule1

deny[res] {
	bucket := input.aws.s3.buckets[_]
	contains(bucket.name.value, "13")
	res := result.new("unlucky bucket detected", bucket.name)
}
```
Let’s break this down. The first few lines are metadata YAML, used to describe the policy and provide some context. The `scope` field is used to attach the metadata to this package. Right now, each rule should live in its own package, and thus `scope` should always be set to `package`.
The `schemas` field is used to define the input schema for the policy. This is used to validate the rule’s use of the input document. This means if you were to accidentally reference a property that doesn’t exist within the input schema, Trivy would detect it and warn you instead of passing the rule. Therefore it’s always a good idea to define the input schema for your rules (using `schema["input"]`).
The `custom.severity` field is used to define the severity of the rule. The severity can be one of `CRITICAL`, `HIGH`, `MEDIUM`, `LOW` or `UNKNOWN`. The default severity is `UNKNOWN`.
The `custom.input` field is used to define the input selector for the rule. This determines which resources the rule should be applied to. The input `selector` is a list of objects, each with a `type` field. The `type` field can be one of `cloud`, `kubernetes`, `rbac`, `dockerfile`, `toml`, `json` or `yaml`. Omitting an input selector means the rule is applied to all input types, which is not usually desirable.
The policy could be applied with the following Trivy command (where `./policies` is a directory where your `*.rego` policies are kept):
```shell
trivy aws --region us-east-1 --policy-path ./policies --policy-namespaces user --service s3
```
The `--policy-namespaces` flag is used to select which policies to apply. Our example above defines its package as `user.unreliable.rule1`, so we can select it using `--policy-namespaces user`. The `--service` flag is used to define which AWS service to check. The rule will run fine without this flag, but supplying it means Trivy will only gather data for this service (since we know the rule only cares about this service), instead of scanning the whole of an AWS account, which would be much slower.
Summary
I hope that’s a good introduction to writing custom policies against your live AWS infrastructure. It becomes especially powerful when you run the exact same Rego policies against the Terraform code used to create said infrastructure. For further reading, I’d recommend checking out the Rego documentation.