Adding custom rules in Datree
What is Datree?
Datree is a CLI tool that helps Kubernetes administrators prevent misconfigurations from reaching production and causing failures. It provides a policy enforcement solution that runs automatic checks for rule violations against Kubernetes manifest files and Helm charts. You can include Datree's policy check as part of your CI/CD pipeline or run it locally before every commit.
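For example, a basic local check looks like this (assuming a manifest file at ./deployment.yaml):
$ datree test ./deployment.yaml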
If you are new to Datree, check out my introductory blog post to get started with the tool.
Getting started with custom rules
Out of the box, Datree offers 34 built-in rules for you to test, spread across various categories such as:
Containers
Workload
CronJob
Security
Networking
Deprecation
In addition to the tool's built-in rules, you can write any rules that you wish and test them against your Kubernetes manifests to check for violations. The custom rule engine is based on JSON Schema, so it supports both YAML and JSON declarative syntax.
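For instance, a schema fragment that restricts `kind` to `Deployment` (a hypothetical fragment, just for illustration) looks like this in YAML:
properties:
  kind:
    enum:
      - Deployment
and like this in JSON:
{ "properties": { "kind": { "enum": ["Deployment"] } } }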
Prerequisites
The policy-as-code feature must be turned on to use custom rules. Learn more about it in my blog post.
How to write a custom rule
After you have downloaded the `policies.yaml` file, it will look something like this:
apiVersion: v1
customRules: null
policies:
  - name: My_Policy
    rules:
      # - identifier: CONTAINERS_MISSING_IMAGE_VALUE_VERSION
      #   messageOnFailure: Incorrect value for key `image` - specify an image version to avoid unpleasant "version surprises" in the future
      # - identifier: CONTAINERS_MISSING_MEMORY_REQUEST_KEY
      #   messageOnFailure: Missing property object `requests.memory` - value should be within the accepted boundaries recommended by the organization
      - identifier: CONTAINERS_MISSING_CPU_REQUEST_KEY
        messageOnFailure: Missing property object `requests.cpu` - value should be within the accepted boundaries recommended by the organization
      # - identifier: CONTAINERS_MISSING_MEMORY_LIMIT_KEY
      #   messageOnFailure: Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization
      # - identifier: CONTAINERS_MISSING_CPU_LIMIT_KEY
      #   messageOnFailure: Missing property object `limits.cpu` - value should be within the accepted boundaries recommended by the organization
      # - identifier: INGRESS_INCORRECT_HOST_VALUE_PERMISSIVE
      #   messageOnFailure: Incorrect value for key `host` - specify host instead of using a wildcard character ("*")
      # - identifier: SERVICE_INCORRECT_TYPE_VALUE_NODEPORT
      #   messageOnFailure: Incorrect value for key `type` - `NodePort` will open a port on all nodes where it can be reached by the network external to the cluster
      # - identifier: CRONJOB_INVALID_SCHEDULE_VALUE
      #   messageOnFailure: 'Incorrect value for key `schedule` - the (cron) schedule expression is not valid and, therefore, will not work as expected'
      # - identifier: WORKLOAD_INVALID_LABELS_VALUE
      #   messageOnFailure: Incorrect value for key(s) under `labels` - the value's syntax is not valid so the Kubernetes engine will not accept it
      # - identifier: WORKLOAD_INCORRECT_RESTARTPOLICY_VALUE_ALWAYS
      #   messageOnFailure: Incorrect value for key `restartPolicy` - any other value than `Always` is not supported by this resource
      # - identifier: HPA_MISSING_MINREPLICAS_KEY
      #   messageOnFailure: Missing property object `minReplicas` - the value should be within the accepted boundaries recommended by the organization
      # - identifier: HPA_MISSING_MAXREPLICAS_KEY
      #   messageOnFailure: Missing property object `maxReplicas` - the value should be within the accepted boundaries recommended by the organization
Here you can see a list of rules under the `policies` key. In your `policies.yaml` file, you can add a new custom rule by creating a `customRules` key that consists of the following:
`identifier`: a unique ID that is associated with your policy
`name`: this will be shown as a title when the rule fails
`defaultMessageOnFailure`: this message will be used when the `messageOnFailure` key in the `policies.yaml` file for a particular rule is empty
`schema`: this is where the rule logic resides, in the form of JSON or YAML
This rule can now be added as a new entry in your `policies` key, with its definition living under `customRules`, as shown in the skeleton below.
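Putting these together, a custom rule entry follows this shape (a minimal skeleton; the identifier, texts, and empty schema are placeholders):
customRules:
  - identifier: CUSTOM_RULE_ID # placeholder - referenced from the `policies` key
    name: Human-readable rule title # shown when the rule fails
    defaultMessageOnFailure: Fallback failure message
    schema: {} # your JSON Schema rule logic goes here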
Examples
Example 1
Let's say you want to create a custom rule that only allows between 1 and 5 running replicas. As mentioned above, we are going to create a custom rule with the four keys, listed under the `customRules` key as below:
customRules:
  - identifier: CUSTOM_POLICY_REPLICAS
    name: Make sure correct number of replicas are running
    defaultMessageOnFailure: Make sure running replicas are between 1 - 5
    schema:
      if:
        properties:
          kind:
            enum:
              - Deployment
      then:
        properties:
          spec:
            properties:
              replicas:
                minimum: 1
                maximum: 5
            required:
              - replicas
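The `if`/`then` keys are standard JSON Schema conditionals: the replica bounds in `then` are enforced only on resources whose `kind` matches the `if` condition, so non-Deployment resources pass automatically. For instance, a minimal Deployment like this hypothetical one would satisfy the rule:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replicas-demo # hypothetical name, for illustration only
spec:
  replicas: 3 # within the allowed 1-5 range
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.23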
We now need to reference this rule in our policy. This can be done simply by:
apiVersion: v1
policies:
  - name: Kunal
    isDefault: true
    rules:
      - identifier: CUSTOM_POLICY_REPLICAS
        messageOnFailure: Incorrect number of replicas # this message will now be used
customRules:
  - identifier: CUSTOM_POLICY_REPLICAS
    name: Make sure correct number of replicas are running
    defaultMessageOnFailure: Make sure running replicas are between 1 - 5
    schema:
      if:
        properties:
          kind:
            enum:
              - Deployment
      then:
        properties:
          spec:
            properties:
              replicas:
                minimum: 1
                maximum: 5
            required:
              - replicas
Notice how the structure is similar to that of the built-in rules inside the policies. The only difference is that now we are using our own rule and failure message.
Publish your policy with the custom rule
$ datree publish policies.yaml
Published successfully
Make changes in your test file
$ vi ~/.datree/k8s-demo.yaml
This is what my configuration file looks like. As you can see, I have set the number of replicas to 6, which violates the rule.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-site
  namespace: test
  labels:
    owner: --
    environment: stage
    app: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      namespace: test
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx:latest
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "64Mi"
              cpu: "500m"
            limits:
              cpu: "500m"
          ports:
            - containerPort: 80
        - name: rss-reader
          image: datree/nginx@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              httpHeaders:
                - name: Custom-Header
                  value: Awesome
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: "64m"
              memory: "128Mi"
            limits:
              memory: "128Mi"
          ports:
            - containerPort: 88
Run the test
$ datree test ~/.datree/k8s-demo.yaml -p Kunal
>> File: ../.datree/k8s-demo.yaml
[V] YAML validation
[V] Kubernetes schema validation
[X] Policy check
❌ Make sure correct number of replicas are running [1 occurrence]
— metadata.name: rss-site (kind: Deployment)
💡 Incorrect number of replicas
(Summary)
- Passing YAML validation: 1/1
- Passing Kubernetes (1.18.0) schema validation: 1/1
- Passing policy check: 0/1
+---------------------------------+----------------------------------------------------------+
| Enabled rules in policy “Kunal” | 1 |
| Configs tested against policy | 1 |
| Total rules evaluated | 1 |
| Total rules failed | 1 |
| Total rules passed | 0 |
| See all rules in policy | https://app.datree.io/login?cliId=4uptzi3mQxNDwaUon7Q4qB |
+---------------------------------+----------------------------------------------------------+
As you can see, the custom rule works!
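If you set `replicas` back to a value between 1 and 5 (say, 3) and re-run the same command, the policy check should pass:
$ datree test ~/.datree/k8s-demo.yaml -p Kunal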
You can also filter your rules in the Datree dashboard. Similarly, the test run can be seen in the history section of your dashboard.
NOTE: You cannot make changes directly in the dashboard while the policy-as-code option is turned on.
Example 2
Let's take a look at another use case. Imagine you want to keep the memory limit of your Kubernetes configurations within the range of 256Mi - 512Mi. You can create a custom rule for this resource constraint as follows:
customRules:
  - identifier: CUSTOM_MEMORY_LIMIT
    name: Make sure correct memory limit is configured
    defaultMessageOnFailure: Memory limit should be between 256Mi - 512Mi
    schema:
      properties:
        spec:
          properties:
            containers:
              items:
                properties:
                  resources:
                    properties:
                      limits:
                        properties:
                          memory:
                            resourceMinimum: 256Mi
                            resourceMaximum: 512Mi
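Note that `resourceMinimum` and `resourceMaximum` are Datree's custom JSON Schema keys for comparing Kubernetes resource quantities such as `512Mi` or `500m`, which the standard `minimum`/`maximum` keywords (plain numbers only) cannot handle.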
Adding the rule to the policy would look something like this:
apiVersion: v1
policies:
  - name: Kunal
    isDefault: true
    rules:
      - identifier: CUSTOM_MEMORY_LIMIT
        messageOnFailure: ""
customRules:
  - identifier: CUSTOM_MEMORY_LIMIT
    name: Make sure correct memory limit is configured
    defaultMessageOnFailure: Memory limit should be between 256Mi - 512Mi
    schema:
      properties:
        spec:
          properties:
            containers:
              items:
                properties:
                  resources:
                    properties:
                      limits:
                        properties:
                          memory:
                            resourceMinimum: 256Mi
                            resourceMaximum: 512Mi
Publish your policy with the custom rule
$ datree publish policies.yaml
Published successfully
Creating a test file
apiVersion: v1
kind: Pod
metadata:
  name: kunals-pod
spec:
  containers:
    - name: cpu
      image: nginx
      resources:
        requests:
          memory: "120Mi"
          cpu: "340m"
        limits:
          memory: "800Mi"
          cpu: "400m"
Run the test
$ datree test pod.yaml -p Kunal
>> File: pod.yaml
[V] YAML validation
[V] Kubernetes schema validation
[X] Policy check
❌ Make sure correct memory limit is configured [1 occurrence]
— metadata.name: kunals-pod (kind: Pod)
💡 Memory limit should be between 256Mi - 512Mi
(Summary)
- Passing YAML validation: 1/1
- Passing Kubernetes (1.18.0) schema validation: 1/1
- Passing policy check: 0/1
+---------------------------------+----------------------------------------------------------+
| Enabled rules in policy “Kunal” | 1 |
| Configs tested against policy | 1 |
| Total rules evaluated | 1 |
| Total rules failed | 1 |
| Total rules passed | 0 |
| See all rules in policy | https://app.datree.io/login?cliId=4uptzi3mQxNDwaUon7Q4qB |
+---------------------------------+----------------------------------------------------------+
As you can see, the custom test fails because the memory limit in the configuration file was set outside the specified bounds.
NOTE: Notice that Datree used the message in `defaultMessageOnFailure`, since `messageOnFailure` was left empty.
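To make the check pass, bring the limit within bounds. For example, the resources section of the pod above could be updated to:
resources:
  requests:
    memory: "120Mi"
    cpu: "340m"
  limits:
    memory: "512Mi" # within the 256Mi - 512Mi range, so the rule now passes
    cpu: "400m"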