
Running

The simplest way to run preq with the latest community CREs is via standard input. CREs are updated automatically as new releases become available.

First try these examples

Once you've installed preq, let's give it a try on a few examples. This will help you quickly see how things work before pointing it at your data.

Try the following command:

curl -s https://docs.prequel.dev/demo/application.log | preq

You should see that CRE-2024-0007 was detected:

Parsing rules     done! [3 rules in 3ms; 433 rules/s]
Problems detected done! [1 in 7ms; 144/s]
Reading stdin     done! [208.64KB in 4ms; 53.01MB/s]
Matching lines    done! [1.01K lines in 4ms; 275.29K lines/s]
CRE-2024-0007 critical [2 hits @ 2025-03-11T10:00:19-04:00]
Tip

Run with preq -o - for a detailed CRE report.

Now check out these additional examples

Now try your data

Congrats, you're ready to try preq on your own data! Start by piping data to preq via stdin, or use the kubectl plugin to read logs directly from a pod. For example:

kubectl preq pg17-postgresql-0
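The stdin form works the same way as the demo above: pipe any log file into preq. This is a sketch; the log path is illustrative and preq is assumed to be installed.

```shell
# Pipe a local log file to preq via stdin (the path is illustrative)
cat /var/log/rabbitmq.log | preq
```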

Once you know which data sources you want to regularly test, generate a data source template and set up a scheduled job.

CRE reports

preq prints CRE detections to standard out and records details for each detection in a JSON report. The concise standard-out summary lists the CRE ID, its severity, the number of hits in the data, and the first observed timestamp. Check out the CRE schema for more details.

The report provides additional context on the problem, the impact, its mitigation, and references. It also records the specific matches in the data by the CRE rule with their timestamps.

preq-report.json
[
  {
    "cre": {
      "id": "CRE-2024-0007",
      "title": "RabbitMQ Mnesia overloaded recovering persistent queues",
      "category": "message-queue-problems",
      "tags": [
        "cre-2024-0007",
        "known-problem",
        "rabbitmq"
      ],
      "author": "Prequel",
      "description": "- The RabbitMQ cluster is processing a large number of persistent mirrored queues at boot. \n",
      "impact": "- RabbitMQ is unable to process any new messages and can cause outages in consumers and producers.\n",
      "cause": "- The Erlang process, Mnesia, is overloaded while recovering persistent queues on boot. \n",
      "mitigation": "- Adjusting mirroring policies to limit the number of mirrored queues\n- Remove high-availability policies from queues\n- Add additional CPU resources and restart the RabbitMQ cluster\n- Use [lazy queues](https://www.rabbitmq.com/docs/lazy-queues) to avoid incurring the costs of writing data to disk \n",
      "references": [
        "https://groups.google.com/g/rabbitmq-users/c/ekV9tTBRZms/m/1EXw-ruuBQAJ"
      ],
      "applications": [
        {
          "name": "rabbitmq",
          "version": "3.9.x"
        }
      ]
    },
    "hits": [
      {
        "timestamp": "2025-03-11T09:00:19-05:00",
        "entry": "2025-03-11 14:00:19.421865+00:00 [erro] \u003c0.229.0\u003e Discarding message {'$gen_cast',{force_event_refresh,#Ref\u003c0.449530684.1179910147.46753\u003e}} from \u003c0.229.0\u003e to \u003c0.3159.0\u003e in an old incarnation (1741605434) of this node (1741701615) \u003cA\u003e"
      },
      {
        "timestamp": "2025-03-11T09:00:22-05:00",
        "entry": "2025-03-11 14:00:20.144956+00:00 [warn] \u003c0.247.0\u003e Mnesia('rabbit@rabbitmq-0.svc.cluster.local'): ** WARNING ** Mnesia is overloaded: {dump_log,write_threshold}"
      },
      {
        "timestamp": "2025-03-11T09:00:20-05:00",
        "entry": "2025-03-11 14:00:19.421872+00:00 [erro] \u003c0.229.0\u003e Discarding message {'$gen_cast',{force_event_refresh,#Ref\u003c0.449530684.1179910147.46753\u003e}} from \u003c0.229.0\u003e to \u003c0.3156.0\u003e in an old incarnation (1741605434) of this node (1741701615)"
      },
      {
        "timestamp": "2025-03-11T09:00:23-05:00",
        "entry": "2025-03-11 14:00:20.177194+00:00 [warn] \u003c0.247.0\u003e Mnesia('rabbit@rabbitmq-0.svc.cluster.local'): ** WARNING ** Mnesia is overloaded: {dump_log,write_threshold}"
      }
    ],
    "id": "CRE-2024-0007",
    "rule_hash": "",
    "rule_id": "5UD1RZxGC5LJQnVpAkV11A",
    "timestamp": "2025-03-11T09:00:19-05:00"
  }
]
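Because the report is plain JSON, it is easy to post-process. The sketch below embeds a miniature sample report (so it is self-contained) and assumes python3 is available; a real report would come from a preq run.

```shell
# Write a minimal sample report matching the structure shown above
cat > preq-report.json <<'EOF'
[{"cre": {"id": "CRE-2024-0007"}, "hits": [{}, {}], "timestamp": "2025-03-11T09:00:19-05:00"}]
EOF

# Summarize each detection: CRE ID, hit count, first-seen timestamp
python3 - <<'EOF'
import json
with open("preq-report.json") as f:
    for det in json.load(f):
        print(f"{det['cre']['id']}: {len(det['hits'])} hits, first at {det['timestamp']}")
EOF
```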

Custom report name

The report is saved to preq-report-<timestamp-epoch>.json unless -o is used to specify a different name.

cat /var/log/rabbitmq.log | preq -o myreport.json 

Example output

Parsing rules     done! [1 rules in 1ms; 497 rules/s]
Problems detected done! [1 in 2ms; 494/s]
Reading stdin     done! [2.88KB in 1ms; 1.97MB/s]
Matching lines    done! [14 lines in 1ms; 9.57K lines/s]
CRE-2024-0007 critical [2 hits @ 2025-03-11T09:00:19-05:00]

Wrote report to myreport.json

Skip generating a report

Use -o "" to avoid generating a report file. This is useful during development.

cat /var/log/rabbitmq.log | preq -o ""

Example output

Parsing rules     done! [1 rules in 1ms; 497 rules/s]
Problems detected done! [1 in 2ms; 494/s]
Reading stdin     done! [2.88KB in 1ms; 1.97MB/s]
Matching lines    done! [14 lines in 1ms; 9.57K lines/s]
CRE-2024-0007 critical [2 hits @ 2025-03-11T09:00:19-05:00]

Send report to standard out

Use -o "-" to send the report to standard out instead of writing it to a file.

cat ./examples/02-example.log | preq -r ./examples/02-set-multiple-example-good-window.yaml -d -o - 

Example output

Parsing rules     done! [1 rules in 1ms; 923 rules/s]
Problems detected done! [1 in 1ms; 915/s]
Reading stdin     done! [822B in 0s; 4.92MB/s]
set-example-2 critical [1 hits @ 2019-02-05T06:07:39-06:00]
Matching lines    done! [10 lines in 0s; 57.80K lines/s]
[
  {
    "cre": {
      "id": "set-example-2"
    },
    "hits": [
      {
        "timestamp": "2019-02-05T06:07:39-06:00",
        "entry": "2019/02/05 12:07:39 [emerg] 1655#1655: bind() to test"
      },
      {
        "timestamp": "2019-02-05T06:07:38-06:00",
        "entry": "2019/02/05 12:07:38 [emerg] 1655#1655: bind() to foo bar"
      },
      {
        "timestamp": "2019-02-05T06:07:43-06:00",
        "entry": "2019/02/05 12:07:43 [emerg] 1655#1655: still could not bind() to baaaz"
      }
    ],
    "id": "set-example-2",
    "rule_hash": "",
    "rule_id": "",
    "timestamp": "2019-02-05T06:07:39-06:00"
  }
]

Silent mode

Use -q to stop printing progress and CRE summaries to standard out.

cat ./examples/02-example.log | preq -r ./examples/02-set-multiple-example-good-window.yaml -d -q -o - 

Example output

[
  {
    "cre": {
      "id": "set-example-2"
    },
    "hits": [
      {
        "timestamp": "2019-02-05T06:07:39-06:00",
        "entry": "2019/02/05 12:07:39 [emerg] 1655#1655: bind() to test"
      },
      {
        "timestamp": "2019-02-05T06:07:38-06:00",
        "entry": "2019/02/05 12:07:38 [emerg] 1655#1655: bind() to foo bar"
      },
      {
        "timestamp": "2019-02-05T06:07:43-06:00",
        "entry": "2019/02/05 12:07:43 [emerg] 1655#1655: still could not bind() to baaaz"
      }
    ],
    "id": "set-example-2",
    "rule_hash": "",
    "rule_id": "",
    "timestamp": "2019-02-05T06:07:39-06:00"
  }
]

Debug logs

Use -l <LEVEL> to print debug logs at the info, debug, trace, error, or warn level. Logs are sent to standard error.

cat /var/log/rabbitmq.log | preq -r ~/rule.yaml -l error

Example output

Apr  2 23:25:04.512448 ERR engine.go:207 > Duplicate rule hash id. Aborting... id=5UD1RZxGC5LJQnVpAkV11A
Apr  2 23:25:04.512531 ERR engine.go:463 > Failed to load rules error="duplicate rule hash id=5UD1RZxGC5LJQnVpAkV11A cre=CRE-2024-007"
Apr  2 23:25:04.512555 ERR engine.go:491 > Failed to compile rules error="duplicate rule hash id=5UD1RZxGC5LJQnVpAkV11A cre=CRE-2024-007"
Apr  2 23:25:04.512569 ERR preq.go:231 > Failed to load rules error="duplicate rule hash id=5UD1RZxGC5LJQnVpAkV11A cre=CRE-2024-007"
Rules error: duplicate rule hash id=5UD1RZxGC5LJQnVpAkV11A cre=CRE-2024-007

Custom rules

A key feature of preq is receiving automatic updates of the latest CRE rules from the community. You can also add custom rules.

Use -r to run an additional rule document alongside the community CRE rules. Ensure the rules do not duplicate any existing IDs (CRE ID, rule ID, or rule hash).

cat /var/log/rabbitmq.log | preq -r ~/my-new-rules.yaml

Example output

Parsing rules     done! [1 rules in 0s; 649 rules/s]
Problems detected done! [1 in 1ms; 643/s]
Reading stdin     done! [2.88KB in 0s; 2.50MB/s]
Matching lines    done! [14 lines in 0s; 12.20K lines/s]
CRE-2024-0007 critical [2 hits @ 2025-03-11T09:00:19-05:00]

Wrote report to preq-report-1743654939.json

Use -d to disable running community CREs while developing a new rule.
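While iterating on a new rule, these flags combine into a tight development loop. A sketch, with illustrative file names; the flags (-r, -d, -o) are all described in this guide.

```shell
# Run only the rule under development:
#   -r loads the rule file, -d disables the community CREs,
#   -o "" skips writing a report file on each iteration
cat ./test-data.log | preq -r ./my-new-rule.yaml -d -o ""
```

Swap -o "" for -o - when you want to inspect the full report for a run.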

Accept updates

Use -y to avoid interactive input prompts when new CRE or preq updates are available for download.

cat /var/log/rabbitmq.log | preq -r ~/rule.yaml -d -y

Example output

package name: preq-public-rules.0.3.5.7eda0f45.yaml.gz
Downloading update ... done! [4.19KB in 0s; 37.77MB/s]
package name: preq-public-rules.0.3.5.7eda0f45.yaml.gz.sha256
package name: preq-public-rules.0.3.5.7eda0f45.yaml.gz.sig
ECDSA signature and sha256 hash verified
Parsing rules     done! [1 rules in 1ms; 713 rules/s]
Problems detected done! [1 in 1ms; 706/s]
Reading stdin     done! [2.88KB in 0s; 3.42MB/s]
Matching lines    done! [14 lines in 0s; 16.75K lines/s]
CRE-2024-0007 critical [2 hits @ 2025-03-11T09:00:19-05:00]

Wrote report to preq-report-1743655171.json

Generate data source template

Use -g to generate a data source template from your installed CRE rules package.

preq -g

Example output

Wrote data source template to data-sources-0.3.12.yaml

Edit the template to point the data sources to the locations of the logs on your system. See Data Sources for more information.

Use a data source template

Use -s to provide a data sources configuration file.

preq -s ./examples/40-sources.yaml 

Example output

Parsing rules          done! [3 rules in 1ms; 14 rules/s]
Problems detected      done! [0 in 1m24.687s; 0/s]
Reading my-gke-metrics done! [28.01GB in 1m24.685s; 330.79MB/s]
Matching lines         done! [114.42M lines in 1m24.685s; 1.35M lines/s]

Wrote report to preq-report-1743656723.json

Generate a Kubernetes cronjob

To generate a Kubernetes cronjob to run preq on a regular interval against specific pods, deployments, services, jobs, or configmaps:

preq -j

Example output:

Cronjob template written to cronjob.yaml

Combine with -o - to print the cronjob template to stdout:

preq -j -o -

Example output:

# ---------------------------------------------------------------------------
# preq cronjob template
#
# PRE-RUN Create/refresh the ConfigMap that the CronJob expects:
#
# Option 1: Use default latest rules with a Slack notification webhook
#
# kubectl create configmap preq-conf \
#   --from-file=config.yaml=/home/tony/.preq/config.yaml \
#   --from-file=.ruletoken=/home/tony/.preq/.ruletoken \
#   --from-file=prequel-public-cre-rules.0.3.12.e73806d4.yaml.gz=/home/tony/.preq/prequel-public-cre-rules.0.3.12.e73806d4.yaml.gz \
#   --dry-run=client -o yaml | kubectl apply -f -
#
# The --dry-run/apply pattern lets you update the ConfigMap idempotently.
#
# These configuration files are automatically created by preq the first time it is executed locally by the kubectl client.
#
# NOTE: This template assumes the config.yaml file is configured to use a Slack notification webhook. Visit
# https://docs.prequel.dev/configuration to learn how to modify the configuration file to add a notification webhook (e.g. Slack).
#
# notification:
#   type: slack
#   webhook: https://hooks.slack.com/services/.....
#
# Option 2: Use custom rules with a Slack notification webhook
#
# To add custom rules to this job, update the config.yaml file to add the path to your custom rules file where it will be mounted
# in the cronjob filesystem.
#
# rules:
#   paths:
#     - /.preq/custom-rules.yaml
#
# Then create the configmap with the following command:
#
# kubectl create configmap preq-conf \
#   --from-file=config.yaml=/home/tony/.preq/config.yaml \
#   --from-file=.ruletoken=/home/tony/.preq/.ruletoken \
#   --from-file=prequel-public-cre-rules.0.3.12.e73806d4.yaml.gz=/home/tony/.preq/prequel-public-cre-rules.0.3.12.e73806d4.yaml.gz \
#   --from-file=custom-rules.yaml=/local/path/to/custom-rules.yaml \
#   --dry-run=client -o yaml | kubectl apply -f -
#
# IMPORTANT:
#
# 1. Uncomment the command in the job below to add a POD to monitor. Use labels to select the POD for a service.
# 2. Update the schedule to run at the frequency you want. This runs every 10 minutes by default.
# 3. Change the -o "preq-cronjob-<POD>: " output prefix to the name of the cronjob or how you want to identify these notifications in Slack.
#
# ---------------------------------------------------------------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: preq
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: preq
rules:
  - apiGroups: ['']
    resources: ['pods', 'pods/log']
    verbs: ['get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: preq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: preq
subjects:
  - kind: ServiceAccount
    name: preq
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: preq-cronjob
spec:
  schedule: "*/10 * * * *"  # every 10 minutes
  concurrencyPolicy: Forbid # don't start a new run until the prior run finishes
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          containers:
            - name: preq-cronjob
              image: prequeldev/kubectl-krew-preq:latest
              command:
                - /bin/sh
                - -c
                - |
                  ############
                  # IMPORTANT: Uncomment the command in the job below
                  #
                  # * If you want to monitor a pod using labels to select the POD for a service, use the following commands:
                  # POD=$(kubectl -n default get pods -l app.kubernetes.io/instance=<LABEL> -o jsonpath='{.items[0].metadata.name}')
                  # kubectl preq "$POD" -y -o "preq-cronjob-<POD>: "
                  #
                  # * If you want to monitor pods in a deployment, use the following command:
                  # kubectl preq deployment/<DEPLOYMENT> -y -o "preq-cronjob-<DEPLOYMENT>: "
                  #
                  # * If you want to monitor pods in a job, use the following command:
                  # kubectl preq job/<JOB> -y -o "preq-cronjob-<JOB>: "
                  #
                  # * If you want to monitor pods in a service, use the following command:
                  # kubectl preq service/<SERVICE> -y -o "preq-cronjob-<SERVICE>: "
              volumeMounts:
                - name: preq-conf
                  mountPath: /.preq
                  readOnly: true
          restartPolicy: Never
          volumes:
            - name: preq-conf
              configMap:
                name: preq-conf
          serviceAccountName: preq
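Putting the pieces together, a minimal deployment loop might look like the sketch below; file names follow the template above, and the ConfigMap contents come from the template's PRE-RUN comments.

```shell
# Generate the cronjob template (writes cronjob.yaml by default)
preq -j

# Edit cronjob.yaml: uncomment one of the kubectl preq commands and
# adjust the schedule. Create the preq-conf ConfigMap as shown in the
# template's PRE-RUN comments, then deploy the job.
kubectl apply -f cronjob.yaml
```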

Command line options reference

preq -h

Example output

Usage: preq-linux-amd64 [flags]

Flags:
  -h, --help             Show context-sensitive help.
  -d, --disabled         Do not run community CREs
  -g, --generate         Generate data sources template
  -j, --cron             Generate Kubernetes cronjob template
  -l, --level=STRING     Print logs at this level to stderr
  -o, --name=STRING      Output name for reports, data source templates, or notifications
  -q, --quiet            Quiet mode, do not print progress
  -r, --rules=STRING     Path to a CRE rules file
  -s, --source=STRING    Path to a data source Yaml file
  -v, --version          Print version and exit
  -y, --accept-updates   Accept updates to rules or new release