This is a self-contained demo that shows how you can set up a KinD cluster with the following features:
- Single node cluster
- MetalLB for routing requests to your ingress without having to use NodePorts, port forwards, etc.
- Ingress-Nginx with ModSecurity enabled and the default OWASP CRS rules activated
- Elasticsearch and Kibana installed within the cluster
- The OpenTelemetry Collector installed and configured to push the pod and ModSecurity logs into Elasticsearch
I have only tested this on a Linux machine since I don't own a Mac or Windows machine, so there might be slight differences and some things may not work on Mac/Windows. On Windows, use something like Ubuntu on WSL2.
- docker (Docker/Rancher Desktop might work, but I am not sure, since on Mac/Windows the "DOCKER_MACHINE" layer might get in your way)
- helm
- kind
- kubectl
- curl (or some other command line http client like wget or httpie)
Optional, nice to have:
- k9s
- jq
All these tools can be installed through your distro's package manager, or via Homebrew (macOS/Linux).
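To quickly verify that the required tools are present before you start, a small shell check can help (`check_tools` is a hypothetical helper, not part of this repo):

```shell
# Report any of the required CLI tools that are missing from the PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

if check_tools docker helm kind kubectl curl; then
  echo "all required tools found"
fi
```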
The image below shows the major components installed in this demo.
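The cluster is created from `./kind/demo-cluster.yaml`. For reference, a minimal single-node KinD config looks roughly like this (a sketch; the repo's file may carry extra settings such as port mappings):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```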
Create the cluster:

```shell
kind create cluster --config ./kind/demo-cluster.yaml
```

Install MetalLB:

```shell
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
```

Find out your KinD cluster's Docker network IPAM config so MetalLB can use it for its address pool:
```shell
docker network inspect -f '{{ .IPAM.Config }}' kind
```

This should give you output like the following. Take note of the network range:
```
[{fc00:f853:ccd:e793::/64 map[]} {172.18.0.0/16 172.18.0.1 map[]}]
```

Update the file `./loadbalancer/metal-lb-pool.yaml` to reflect your IPAM range and apply the config map:
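For reference, `./loadbalancer/metal-lb-pool.yaml` contains an `IPAddressPool` plus an `L2Advertisement`. A minimal sketch, assuming the `172.18.0.0/16` range from above (pick a small slice that KinD itself won't allocate; the repo file may differ):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.200.200-172.18.200.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - demo-pool
```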
```shell
kubectl apply -f ./loadbalancer/metal-lb-pool.yaml
```

cert-manager is used by the ECK stack to generate certificates for internal communication. Furthermore, a self-signed issuer is created to allow you to expose the ingress resources over HTTPS.
Install cert-manager using:

```shell
helm install \
  cert-manager oci://quay.io/jetstack/charts/cert-manager \
  --version v1.18.2 \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```

And when cert-manager is up and running, install the self-signed cluster issuer using:
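The cluster issuer being applied in the next command is a tiny resource; a sketch of what `./cert-mananger/cluster-issuer.yaml` likely contains (the name in the repo file may differ):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
```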
```shell
kubectl apply -f ./cert-mananger/cluster-issuer.yaml
```

Ingress-Nginx is installed with some extra (override) values to enable ModSecurity and the default OWASP CRS rules, as per the ingress-nginx documentation:
```yaml
controller:
  config:
    # Enables ModSecurity
    enable-modsecurity: "true"
    # Update ModSecurity config and rules
    modsecurity-snippet: |
      # This enables the default OWASP Core Rule Set
      Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
      # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)
      SecRuleEngine On
      # Enable scanning of the request body
      SecRequestBodyAccess On
      # Enable XML and JSON parsing
      SecRule REQUEST_HEADERS:Content-Type "(?:application(?:/soap\+|/)|text/)xml" \
        "id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"
      SecRule REQUEST_HEADERS:Content-Type "application/json" \
        "id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON"
      # Reject if larger (we could also let it pass with ProcessPartial)
      SecRequestBodyLimitAction Reject
      # Send ModSecurity audit logs to stdout (only for rejected requests)
      SecAuditLog /var/log/nginx/modsecurity_audit.log
      # Format the logs as JSON
      SecAuditLogFormat JSON
      # Could be On/Off/RelevantOnly
      SecAuditEngine RelevantOnly
```
Add the ingress-nginx repo:

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update
```

And install the chart with the ModSecurity plugin enabled and the OWASP CRS rules active:
```shell
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --values ./ingress-nginx/helm-values.yaml
```

Deploy the demo echo app:

```shell
kubectl apply -f ./demo-app/echo-deployment.yaml
```

To get the load balancer IP address of the ingress controller:
```shell
LB_IP=$(kubectl -n ingress-nginx get svc ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
```

Next, generate an ingress definition for the demo app and apply it directly:
```shell
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: echo-${LB_IP}.nip.io
    http:
      paths:
      - backend:
          service:
            name: echoserver
            port:
              number: 80
        path: /
        pathType: Prefix
EOF
```

Validate the app is reachable by calling the following (the `| jq` part is optional, but gives you nicely readable output):
```shell
curl echo-${LB_IP}.nip.io | jq
```

And the following request should get you in trouble (HTTP 403 Forbidden):
```shell
curl -vv "echo-${LB_IP}.nip.io/?param=\"><script>alert(1);</script>"
```

Check the ingress controller logs:

```shell
kubectl -n ingress-nginx logs deployments/ingress-nginx-controller
```

There should be something like this at the end of the logs:
```
2025/08/21 17:09:24 [error] 1193#1193: *9743 [client 10.244.0.1] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:BLOCKING_INBOUND_ANOMALY_SCORE' (Value: `20' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "222"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 20)"] [data ""] [severity "0"] [ver "OWASP_CRS/4.15.0"] [maturity "0"] [accuracy "0"] [tag "anomaly-evaluation"] [tag "OWASP_CRS"] [hostname "echo-172.18.200.200.nip.io"] [uri "/"] [unique_id "175579616486.476418"] [ref ""], client: 10.244.0.1, server: echo-172.18.200.200.nip.io, request: "GET /?param="><script>alert(1);</script> HTTP/1.1", host: "echo-172.18.200.200.nip.io"
10.244.0.1 - - [21/Aug/2025:17:09:24 +0000] "GET /?param=\x22><script>alert(1);</script> HTTP/1.1" 403 146 "-" "curl/8.14.1" 125 0.000 [default-echoserver-80] [] - - - - 99aa89aadf01e5a1c82ae586598758c3
```
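To pull just the ModSecurity denials out of the controller logs, a simple grep works. Here is a sketch run against a canned sample line; in practice you would pipe the `kubectl -n ingress-nginx logs` output through the same filter:

```shell
# A trimmed sample of the controller's error-log line for a denied request.
line='ModSecurity: Access denied with code 403 (phase 2). [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 20)"]'

# Keep only the rule id and message tags from matching lines.
echo "$line" | grep 'Access denied' | grep -oE '\[(id|msg) "[^"]*"\]'
# prints:
# [id "949110"]
# [msg "Inbound Anomaly Score Exceeded (Total Score: 20)"]
```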
Before we can send any data to Elasticsearch, we first need to deploy it; we will deploy it on our demo cluster. Since we're running a KinD (K8s in Docker) cluster, we need to make sure your Docker runtime has enough resources. Please follow the guidelines on the Elasticsearch Docker page.
Add the Elastic helm repo:

```shell
helm repo add elastic https://helm.elastic.co && helm repo update
```

And install the Elastic operator chart using:
```shell
helm upgrade --install eck-operator --namespace elastic-system --create-namespace elastic/eck-operator
```

Next, add the ECK quickstart chart, which installs Elasticsearch and Kibana in a single-node setup with some sane defaults:
```shell
helm upgrade --install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namespace
```

It will take a few minutes before everything is up and running. You can use the `kubectl -n elastic-stack get all` command to see whether all pods are up and ready.
You can get the initial superuser password for logging into the Kibana UI by using:

```shell
KIBANA_SU_PWD=$(kubectl -n elastic-stack get secrets elasticsearch-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d) && echo $KIBANA_SU_PWD
```

If you port-forward the Kibana UI, you should be able to log in with the username `elastic` and the password you just fetched:
```shell
kubectl -n elastic-stack port-forward deployments/es-kb-quickstart-eck-kibana-kb 5601:5601
```

To log in, open your browser at https://localhost:5601/ (and accept the certificate warning).
You also need this password to set up the OpenTelemetry Collector.

Add the OTel helm repo:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts && helm repo update
```

Next, update the `./otel/helm-values.yaml` file on line 10 with the elastic superuser password you fetched earlier.
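For orientation, the line you are updating sits in the exporter section of the values file. A sketch of what the relevant fragment of `./otel/helm-values.yaml` might look like; the field names follow the collector-contrib `elasticsearch` exporter, but the actual file, endpoint, and structure in this repo may differ:

```yaml
config:
  exporters:
    elasticsearch:
      # ECK exposes the cluster on <cluster-name>-es-http; adjust to your setup
      endpoints: ["https://elasticsearch-es-http.elastic-stack.svc:9200"]
      user: elastic
      password: "<elastic superuser password>"   # the value to update
      tls:
        insecure_skip_verify: true
```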
And install the collector:

```shell
helm upgrade --install otel-collector open-telemetry/opentelemetry-collector \
  --namespace log-collection --create-namespace \
  --values ./otel/helm-values.yaml
```

After a few minutes you should see data coming into Elasticsearch.
Hit the echo service a few times with good and bad requests, and you should see them in the logs.
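This step can be scripted. A small sketch follows; `build_url` is a hypothetical helper, and the actual `curl` call is commented out so the snippet also runs without a cluster:

```shell
# Build the demo URL for a given load balancer IP and query payload.
build_url() {
  printf 'http://echo-%s.nip.io/?param=%s' "$1" "$2"
}

LB_IP=${LB_IP:-172.18.200.200}   # fall back to the example address used above

for payload in 'hello' 'world' '"><script>alert(1);</script>'; do
  url=$(build_url "$LB_IP" "$payload")
  echo "requesting: $url"
  # curl -s -o /dev/null -w '%{http_code}\n' "$url"   # run this against a live cluster
done
```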

