105 changes: 105 additions & 0 deletions coco/README.md
@@ -0,0 +1,105 @@
# Proplet on Confidential Containers (CoCo)

This directory contains resources for deploying Proplet on a Kubernetes cluster enabled with Confidential Containers (Kata Containers).

## Prerequisites

* A Kubernetes cluster with [Confidential Containers](https://confidentialcontainers.org/) (Kata Containers) installed.
* `kubectl` configured to access the cluster.
* `docker` for building images (or another OCI builder).
* A default StorageClass for handling ephemeral storage (optional but recommended).

## Cluster Setup (Quick Start)

To set up a local testing environment with Kind and Confidential Containers:

1. **Create a Kind Cluster**:
```bash
kind create cluster --name coco-test --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
EOF
```

2. **Install the CoCo Operator**:
Deploy the Confidential Containers Operator to install Kata Containers and required components.

```bash
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v0.8.0"
```

*Note: Check the [CoCo Operator releases](https://github.com/confidential-containers/operator/releases) for the latest version.*

3. **Wait for Installation**:
   Wait for the Kata runtime classes installed by the operator to become available:

```bash
kubectl get runtimeclass
# Should show 'kata', 'kata-qemu', or 'kata-fc'
```

Ensure all operator pods are running:
```bash
kubectl get pods -n confidential-containers-system
```
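The two checks above can be combined into a single blocking step. A minimal sketch (the `wait_for_coco` helper name, namespace, and 300s timeout are assumptions; adjust for your operator version):

```bash
# Hypothetical helper: block until all operator pods report Ready,
# then list the runtime classes the operator registered.
wait_for_coco() {
  kubectl wait --for=condition=Ready pod --all \
    -n confidential-containers-system --timeout=300s &&
  kubectl get runtimeclass
}
```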

## Deployment

The deployment setup consists of:
* `proplet.yaml`: The main Deployment manifest; uses `runtimeClassName: kata` by default.
* `proplet-config.yaml`: A ConfigMap providing `config.toml` (the SuperMQ configuration) and environment variables.
* `deploy_coco.sh`: A helper script that builds the image and applies the manifests.

### 1. Configuration

1. **Edit `proplet-config.yaml`**:
* Set your `domain_id`, `client_id`, `client_key`, and `channel_id`.
* These values configure Proplet to connect to the SuperMQ message broker.

2. **Edit `proplet.yaml`**:
* Update `PROPLET_MQTT_ADDRESS` if your MQTT broker is not reachable at the default `tcp://localhost:1883`.
* Update `PROPLET_INSTANCE_ID` to a unique name for this instance.
* Ensure `runtimeClassName` matches your cluster's CoCo runtime class (e.g., `kata-qemu`, `kata-fc`, or just `kata`).
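If your cluster exposes a different runtime class, the manifest can be rewritten before applying, mirroring what `deploy_coco.sh` does. A sketch on a stub file (the `RUNTIME_CLASS` value and `/tmp` path are illustration-only assumptions; note that `sed -i` here is the GNU form, macOS requires `sed -i ''`):

```bash
# Rewrite the runtime class in a manifest; demonstrated on a stub file.
RUNTIME_CLASS="kata-qemu"
printf 'spec:\n  runtimeClassName: kata\n' > /tmp/proplet-demo.yaml
sed -i "s/runtimeClassName: kata/runtimeClassName: ${RUNTIME_CLASS}/" /tmp/proplet-demo.yaml
grep "runtimeClassName:" /tmp/proplet-demo.yaml
```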

### 2. Deploy

Use the helper script to build and deploy:

```bash
./deploy_coco.sh
```

Or manually:

```bash
# 1. Build image (from repository root)
cd ..
make docker_proplet

# 2. Apply manifests (from coco directory)
cd coco
kubectl apply -f proplet-config.yaml
kubectl apply -f proplet.yaml
```
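After either path, it is worth confirming both the rollout and that the pod was actually admitted under the Kata runtime. A sketch (the `check_proplet` helper name and 120s timeout are assumptions):

```bash
# Hypothetical verification helper: wait for the rollout, then print
# the runtimeClassName the scheduled pod actually carries.
check_proplet() {
  kubectl rollout status deployment/proplet --timeout=120s &&
  kubectl get pod -l app=proplet \
    -o jsonpath='{.items[0].spec.runtimeClassName}{"\n"}'
}
```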

## Attestation Agent

In a CoCo environment, the Attestation Agent (AA) typically runs as a guest component inside the VM.
Proplet is configured to communicate with the AA on `localhost:50002` (standard CoCo port).

To attest the environment, ensure:
1. Your Kubernetes cluster is properly configured for remote attestation (KBS/KBC setup).
2. The Attestation Agent is active in the Guest VM.
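A quick loopback probe from inside the pod can confirm point 2. This is a sketch only: it assumes `bash` (for `/dev/tcp`) and `timeout` exist in the container image, which may not hold for minimal images; substitute `nc -z localhost 50002` if available.

```bash
# Sketch: check that something is listening on the standard AA port
# (localhost:50002) from inside the Proplet pod.
aa_probe() {
  local pod
  pod=$(kubectl get pod -l app=proplet -o jsonpath='{.items[0].metadata.name}')
  kubectl exec "$pod" -- timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/50002' &&
    echo "AA reachable on localhost:50002"
}
```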

## Troubleshooting

**Pod stuck in `ContainerCreating`**:
* Check if Kata runtime is available: `kubectl get runtimeclasses`
* Check Kubelet logs for QEMU/Kata startup errors.

**Proplet fails to connect**:
* Check logs: `kubectl logs -l app=proplet`
* Verify network connectivity to the MQTT broker.
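The checks above can be bundled into one diagnostic pass. A minimal sketch (the `proplet_diag` helper name and `--tail` depth are assumptions):

```bash
# Hypothetical one-shot diagnostics: recent pod events, recent logs,
# and the runtime classes available on the cluster.
proplet_diag() {
  kubectl describe pod -l app=proplet | sed -n '/Events:/,$p'
  kubectl logs -l app=proplet --tail=50
  kubectl get runtimeclasses
}
```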
44 changes: 44 additions & 0 deletions coco/deploy_coco.sh
@@ -0,0 +1,44 @@
#!/bin/bash
# SPDX-License-Identifier: Apache-2.0
# Helper script to deploy Proplet on Confidential Containers (CoCo)

set -e

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
ROOT_DIR=$(dirname "$SCRIPT_DIR")
K8S_DIR="$SCRIPT_DIR"

# Configuration
IMAGE_NAME="ghcr.io/absmach/propeller/proplet"
IMAGE_TAG="latest"
# Define RUNTIME_CLASS with the runtime class you want to use with CoCo (e.g., kata, ccruntime)
RUNTIME_CLASS=${RUNTIME_CLASS:-kata}

echo "=== Proplet CoCo Deployment ==="

# 1. Build the Proplet container image using Makefile
echo "Building Proplet container image..."
cd "$ROOT_DIR"
make docker_proplet

# 2. (Optional) Load the image into Kind if a Kind cluster exists
# (`kind get clusters` exits 0 even with no clusters, so check the output)
if kind get clusters 2>/dev/null | grep -q .; then
    echo "Detected Kind cluster, loading image..."
    kind load docker-image "${IMAGE_NAME}:${IMAGE_TAG}" || echo "Warning: Failed to load image into Kind, continuing..."
fi

# 3. Apply Kubernetes manifests
echo "Applying Kubernetes manifests..."
# Rewrite runtimeClassName if overridden (note: edits proplet.yaml in place)
if [ "$RUNTIME_CLASS" != "kata" ]; then
    echo "Updating runtimeClassName to $RUNTIME_CLASS..."
    sed -i "s/runtimeClassName: kata/runtimeClassName: $RUNTIME_CLASS/g" "$K8S_DIR/proplet.yaml"
fi

kubectl apply -f "$K8S_DIR/proplet-config.yaml"
kubectl apply -f "$K8S_DIR/proplet.yaml"

echo "=== Deployment Submitted ==="
echo "Check status:"
echo " kubectl get pods -l app=proplet"
echo " kubectl logs -l app=proplet"
25 changes: 25 additions & 0 deletions coco/proplet-config.yaml
@@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: proplet-config
data:
  config.toml: |
    # SuperMQ Configuration

    [manager]
    domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
    client_id = "cdaccb11-7209-4fb9-8df1-3c52e9d64284"
    client_key = "507d687d-51f8-4c71-8599-4273a5d75429"
    channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

    [proplet1]
    domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
    client_id = "0deb859f-973d-4e2e-93cf-ec756f4fc3c8"
    client_key = "17c03d05-b55d-4a05-88ec-cadecb2130c4"
    channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

    [proxy]
    domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
    client_id = "0deb859f-973d-4e2e-93cf-ec756f4fc3c8"
    client_key = "17c03d05-b55d-4a05-88ec-cadecb2130c4"
    channel_id = "34a616c3-8817-4995-aade-a383e64766a8"
52 changes: 52 additions & 0 deletions coco/proplet.yaml
@@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proplet
  labels:
    app: proplet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proplet
  template:
    metadata:
      labels:
        app: proplet
    spec:
      runtimeClassName: kata
      containers:
        - name: proplet
          image: proplet:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: PROPLET_LOG_LEVEL
              value: "info"
            - name: PROPLET_INSTANCE_ID
              value: "proplet-k8s-001"
            - name: PROPLET_CONFIG_FILE
              value: "/etc/proplet/config.toml"
            - name: PROPLET_CONFIG_SECTION
              value: "proplet1"
            - name: PROPLET_EXTERNAL_WASM_RUNTIME
              value: "/usr/local/bin/wasmtime"
            - name: PROPLET_MANAGER_K8S_NAMESPACE
              value: "default"
            - name: PROPLET_MQTT_ADDRESS
              value: "tcp://localhost:1883"
            - name: PROPLET_MQTT_TIMEOUT
              value: "30"
            - name: PROPLET_MQTT_QOS
              value: "2"
            - name: PROPLET_LIVELINESS_INTERVAL
              value: "10"
          # In CoCo the Attestation Agent runs in the guest VM (not as a
          # sidecar), so it is reached via localhost when the network
          # namespace is shared, or via a dedicated socket. This assumes
          # standard loopback availability for guest components with Kata 3.x+.
          volumeMounts:
            - name: config-volume
              mountPath: /etc/proplet
      volumes:
        - name: config-volume
          configMap:
            name: proplet-config