
On k3d (Locally)

Install the Agent Manager on a local Kubernetes cluster using k3d — a lightweight tool that runs Kubernetes inside Docker.

Just want it running?

The Quick Start Guide installs everything in a single command using a dev container. Use this page when you want to understand the full setup or need to customize it.

What You Will Get

Agent Manager is a two-layer system installed in two phases:

  • Phase 1 — OpenChoreo (base layer): OpenChoreo is an open-source platform that provides the Kubernetes infrastructure Agent Manager runs on. It consists of four planes: a Control Plane for API and configuration, a Data Plane for running workloads and gateways, a Workflow Plane for builds and CI pipelines, and an Observability Plane for traces, logs, and metrics via OpenSearch.

  • Phase 2 — Agent Manager: The AI agent management platform installed on top of OpenChoreo. It includes the Console (web UI), AMP API (backend), Thunder (identity provider), AI Gateway, PostgreSQL (database), Secrets Extension (OpenBao for runtime secret injection), Traces Observer (trace querying), and Evaluation Engine (automated agent evaluations).

This guide installs both layers on a single-node k3d cluster on your local machine.

info

This setup is for development and exploration. For production deployments, see the Production Considerations section.

Prerequisites

Hardware

| Resource | Minimum |
| --- | --- |
| RAM | 8 GB |
| CPU | 4 cores |
| Disk | ~10 GB free |

Required Tools

| Tool | Version | Purpose |
| --- | --- | --- |
| Docker | v26.0+ | Container runtime |
| kubectl | v1.32+ | Kubernetes CLI |
| Helm | v3.12+ | Kubernetes package manager |
| k3d | v5.8+ | Local Kubernetes clusters |

Verify all tools:

docker --version && kubectl version --client && helm version && k3d version
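If you prefer a scriptable check over eyeballing the output, a small helper can compare installed versions against the minimums in the table. This is an illustrative sketch (version_ge is not part of any of these tools; sort -V needs GNU coreutils or a recent BSD sort):

```shell
#!/usr/bin/env bash
# Illustrative helper: succeeds when $2 is at least version $1.
# Relies on `sort -V` (version sort) being available.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Sample checks against the minimums in the table above:
version_ge 26.0 26.1.4 && echo "docker OK"
version_ge 1.32 1.31.2 || echo "kubectl too old"
```

Swap the sample versions for the real ones, e.g. the output of helm version or k3d version, stripped of the leading v.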
macOS with Colima

If you use Colima instead of Docker Desktop, start it with sufficient resources:

colima start --vm-type=vz --vz-rosetta --cpu 4 --memory 8

Prefix the cluster creation command in Step 1 with K3D_FIX_DNS=0.

Required Ports

The following host ports must be free before installation:

Agent Manager

| Port | Purpose |
| --- | --- |
| 3000 | Console UI |
| 9000 | API |
| 9243 | Internal API |
| 9098 | Traces Observer |
| 8084 | AI Gateway (HTTP) |
| 8243 | AI Gateway (HTTPS) |

OpenChoreo

| Port | Purpose |
| --- | --- |
| 6550 | Kubernetes API |
| 8080 | Control Plane (HTTP) |
| 8443 | Control Plane (HTTPS) |
| 19080 | Data Plane Gateway (HTTP) |
| 19443 | Data Plane Gateway (HTTPS) |
| 10082 | Container Registry (Workflow Plane) |
| 11080 | Observer API |
| 11082 | OpenSearch API |
| 11085 | OpenSearch HTTPS |
| 21893 | OTel Collector |
| 22893 | Observability Gateway (HTTP) |
| 22894 | Observability Gateway (HTTPS) |
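Both port tables can be swept in one pass before installing. A bash sketch using the /dev/tcp pseudo-device (no lsof or netstat required; a successful connect means the port is taken):

```shell
#!/usr/bin/env bash
# Sketch: report any required host port that is already taken.
# port_free uses bash's /dev/tcp pseudo-device; a successful connect
# means something is already listening on the port.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 3000 9000 9243 9098 8084 8243 \
         6550 8080 8443 19080 19443 10082 \
         11080 11082 11085 21893 22893 22894; do
  port_free "$p" || echo "Port $p is already in use"
done
echo "Port check complete"
```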

Phase 1: OpenChoreo Platform

OpenChoreo organises its infrastructure into four planes, each handling a different concern:

  • Control Plane — API server and configuration management for the platform
  • Data Plane — runs deployed workloads and API gateways
  • Workflow Plane — builds and CI pipelines for agent deployments
  • Observability Plane — trace, log, and metrics collection via OpenSearch

This phase installs all four with Agent Manager-specific configuration. Estimated time: ~15-20 minutes.

Step 1: Create k3d Cluster

Create the cluster using the Agent Manager cluster configuration, which includes all required port mappings:

curl -fsSL https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/quick-start/k3d-config.yaml \
| k3d cluster create --config=-

Set the kubectl context:

k3d kubeconfig merge amp-local --kubeconfig-merge-default
kubectl config use-context k3d-amp-local

Generate machine IDs for Fluent Bit log collection:

for NODE in $(k3d node list -o json | grep -o '"name":"[^"]*"' | sed 's/"name":"//;s/"//' | grep "^k3d-amp-local-"); do
  docker exec "${NODE}" sh -c "cat /proc/sys/kernel/random/uuid | tr -d '-' > /etc/machine-id"
done
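If jq is available, the node names can be extracted more robustly than with the grep/sed pipeline. The filter below is demonstrated against a sample of the JSON shape implied by that pipeline (an array of objects with a name field — an assumption about k3d's output):

```shell
# jq equivalent of the grep/sed pipeline above, shown on sample JSON.
echo '[{"name":"k3d-amp-local-server-0"},{"name":"k3d-amp-local-serverlb"},{"name":"other-node"}]' \
  | jq -r '.[].name | select(startswith("k3d-amp-local-"))'

# Against a live cluster, the same filter would be fed from:
#   k3d node list -o json | jq -r '.[].name | select(startswith("k3d-amp-local-"))'
```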
Verify
kubectl cluster-info --context k3d-amp-local
# Should show cluster running at https://0.0.0.0:6550

Step 2: Apply CoreDNS Configuration

Enables *.openchoreo.localhost DNS resolution inside the cluster:

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/v1.0.0-rc.1/install/k3d/common/coredns-custom.yaml

Step 3: Install Cluster Prerequisites

These are infrastructure components that OpenChoreo depends on. You only install them once per cluster.

Gateway API CRDs — standard Kubernetes resources for managing network gateways and routing:

kubectl apply --server-side \
-f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml

cert-manager (v1.19.2) — automates TLS certificate issuance and renewal:

helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.19.2 \
--set crds.enabled=true \
--wait --timeout 180s

External Secrets Operator (v1.3.2) — syncs secrets from external stores (like OpenBao) into Kubernetes:

helm upgrade --install external-secrets oci://ghcr.io/external-secrets/charts/external-secrets \
--namespace external-secrets \
--create-namespace \
--version 1.3.2 \
--set installCRDs=true \
--wait --timeout 180s

kgateway (v2.2.1) — the network gateway for OpenChoreo planes:

helm upgrade --install kgateway-crds oci://cr.kgateway.dev/kgateway-dev/charts/kgateway-crds \
--create-namespace \
--namespace openchoreo-control-plane \
--version v2.2.1

helm upgrade --install kgateway oci://cr.kgateway.dev/kgateway-dev/charts/kgateway \
--namespace openchoreo-control-plane \
--create-namespace \
--version v2.2.1 \
--set controller.extraEnv.KGW_ENABLE_GATEWAY_API_EXPERIMENTAL_FEATURES=true

Step 4: Setup Secrets Store (OpenBao)

OpenBao provides secret management for the Workflow Plane and deployed agents:

helm upgrade --install openbao oci://ghcr.io/openbao/charts/openbao \
--namespace openbao \
--create-namespace \
--version 0.25.6 \
--values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/single-cluster/values-openbao.yaml \
--timeout 180s

kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=openbao -n openbao --timeout=120s

Configure the External Secrets ClusterSecretStore:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets-openbao
  namespace: openbao
---
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: default
spec:
  provider:
    vault:
      server: "http://openbao.openbao.svc:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "openchoreo-secret-writer-role"
          serviceAccountRef:
            name: "external-secrets-openbao"
            namespace: "openbao"
EOF

Step 5: Install OpenChoreo Control Plane

The Control Plane is configured with Backstage disabled (Agent Manager provides its own console) and OIDC pointing to the AMP Thunder Extension:

helm install openchoreo-control-plane \
oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-control-plane \
--create-namespace \
--timeout 600s \
--values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/single-cluster/values-cp.yaml

kubectl wait --for=condition=Available \
deployment --all -n openchoreo-control-plane --timeout=600s
What the values file configures
  • Backstage disabled (AMP provides its own console)
  • OIDC pointing to AMP Thunder Extension (thunder.amp.localhost:8080)
  • OpenChoreo API hostname: api.openchoreo.localhost
  • Gateway on HTTP port 8080 / HTTPS port 8443, TLS disabled

Step 6: Setup Data Plane

Each plane needs the Control Plane's CA certificate to establish trusted communication. Copy it to the Data Plane namespace:

kubectl create namespace openchoreo-data-plane --dry-run=client -o yaml | kubectl apply -f -

CA_CRT=$(kubectl get secret cluster-gateway-ca \
-n openchoreo-control-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl create configmap cluster-gateway-ca \
--from-literal=ca.crt="$CA_CRT" \
-n openchoreo-data-plane --dry-run=client -o yaml | kubectl apply -f -
helm install openchoreo-data-plane \
oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-data-plane \
--create-namespace \
--timeout 600s \
--values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/single-cluster/values-dp.yaml

kubectl wait --for=condition=Available \
deployment --all -n openchoreo-data-plane --timeout=600s

Register the Data Plane with the Control Plane. The command below automatically extracts the CA certificate and inserts it into the YAML — run the entire block as-is:

CA_CERT=$(kubectl get secret cluster-agent-tls \
-n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ClusterDataPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default"
  clusterAgent:
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    ingress:
      external:
        name: gateway-default
        namespace: openchoreo-data-plane
        http:
          host: "openchoreoapis.localhost"
          listenerName: http
          port: 19080
        https:
          host: "openchoreoapis.localhost"
          listenerName: https
          port: 19443
  secretStoreRef:
    name: amp-openbao-store
EOF
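The command substitution in the manifest splices the multi-line certificate into the heredoc; the sed expression prefixes each line with spaces so the certificate stays correctly nested under the value: | block scalar. A minimal, self-contained illustration of the trick:

```shell
# Each input line gains the prefix, keeping the YAML block scalar intact
# when the result is substituted into the heredoc.
CERT=$'line-one\nline-two'
echo "$CERT" | sed 's/^/        /'
```

The prefix width must match the indentation level of the value: | field in the surrounding manifest.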

Step 7: Setup Workflow Plane

Copy the CA certificate:

kubectl create namespace openchoreo-workflow-plane --dry-run=client -o yaml | kubectl apply -f -

CA_CRT=$(kubectl get secret cluster-gateway-ca \
-n openchoreo-control-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl create configmap cluster-gateway-ca \
--from-literal=ca.crt="$CA_CRT" \
-n openchoreo-workflow-plane --dry-run=client -o yaml | kubectl apply -f -

Install the container registry and the Workflow Plane:

helm upgrade --install registry docker-registry \
--repo https://twuni.github.io/docker-registry.helm \
--namespace openchoreo-workflow-plane \
--create-namespace \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/v1.0.0-rc.1/install/k3d/single-cluster/values-registry.yaml \
--timeout 120s

helm install openchoreo-workflow-plane \
oci://ghcr.io/openchoreo/helm-charts/openchoreo-workflow-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-workflow-plane \
--create-namespace \
--timeout 600s

Register the Workflow Plane:

BP_CA_CERT=$(kubectl get secret cluster-agent-tls \
-n openchoreo-workflow-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ClusterWorkflowPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default"
  secretStoreRef:
    name: default
  clusterAgent:
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF

kubectl wait --for=condition=Available \
deployment --all -n openchoreo-workflow-plane --timeout=600s

Step 8: Setup Observability Plane

Copy the CA certificate:

kubectl create namespace openchoreo-observability-plane --dry-run=client -o yaml | kubectl apply -f -

CA_CRT=$(kubectl get secret cluster-gateway-ca \
-n openchoreo-control-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl create configmap cluster-gateway-ca \
--from-literal=ca.crt="$CA_CRT" \
-n openchoreo-observability-plane --dry-run=client -o yaml | kubectl apply -f -

Create the ExternalSecrets for OpenSearch and Observer credentials:

kubectl apply -f - <<'EOF'
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: opensearch-admin-credentials
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: opensearch-admin-credentials
  data:
    - secretKey: username
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: password
      remoteRef:
        key: opensearch-password
        property: value
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: observer-secret
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: observer-secret
  data:
    - secretKey: OPENSEARCH_USERNAME
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: OPENSEARCH_PASSWORD
      remoteRef:
        key: opensearch-password
        property: value
    - secretKey: UID_RESOLVER_OAUTH_CLIENT_SECRET
      remoteRef:
        key: observer-oauth-client-secret
        property: value
EOF

Wait for the ExternalSecrets to sync:

kubectl wait -n openchoreo-observability-plane \
--for=condition=Ready externalsecret/opensearch-admin-credentials \
externalsecret/observer-secret --timeout=60s

Apply the custom OpenTelemetry Collector ConfigMap (required for trace ingestion):

kubectl apply -f https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/values/oc-collector-configmap.yaml \
-n openchoreo-observability-plane

Install the Observability Plane:

helm install openchoreo-observability-plane \
oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-observability-plane \
--create-namespace \
--timeout 25m \
--values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/single-cluster/values-op.yaml

kubectl wait --for=condition=Available \
deployment --all -n openchoreo-observability-plane --timeout=300s

Install observability modules (logs, metrics, tracing):

# Logs module
helm upgrade --install observability-logs-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-logs-opensearch \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.3.8 \
--set openSearchSetup.openSearchSecretName="opensearch-admin-credentials" \
--timeout 10m

# Enable Fluent Bit log collection
helm upgrade observability-logs-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-logs-opensearch \
--namespace openchoreo-observability-plane \
--version 0.3.8 \
--reuse-values \
--set fluent-bit.enabled=true \
--timeout 10m

# Metrics module
helm upgrade --install observability-metrics-prometheus \
oci://ghcr.io/openchoreo/helm-charts/observability-metrics-prometheus \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.2.4 \
--timeout 10m

# Tracing module (uses the custom OTel Collector ConfigMap)
helm upgrade --install observability-traces-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-tracing-opensearch \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.3.7 \
--set openSearch.enabled=false \
--set openSearchSetup.openSearchSecretName="opensearch-admin-credentials" \
--set opentelemetry-collector.configMap.existingName="amp-opentelemetry-collector-config" \
--timeout 10m

Register the Observability Plane and link it to other planes:

OP_CA_CERT=$(kubectl get secret cluster-agent-tls \
-n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default"
  clusterAgent:
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo.localhost:11080
EOF

# Link Data Plane to Observability Plane
kubectl patch clusterdataplane default -n default --type merge \
-p '{"spec":{"observabilityPlaneRef":{"kind":"ClusterObservabilityPlane","name":"default"}}}'

# Link Workflow Plane to Observability Plane
kubectl patch clusterworkflowplane default -n default --type merge \
-p '{"spec":{"observabilityPlaneRef":{"kind":"ClusterObservabilityPlane","name":"default"}}}'

Step 9: Verify OpenChoreo Installation

Before proceeding to Phase 2, confirm all planes are running:

echo "--- Control Plane ---"
kubectl get pods -n openchoreo-control-plane
echo "--- Data Plane ---"
kubectl get pods -n openchoreo-data-plane
echo "--- Workflow Plane ---"
kubectl get pods -n openchoreo-workflow-plane
echo "--- Observability Plane ---"
kubectl get pods -n openchoreo-observability-plane
echo "--- Plane Registrations ---"
kubectl get clusterdataplane,clusterworkflowplane,observabilityplane -n default

All pods should be in Running or Completed state.


Phase 2: Agent Manager Installation

With OpenChoreo running, you can now install the Agent Manager components — the API, console, identity provider, and extensions that provide the AI agent management capabilities.

The Agent Manager installs as a set of Helm charts on top of OpenChoreo. The components fall into two groups based on install order:

  1. Agent Manager Core: Gateway Operator, Thunder Extension, Agent Manager, and Platform Resources (agent component types, workflow templates, etc.). Each depends on the one before it.
  2. Extensions: the Secret Management, Observability, and Evaluation extensions, plus the AI Gateway Extension.

Configuration Variables

Set these once before running the install commands:

export VERSION="0.0.0-dev"
export HELM_CHART_REGISTRY="ghcr.io/wso2"
export AMP_NS="wso2-amp"
export BUILD_CI_NS="openchoreo-workflow-plane"
export OBSERVABILITY_NS="openchoreo-observability-plane"
export DEFAULT_NS="default"
export DATA_PLANE_NS="openchoreo-data-plane"
export SECRETS_NS="amp-secrets"
export THUNDER_NS="amp-thunder"

# Observability endpoint for the console.
# k3d default below works out of the box. For managed clusters,
# replace with your observability gateway LoadBalancer address:
# export INSTRUMENTATION_URL="http://<obs-gateway-lb-ip>:22893/otel"
export INSTRUMENTATION_URL="http://localhost:22893/otel"
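As a quick sanity check before running the install commands, you can confirm none of the variables above was skipped (illustrative bash; ${!v} is indirect expansion):

```shell
#!/usr/bin/env bash
# Warn about any unset or empty install variable before proceeding.
for v in VERSION HELM_CHART_REGISTRY AMP_NS BUILD_CI_NS OBSERVABILITY_NS \
         DEFAULT_NS DATA_PLANE_NS SECRETS_NS THUNDER_NS INSTRUMENTATION_URL; do
  [ -n "${!v:-}" ] || echo "Variable $v is not set" >&2
done
echo "Variable check complete"
```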

Core Components

Install these in order — each depends on the one before it.

Step 1: Gateway Operator

Manages API Gateway resources and enables secure, authenticated trace ingestion into the Observability Plane.

helm install gateway-operator \
oci://ghcr.io/wso2/api-platform/helm-charts/gateway-operator \
--version 0.4.0 \
--namespace ${DATA_PLANE_NS} \
--set logging.level=debug \
--set gateway.helm.chartVersion=0.9.0 \
--timeout 600s

Wait for the operator to be ready:

kubectl wait --for=condition=Available \
deployment -l app.kubernetes.io/name=gateway-operator \
-n ${DATA_PLANE_NS} --timeout=300s

Apply the Gateway Operator configuration (JWT/JWKS authentication and rate limiting):

Download the config, rewrite the JWKS URI for k3d (so the gateway can reach the AMP API via the Docker host network), then apply:

curl -fsSL https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/values/api-platform-operator-full-config.yaml \
  | sed 's|http://amp-api.wso2-amp.svc.cluster.local:9000/auth/external/jwks.json|http://host.k3d.internal:9000/auth/external/jwks.json|g' \
  | kubectl apply -f -

Grant RBAC for WSO2 API Platform CRDs to the Data Plane cluster-agent:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: wso2-api-platform-gateway-module
rules:
  - apiGroups: ["gateway.api-platform.wso2.com"]
    resources: ["restapis", "apigateways"]
    verbs: ["*"]
  - apiGroups: ["gateway.kgateway.dev"]
    resources: ["backends"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: wso2-api-platform-gateway-module
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: wso2-api-platform-gateway-module
subjects:
  - kind: ServiceAccount
    name: cluster-agent-dataplane
    namespace: ${DATA_PLANE_NS}
EOF

Deploy the observability gateway and trace API:

kubectl apply -f https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/values/obs-gateway.yaml

kubectl wait --for=condition=Programmed \
apigateway/obs-gateway -n ${DATA_PLANE_NS} --timeout=180s

kubectl apply -f https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.0.0-dev/deployments/values/otel-collector-rest-api.yaml

kubectl wait --for=condition=Programmed \
restapi/traces-api-secure -n ${DATA_PLANE_NS} --timeout=120s
Verify
kubectl get apigateway obs-gateway -n ${DATA_PLANE_NS}
# STATUS should show "Programmed"

Step 2: Thunder Extension (Identity Provider)

Provides authentication and user management for the Agent Manager platform — login, API keys, and OAuth token exchange.

helm install amp-thunder-extension \
oci://${HELM_CHART_REGISTRY}/wso2-amp-thunder-extension \
--version ${VERSION} \
--namespace ${THUNDER_NS} \
--create-namespace \
--timeout 1800s
Verify
kubectl get pods -n ${THUNDER_NS}
# All pods should be Running

Step 3: Agent Manager (API + Console + PostgreSQL)

The core platform: a Go API server, a React web console, and a PostgreSQL database.

helm install amp \
oci://${HELM_CHART_REGISTRY}/wso2-agent-manager \
--version ${VERSION} \
--namespace ${AMP_NS} \
--create-namespace \
--set console.config.instrumentationUrl="${INSTRUMENTATION_URL}" \
--timeout 1800s
info

INSTRUMENTATION_URL was set in the Configuration Variables above. For managed clusters, update it to your observability gateway LoadBalancer address before running this command.

Wait for all components:

# PostgreSQL
kubectl wait --for=jsonpath='{.status.readyReplicas}'=1 \
statefulset/amp-postgresql -n ${AMP_NS} --timeout=600s

# API server
kubectl wait --for=condition=Available \
deployment/amp-api -n ${AMP_NS} --timeout=600s

# Console
kubectl wait --for=condition=Available \
deployment/amp-console -n ${AMP_NS} --timeout=600s
Verify
kubectl get pods -n ${AMP_NS}
# Expected: amp-postgresql-0 (Running), amp-api-xxx (Running), amp-console-xxx (Running)

Step 4: Platform Resources

Creates the default Organization, Project, Environment, and DeploymentPipeline resources that the console needs on first login.

helm install amp-platform-resources \
oci://${HELM_CHART_REGISTRY}/wso2-amp-platform-resources-extension \
--version ${VERSION} \
--namespace ${DEFAULT_NS} \
--timeout 1800s

Extensions

These can be installed in any order after Core is ready.

Step 5: Secrets Extension (OpenBao)

Provides runtime secret injection for deployed agents. Uses OpenBao as the secrets backend.

helm install amp-secrets \
oci://${HELM_CHART_REGISTRY}/wso2-amp-secrets-extension \
--version ${VERSION} \
--namespace ${SECRETS_NS} \
--create-namespace \
--set openbao.server.dev.enabled=true \
--timeout 600s
kubectl wait --for=jsonpath='{.status.readyReplicas}'=1 \
statefulset/openbao -n ${SECRETS_NS} --timeout=300s
warning

Dev mode uses an in-memory backend — secrets are lost on restart. For production, disable dev mode and configure persistent storage.

Step 6: Observability Extension (Traces Observer)

Deploys the Traces Observer service that queries and serves trace data to the console.

Create the required ExternalSecrets first:

kubectl apply -f - <<'EOF'
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: opensearch-admin-credentials
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: opensearch-admin-credentials
  data:
    - secretKey: username
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: password
      remoteRef:
        key: opensearch-password
        property: value
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: observer-secret
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: observer-secret
  data:
    - secretKey: OPENSEARCH_USERNAME
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: OPENSEARCH_PASSWORD
      remoteRef:
        key: opensearch-password
        property: value
    - secretKey: UID_RESOLVER_OAUTH_CLIENT_SECRET
      remoteRef:
        key: observer-oauth-client-secret
        property: value
EOF

Install the extension:

helm install amp-observability-traces \
oci://${HELM_CHART_REGISTRY}/wso2-amp-observability-extension \
--version ${VERSION} \
--namespace ${OBSERVABILITY_NS} \
--timeout 1800s

kubectl wait --for=condition=Available \
deployment/amp-traces-observer -n ${OBSERVABILITY_NS} --timeout=600s

Step 7: Evaluation Extension

Installs workflow templates for running automated evaluations (accuracy, safety, reasoning, tool usage) against agent traces.

helm install amp-evaluation-extension \
oci://${HELM_CHART_REGISTRY}/wso2-amp-evaluation-extension \
--version ${VERSION} \
--namespace ${BUILD_CI_NS} \
--timeout 1800s
info

The default publisher.apiKey must match publisherApiKey.value in the Agent Manager chart. Both default to amp-internal-api-key.
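If you change the key from the default, both charts must be updated together. A hedged sketch of doing so in one pass (the value paths publisher.apiKey and publisherApiKey.value come from the note above; --reuse-values preserves your other settings):

```shell
#!/usr/bin/env bash
# Sketch: generate one shared key and override both releases with it so the
# evaluation publisher and the Agent Manager API stay in sync.
API_KEY=$(openssl rand -hex 16)
echo "Publisher API key length: ${#API_KEY}"

if command -v helm >/dev/null 2>&1; then
  helm upgrade amp oci://${HELM_CHART_REGISTRY}/wso2-agent-manager \
    --version ${VERSION} --namespace ${AMP_NS} --reuse-values \
    --set publisherApiKey.value="${API_KEY}"

  helm upgrade amp-evaluation-extension \
    oci://${HELM_CHART_REGISTRY}/wso2-amp-evaluation-extension \
    --version ${VERSION} --namespace ${BUILD_CI_NS} --reuse-values \
    --set publisher.apiKey="${API_KEY}"
fi
```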

Step 8: AI Gateway Extension

Registers the AI Gateway with the Agent Manager and deploys the gateway stack. Install this last — it requires the Agent Manager API to be healthy and Thunder to be ready for token exchange.

helm install amp-ai-gateway \
oci://${HELM_CHART_REGISTRY}/wso2-amp-ai-gateway-extension \
--version ${VERSION} \
--namespace ${DATA_PLANE_NS} \
--set apiGateway.controlPlane.host="amp-api-gateway-manager.${AMP_NS}.svc.cluster.local:9243" \
--set agentManager.apiUrl="http://amp-api.${AMP_NS}.svc.cluster.local:9000/api/v1" \
--set agentManager.idp.tokenUrl="http://amp-thunder-extension-service.${THUNDER_NS}.svc.cluster.local:8090/oauth2/token" \
--timeout 1800s

kubectl wait --for=condition=complete job/amp-gateway-bootstrap \
-n ${DATA_PLANE_NS} --timeout=300s
Verify
kubectl get jobs -n ${DATA_PLANE_NS} | grep amp-gateway-bootstrap
# STATUS should show "Complete"

Verify and Access the Platform

Run a full status check to confirm everything is running:

# All pods across key namespaces
kubectl get pods -n openchoreo-control-plane
kubectl get pods -n openchoreo-data-plane
kubectl get pods -n openchoreo-workflow-plane
kubectl get pods -n openchoreo-observability-plane
kubectl get pods -n wso2-amp
kubectl get pods -n amp-thunder
kubectl get pods -n amp-secrets

# Helm releases
helm list -A | grep -E 'openchoreo|amp|gateway'
| Service | URL |
| --- | --- |
| Agent Manager Console | http://localhost:3000 |
| Agent Manager API | http://localhost:9000 |
| Observability Gateway (HTTP) | http://localhost:22893/otel |
| Observability Gateway (HTTPS) | https://localhost:22894/otel |

Default credentials: admin / admin

Self-Signed Certificate for OTEL HTTPS

If your OTEL exporters encounter certificate errors on the HTTPS endpoint:

kubectl get secret obs-gateway-gateway-controller-tls \
-n openchoreo-data-plane \
-o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt

export OTEL_EXPORTER_OTLP_CERTIFICATE=$(pwd)/ca.crt
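Before wiring the CA into an exporter, you can sanity-check the extracted file with a standard openssl x509 inspection (skipped if the file is absent):

```shell
# Print subject, issuer, and expiry of the extracted gateway CA.
if [ -f ca.crt ]; then
  openssl x509 -in ca.crt -noout -subject -issuer -enddate
fi
```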

Cleanup

Delete the entire k3d cluster and all resources:

k3d cluster delete amp-local

Production Considerations

This installation is designed for development and exploration. For production:

  1. Identity provider — Replace Thunder dev mode with a proper IdP (Asgardeo, Auth0, Okta)
  2. TLS — Replace self-signed certificates with CA-signed certificates
  3. Secrets backend — Disable OpenBao dev mode; configure persistent storage and proper auth
  4. Observability storage — Configure persistent volumes for OpenSearch
  5. Resource sizing — Adjust requests/limits based on workload
  6. High availability — Deploy multiple replicas of critical components
  7. Security hardening — Apply network policies, RBAC, pod security standards

Troubleshooting

Pods stuck in Pending

Usually a resource constraint. Check node capacity:

kubectl describe pod <pod-name> -n <namespace>
kubectl top nodes

Increase Colima/Docker Desktop resources if needed.

Gateway not becoming Programmed
kubectl logs -n openchoreo-data-plane -l app.kubernetes.io/name=gateway-operator
kubectl describe apigateway obs-gateway -n openchoreo-data-plane
Plane registration issues
kubectl get clusterdataplane default -n default -o yaml
kubectl logs -n openchoreo-control-plane -l app.kubernetes.io/name=openchoreo-control-plane
OpenSearch connectivity issues
kubectl get pods -n openchoreo-observability-plane -l app=opensearch
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
curl -v http://opensearch.openchoreo-observability-plane.svc.cluster.local:9200
Port already in use

Find the process occupying the port and stop it:

lsof -i :<port>

Reference: Configuration Files

All Agent Manager-specific configuration files used in this guide:

| File | Purpose |
| --- | --- |
| k3d-config.yaml | k3d cluster with all required port mappings |
| values-cp.yaml | Control Plane — Backstage disabled, AMP Thunder OIDC |
| values-dp.yaml | Data Plane — gateway ports, Fluent Bit config |
| values-op.yaml | Observability Plane — standalone OpenSearch, AMP Thunder OIDC |
| values-openbao.yaml | OpenBao — dev mode, Kubernetes auth, pre-seeded secrets |
| oc-collector-configmap.yaml | Custom OTel Collector ConfigMap for trace ingestion |
| api-platform-operator-full-config.yaml | Gateway Operator — JWT auth, rate limiting |
| obs-gateway.yaml | Observability API Gateway resource |
| otel-collector-rest-api.yaml | OTel Collector REST API resource |
otel-collector-rest-api.yamlOTel Collector REST API resource