Integrate Quarkus Flow Apps with Data Index
Learn how to configure your Quarkus Flow applications to send workflow events to Data Index.
Overview
To integrate your Quarkus Flow app with Data Index, you need to:
- Add Kubernetes deployment dependencies
- Configure structured logging to stdout
- Configure Kubernetes deployment properties
- Build and deploy to your cluster
TL;DR for KIND (Local Development)
After completing the configuration below, deploy with:
```shell
# Build and deploy
mvn clean package -Pkind

# Load the image into KIND (required; this is not automatic)
kind load docker-image local/my-workflow-app:1.0.0 --name data-index-test

# Pods start automatically
```
Step 1: Add Dependencies
Add Kubernetes and container image dependencies to your pom.xml.
Option A: Maven Profile (Recommended)
```xml
<profiles>
  <profile>
    <id>kind</id>
    <properties>
      <quarkus.profile>kind</quarkus.profile>
    </properties>
    <dependencies>
      <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-kind</artifactId>
      </dependency>
      <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-container-image-jib</artifactId>
      </dependency>
      <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-smallrye-health</artifactId>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```
> **Benefits of a Maven profile:** the Kubernetes and container-image dependencies are only activated when you build with `-Pkind`, so your default build stays lean.
Option B: Direct Dependencies
Add them directly to the `<dependencies>` section if you always deploy to Kubernetes:

```xml
<dependencies>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-container-image-jib</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
  </dependency>
</dependencies>
```
Step 2: Configure Structured Logging
Create or update src/main/resources/application.properties:
```properties
# Application name (used in container image name)
quarkus.application.name=my-workflow-app

# Default Kubernetes deployment target
quarkus.kubernetes.deployment-target=kubernetes

# ==========================================
# Quarkus Flow Structured Logging (REQUIRED)
# ==========================================
# Enable structured logging
quarkus.flow.structured-logging.enabled=true
# Capture all workflow and task events
quarkus.flow.structured-logging.events=workflow.*
# Include workflow input/output in events
quarkus.flow.structured-logging.include-workflow-payloads=true
quarkus.flow.structured-logging.include-task-payloads=false
# Use epoch-seconds for PostgreSQL compatibility
quarkus.flow.structured-logging.timestamp-format=epoch-seconds
# Set log level
quarkus.flow.structured-logging.log-level=INFO

# ==========================================
# Console Handler for Structured Events
# ==========================================
# Create dedicated console handler for JSON events
quarkus.log.handler.console."FLOW_EVENTS_CONSOLE".enable=true
quarkus.log.handler.console."FLOW_EVENTS_CONSOLE".format=%s%n

# Route structured logging to console handler ONLY
# CRITICAL: Use 'io.quarkiverse.flow.structuredlogging' not 'io.quarkiverse.flow'
quarkus.log.category."io.quarkiverse.flow.structuredlogging".handlers=FLOW_EVENTS_CONSOLE
quarkus.log.category."io.quarkiverse.flow.structuredlogging".use-parent-handlers=false
quarkus.log.category."io.quarkiverse.flow.structuredlogging".level=INFO

# Health checks
quarkus.smallrye-health.ui.enable=true
```
> **Why the epoch-seconds timestamp format?** FluentBit's PostgreSQL output plugin expects Unix epoch timestamps.
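For reference, the two timestamp styles can be compared with plain `date` from coreutils; nothing here is Quarkus-specific:

```shell
# ISO-8601, the style many log handlers emit by default
date -u +%Y-%m-%dT%H:%M:%SZ

# Epoch-seconds: an integer Unix timestamp, matching the format configured above
date -u +%s
```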
Step 3: Configure Kubernetes Deployment
For KIND (Local Development)
Create src/main/resources/application-kind.properties:
```properties
# KIND deployment target
quarkus.kubernetes.deployment-target=kind

# CRITICAL: namespace must be 'workflows' for default FluentBit config
quarkus.kubernetes.namespace=workflows

# Container image - use single 'image' property
quarkus.container-image.build=true
quarkus.container-image.image=local/${quarkus.application.name}:1.0.0
quarkus.container-image.push=false

# Enable automatic deployment to cluster
quarkus.kubernetes.deploy=true

# Image pull policy
quarkus.kubernetes.image-pull-policy=IfNotPresent

# Service type - NodePort for easy local access
quarkus.kubernetes.service-type=NodePort
quarkus.kubernetes.node-port=30081

# Use 'prod' profile at runtime
quarkus.kubernetes.env.vars.QUARKUS_PROFILE=prod

# Resource limits
quarkus.kubernetes.resources.requests.memory=256Mi
quarkus.kubernetes.resources.limits.memory=512Mi
```
> **Manual image loading required.** The build produces the image locally but does not load it into the KIND cluster. Run `kind load docker-image local/my-workflow-app:1.0.0 --name data-index-test` after every build.
For Cloud (GKE/EKS/AKS)
Create src/main/resources/application-kubernetes.properties:
```properties
# Cloud deployment
quarkus.kubernetes.deployment-target=kubernetes
quarkus.container-image.registry=gcr.io
quarkus.container-image.group=your-gcp-project
quarkus.container-image.push=true

# Namespace
quarkus.kubernetes.namespace=workflows

# Container image
quarkus.container-image.name=${quarkus.application.name}
quarkus.container-image.tag=1.0.0
quarkus.container-image.build=true

# Service type
quarkus.kubernetes.service-type=ClusterIP

# Runtime profile
quarkus.kubernetes.env.vars.QUARKUS_PROFILE=prod

# Resource limits
quarkus.kubernetes.resources.requests.memory=256Mi
quarkus.kubernetes.resources.limits.memory=512Mi
```
Step 4: Configure Production Runtime
Create src/main/resources/application-prod.properties:
```properties
# Production runtime settings
# Add your production-specific settings here:
# - Database connections
# - External service URLs
# - Production-specific configuration

# Disable file handler in Kubernetes (use console only)
# Quarkus Flow auto-creates a file handler, but pods don't have /var/log/quarkus-flow/
quarkus.log.handler.file."FLOW_EVENTS".enable=false
```
> **Why disable the file handler?** Quarkus Flow automatically creates a file handler for structured logging. In Kubernetes, pods don't have write access to `/var/log/quarkus-flow/`, so only the console handler should be active.
Step 5: Build and Deploy
KIND (Local Development)
```shell
# 1. Ensure kubectl context is correct
kubectl config use-context kind-data-index-test

# 2. Build and deploy
mvn clean package -Pkind

# 3. Load image to KIND (REQUIRED!)
kind load docker-image local/my-workflow-app:1.0.0 --name data-index-test

# 4. Verify deployment
kubectl get pods -n workflows
```
> Pods may show `ImagePullBackOff` until the image has been loaded into KIND with `kind load docker-image`.
Step 6: Verify Integration
Check Application Logs
Verify structured logging is working:
```shell
kubectl logs -n workflows -l app.kubernetes.io/name=my-workflow-app | grep eventType
```
Expected output (JSON events):
```json
{"instanceId":"01KQ...", "eventType":"io.serverlessworkflow.workflow.started.v1", ...}
{"instanceId":"01KQ...", "eventType":"io.serverlessworkflow.task.started.v1", ...}
{"instanceId":"01KQ...", "eventType":"io.serverlessworkflow.task.completed.v1", ...}
```
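Against a live cluster you would pipe `kubectl logs` output through `grep`; the same sanity check can be sketched offline on sample event lines (the instance ID below is made up):

```shell
# Sample structured events, shaped like the log lines above
cat <<'EOF' > /tmp/flow-events.log
{"instanceId":"01KQX", "eventType":"io.serverlessworkflow.workflow.started.v1"}
{"instanceId":"01KQX", "eventType":"io.serverlessworkflow.task.started.v1"}
{"instanceId":"01KQX", "eventType":"io.serverlessworkflow.task.completed.v1"}
{"instanceId":"01KQX", "eventType":"io.serverlessworkflow.workflow.completed.v1"}
EOF

# A healthy run pairs each started event with a completed event
grep -c 'eventType' /tmp/flow-events.log      # -> 4
grep -c '\.completed\.' /tmp/flow-events.log  # -> 2
```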
Trigger a Workflow
Execute a workflow via your JAX-RS endpoint:
```shell
# Option 1: NodePort (KIND)
curl -X POST http://localhost:30081/your-endpoint \
  -H "Content-Type: application/json" \
  -d '{"input":"data"}'

# Option 2: Port-forward (all clusters)
kubectl port-forward -n workflows svc/my-workflow-app 8080:8080 &
curl -X POST http://localhost:8080/your-endpoint \
  -H "Content-Type: application/json" \
  -d '{"input":"data"}'
```
Query Data Index
Wait 5-10 seconds for event propagation, then query:
```shell
# Port-forward to Data Index
kubectl port-forward -n data-index svc/data-index-service 8080:8080 &

# Query workflow instances
curl -s http://localhost:8080/graphql \
  -H "Content-Type: application/json" \
  -d '{"query":"{ getWorkflowInstances { id name status taskExecutions { taskPosition status } } }"}' \
  | jq .
```
Expected response:
```json
{
  "data": {
    "getWorkflowInstances": [
      {
        "id": "01KQ...",
        "name": "your-workflow-name",
        "status": "COMPLETED",
        "taskExecutions": [...]
      }
    ]
  }
}
```
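Because events take a few seconds to propagate, a small retry loop makes verification less flaky. A minimal sketch (`poll_until` is a hypothetical helper, demonstrated here against a stub command rather than a live cluster):

```shell
# Retry a command until its output contains a marker, up to 5 attempts
poll_until() {
  marker="$1"; shift
  attempt=1
  while [ "$attempt" -le 5 ]; do
    out="$("$@")"
    if printf '%s' "$out" | grep -q "$marker"; then
      printf '%s\n' "$out"
      return 0
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  return 1
}

# Against Data Index you would poll the GraphQL endpoint, e.g.:
#   poll_until COMPLETED curl -s http://localhost:8080/graphql \
#     -H "Content-Type: application/json" \
#     -d '{"query":"{ getWorkflowInstances { id status } }"}'
# Stub demonstration:
poll_until COMPLETED echo '{"status":"COMPLETED"}'
```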
Property Files Summary
| File | Purpose | Key Settings |
|---|---|---|
| `application.properties` | Common config (all profiles) | Structured logging, console handler |
| `application-kind.properties` | KIND build-time config | Namespace, image name, deploy settings |
| `application-prod.properties` | Runtime production config | Production-specific settings |
Common Mistakes
| Mistake | Solution |
|---|---|
| Wrong log category | Use `io.quarkiverse.flow.structuredlogging`, not `io.quarkiverse.flow` |
| Missing timestamp format | Must set `quarkus.flow.structured-logging.timestamp-format=epoch-seconds` |
| Missing console handler | Need the `FLOW_EVENTS_CONSOLE` handler with `use-parent-handlers=false` |
| Building without profile | Use `mvn clean package -Pkind` |
| Wrong namespace | Must be `workflows` for the default FluentBit config |
| Missing `QUARKUS_PROFILE=prod` | Add it to the Kubernetes env vars in properties |
| Not waiting for propagation | Events take 5-10 seconds to reach Data Index |
| Forgetting `kind load` | Run `kind load docker-image` after every build |