```
Your App (OTEL SDK)
   ↓  OTLP (gRPC :4317 or HTTP :4318)
OTel Collector (batching, tenant enrichment)
   ↓
Kafka (Strimzi, KRaft mode)
   ↓
Bridge (Python, 4 concurrent threads)
 ├── OTLP ETL (flatten JSON, normalize fields)
 ├── Anomaly Detection (z-score on error rate distributions)
 ├── OpenSearch Indexer (bulk index, ILM lifecycle)
 └── Trace Correlation (5-layer request lifecycle engine)
   ↓
OpenSearch (full-text search, analytics)
   +
Ticketing Agent (RCA via LLM → Jira/ServiceNow/PagerDuty/Slack)
```
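The Bridge's OTLP ETL stage flattens nested OTLP/JSON payloads into flat documents before they are indexed. A minimal sketch of that flattening step, assuming OTLP/JSON log input; the output field names (`service`, `severity`, `message`, ...) are illustrative, not LogClaw's actual schema:

```python
def flatten_otlp_logs(payload: dict) -> list[dict]:
    """Flatten an OTLP/JSON log export request into flat dicts.

    Output field names are illustrative, not LogClaw's actual schema.
    """
    docs = []
    for resource_logs in payload.get("resourceLogs", []):
        # Resource attributes (e.g. service.name) apply to every record below.
        resource_attrs = {
            kv["key"]: list(kv["value"].values())[0]
            for kv in resource_logs.get("resource", {}).get("attributes", [])
        }
        for scope_logs in resource_logs.get("scopeLogs", []):
            for record in scope_logs.get("logRecords", []):
                docs.append({
                    "service": resource_attrs.get("service.name", "unknown"),
                    "severity": record.get("severityText", ""),
                    "message": record.get("body", {}).get("stringValue", ""),
                    "trace_id": record.get("traceId", ""),
                    "ts": record.get("timeUnixNano", "0"),
                })
    return docs
```

Each flat document can then go straight into an OpenSearch bulk request.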
```python
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

exporter = OTLPLogExporter(
    endpoint="https://otel.logclaw.ai/v1/logs",
    headers={"x-logclaw-api-key": "lc_proj_your_key"},
)
provider = LoggerProvider()
provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
```
```javascript
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');

const exporter = new OTLPLogExporter({
  url: 'https://otel.logclaw.ai/v1/logs',
  headers: { 'x-logclaw-api-key': 'lc_proj_your_key' },
});
```
```shell
java -javaagent:opentelemetry-javaagent.jar \
  -Dotel.exporter.otlp.endpoint=https://otel.logclaw.ai \
  -Dotel.exporter.otlp.headers=x-logclaw-api-key=lc_proj_your_key \
  -jar my-app.jar
```
```shell
git clone https://github.com/logclaw/logclaw.git
cd logclaw
docker compose up -d
```
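Once the stack is up, you can smoke-test ingestion by POSTing an OTLP/JSON log record straight to the collector's HTTP port (4318, per the architecture above). A sketch using only the Python standard library; it assumes the compose stack exposes the collector's OTLP/HTTP endpoint on localhost, and the payload follows the standard OTLP/JSON shape:

```python
import json
import time
import urllib.request


def build_otlp_log_payload(service: str, message: str, severity: str = "ERROR") -> dict:
    """Build a minimal OTLP/JSON log export request body."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeLogs": [{"logRecords": [{
                "timeUnixNano": str(time.time_ns()),
                "severityText": severity,
                "body": {"stringValue": message},
            }]}],
        }]
    }


def send_test_log(endpoint: str = "http://localhost:4318/v1/logs") -> int:
    """POST one log record to the collector; returns the HTTP status code."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_otlp_log_payload("smoke-test", "hello logclaw")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    print(send_test_log())
```

If the record lands, it should show up in OpenSearch under the `smoke-test` service within a few seconds.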
```shell
helm install logclaw charts/logclaw-tenant \
  --namespace logclaw \
  --create-namespace
```
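Chart defaults can be overridden with a values file in the usual Helm way. The key names below are purely hypothetical, shown only to illustrate the pattern; check `charts/logclaw-tenant/values.yaml` for the real ones:

```yaml
# values-prod.yaml -- key names are illustrative, not the chart's actual schema
opensearch:
  replicas: 3
kafka:
  retentionHours: 72
```

Then pass it with `helm install logclaw charts/logclaw-tenant -n logclaw --create-namespace -f values-prod.yaml`.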
Without LogClaw, the 3 AM incident looks like this:

- PagerDuty fires at 3 AM (threshold alert you set 6 months ago)
- You open Datadog/Splunk/Grafana
- You spend 45 minutes grepping through dashboards
- You find the error, but not the cause
- You spend another hour tracing across services
- You open a Jira ticket manually and paste log lines
- You fix the bug

The anomaly score combines six weighted signals: Pattern matches (30%) + Statistical z-score (25%) + Contextual signals (15%) + HTTP status (10%) + Log severity (10%) + Structural indicators (10%).

Severity weighting considers:

- Blast radius: How many services are simultaneously erroring (5+ services = 0.90 weight)
- Velocity: Error rate acceleration vs. historical average (5x spike = 0.80 weight)
- Recurrence: Novel error templates score higher than known patterns

Detection runs on two paths:

- Immediate path (<100ms): OOM, crashes, and resource exhaustion fire instantly, with no waiting for time windows. Your payment service crashes at 3 AM, and there's a ticket before the process restarts.
- Windowed path (10-30s): Statistical anomalies detected via z-score analysis on sliding windows.

The trace correlation engine works in five layers:

- Trace ID clustering — Groups related logs across services
- Temporal proximity — Associates logs within the same time window
- Service dependency mapping — Maps caller → callee relationships
- Error propagation tracking — Traces the cascade from root cause to symptoms
- Blast radius computation — Identifies all affected downstream services

When an incident is detected, the ticketing agent:

- Pulls relevant log samples + the correlated trace timeline from OpenSearch
- Sends them to your LLM (OpenAI, Claude, or Ollama for air-gapped deployments)
- Generates a root cause analysis with blast radius and suggested fix
- Creates a deduplicated ticket on Jira, ServiceNow, PagerDuty, OpsGenie, Slack, or Zammad

On the roadmap:

- Metrics support — ingest OTEL metrics alongside logs
- Trace visualization — distributed trace rendering in the dashboard
- Deep learning anomaly models — beyond z-score, using autoencoder models for subtle drift detection
- Runbook automation — not just tickets, but auto-remediation scripts

- GitHub: https://github.com/logclaw/logclaw
- Docs: https://docs.logclaw.ai
- Managed Cloud: https://console.logclaw.ai (1 GB/day free, no credit card)
- Book a Demo: https://calendly.com/robelkidin/logclaw
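As a closing note on the detection internals: the windowed z-score path described earlier can be sketched in a few lines. The window length, warm-up count, and 3-sigma threshold here are illustrative assumptions, not LogClaw's actual parameters:

```python
from collections import deque


class ErrorRateAnomalyDetector:
    """Flag error-rate spikes via z-score over a sliding window of samples.

    Window length, warm-up count, and the 3-sigma threshold are
    illustrative defaults, not LogClaw's actual parameters.
    """

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent error rates, one per interval
        self.threshold = threshold

    def observe(self, error_rate: float) -> bool:
        """Record one interval's error rate; return True if it is anomalous."""
        history = list(self.samples)
        self.samples.append(error_rate)
        if len(history) < 10:  # not enough history for a stable baseline
            return False
        mean = sum(history) / len(history)
        var = sum((x - mean) ** 2 for x in history) / len(history)
        std = var ** 0.5
        if std == 0:
            return error_rate > mean  # flat baseline: any increase stands out
        z = (error_rate - mean) / std  # how many std-devs above the baseline
        return z > self.threshold
```

Fed one error-rate sample per interval, it stays quiet on a normal baseline and flags the interval where the rate jumps well past three standard deviations.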