OpenTelemetry Integration
Catalyst-core provides a built-in SDK for OpenTelemetry integration, enabling comprehensive observability through distributed tracing and metrics collection. The SDK simplifies the setup process and provides a unified interface for monitoring your Catalyst applications.
Installation
Install the required OpenTelemetry packages for automatic instrumentation:
npm install @opentelemetry/auto-instrumentations-node @opentelemetry/sdk-node
Quick Setup
Import the OpenTelemetry SDK from catalyst-core:
import Otel from "catalyst-core/otel";
Initialize OpenTelemetry in your preServerInit() function in server/index.js:
const preServerInit = () => {
    Otel.init();
};
This provides a quick setup with default configuration and automatic instrumentation.
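Once Otel.init() has run, you can create custom spans alongside the automatic instrumentation using the standard @opentelemetry/api package. The sketch below assumes Otel.init() registers a global tracer provider; the tracer name and handler are illustrative:
import { trace } from "@opentelemetry/api";

// Illustrative tracer name; any string that identifies your module works.
const tracer = trace.getTracer("my-service");

async function handleCheckout(req, res) {
    // startActiveSpan makes this span the parent of anything created inside the callback.
    return tracer.startActiveSpan("checkout", async (span) => {
        try {
            span.setAttribute("cart.items", req.body.items.length);
            // ... your handler logic ...
            res.end("ok");
        } finally {
            span.end();
        }
    });
}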
Configuration
You can customize the OpenTelemetry setup by passing a configuration object to Otel.init():
const preServerInit = () => {
    Otel.init({
        serviceName: "my-service",
        serviceVersion: "1.0.0",
        environment: "production",
        traceUrl: "http://localhost:4318/v1/traces",
        metricUrl: "http://localhost:4318/v1/metrics",
        traceHeaders: {
            "Authorization": "Bearer your-token"
        },
        metricHeaders: {
            "Authorization": "Bearer your-token"
        },
        exportIntervalMillis: 5000,
        exportTimeoutMillis: 30000
    });
};
Configuration Options
| Option | Type | Description | Default | 
|---|---|---|---|
| serviceName | string | Name of your service | "catalyst-app" | 
| serviceVersion | string | Version of your service | "1.0.0" | 
| environment | string | Environment (dev, staging, prod) | "development" | 
| traceUrl | string | OTLP trace export endpoint | "http://localhost:4318/v1/traces" | 
| metricUrl | string | OTLP metric export endpoint | "http://localhost:4318/v1/metrics" | 
| traceHeaders | object | Headers for trace export | {} | 
| metricHeaders | object | Headers for metric export | {} | 
| exportIntervalMillis | number | Metric export interval in ms | 10000 | 
| exportTimeoutMillis | number | Export timeout in ms | 10000 | 
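In practice, these options are often derived from environment variables instead of being hard-coded. The following is a minimal sketch; the SERVICE_NAME, OTLP_ENDPOINT, and OTLP_AUTH_TOKEN variable names are illustrative, not something Catalyst reads automatically:
const preServerInit = () => {
    // Illustrative environment variables; adapt the names to your deployment.
    const collector = process.env.OTLP_ENDPOINT || "http://localhost:4318";
    const headers = process.env.OTLP_AUTH_TOKEN
        ? { Authorization: `Bearer ${process.env.OTLP_AUTH_TOKEN}` }
        : {};

    Otel.init({
        serviceName: process.env.SERVICE_NAME || "catalyst-app",
        environment: process.env.NODE_ENV || "development",
        traceUrl: `${collector}/v1/traces`,
        metricUrl: `${collector}/v1/metrics`,
        traceHeaders: headers,
        metricHeaders: headers,
    });
};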
Manual Setup
For more control over the OpenTelemetry configuration, you can set it up manually using the Node SDK. This example also requires the @opentelemetry/resources, @opentelemetry/semantic-conventions, and @opentelemetry/exporter-trace-otlp-http packages:
import { NodeSDK } from '@opentelemetry/sdk-node';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
const preServerInit = () => {
    const sdk = new NodeSDK({
        resource: new Resource({
            [ATTR_SERVICE_NAME]: 'catalyst-app',
        }),
        spanProcessor: new SimpleSpanProcessor(new OTLPTraceExporter()),
    });
    
    sdk.start();
};
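When managing the SDK yourself, it is also worth shutting it down when the process exits so that buffered spans are flushed. A minimal sketch extending the example above (signal handling simplified):
const preServerInit = () => {
    const sdk = new NodeSDK({
        resource: new Resource({
            [ATTR_SERVICE_NAME]: 'catalyst-app',
        }),
        spanProcessor: new SimpleSpanProcessor(new OTLPTraceExporter()),
    });

    sdk.start();

    // Flush buffered telemetry before the process exits.
    process.on('SIGTERM', () => {
        sdk.shutdown()
            .catch((err) => console.error('Error shutting down OpenTelemetry SDK', err))
            .finally(() => process.exit(0));
    });
};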
Testing Your Instrumentation
To test your traces and metrics locally, you need an OpenTelemetry Collector together with a compatible backend to receive the data.
Setting Up a Local OpenTelemetry Collector
The OpenTelemetry Collector acts as an intermediary between your application and observability backends: it receives, processes, and exports telemetry data. Below are two options for local testing:
Option 1: Docker Compose with Jaeger and Prometheus Backend
Create a docker-compose.yml file for a complete observability stack:
version: '3.8'
services:
  # Jaeger
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268:14268"
    environment:
      - COLLECTOR_OTLP_ENABLED=true
  # OpenTelemetry Collector
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC receiver
      - "4318:4318"   # OTLP HTTP receiver
    depends_on:
      - jaeger
  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
Create an otel-collector-config.yaml file:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
Create a prometheus.yaml file:
scrape_configs:
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:8889']
    scrape_interval: 10s
    metrics_path: /metrics
Start the stack: docker-compose up -d
Access the Jaeger UI at http://localhost:16686 and the Prometheus UI at http://localhost:9090.
You can also point a custom Grafana instance at Prometheus for advanced dashboards and alerting. For more details, see the Grafana documentation.
Option 2: Console Exporter (Development Only)
For quick debugging without external dependencies, use the console exporter:
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
const preServerInit = () => {
    const sdk = new NodeSDK({
        resource: new Resource({
            [ATTR_SERVICE_NAME]: 'catalyst-app-debug',
        }),
        traceExporter: new ConsoleSpanExporter(),
    });
    
    sdk.start();
};
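Metrics can be printed to the console in the same way, using the ConsoleMetricExporter and PeriodicExportingMetricReader from @opentelemetry/sdk-metrics (the export interval below is illustrative):
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';
import { ConsoleMetricExporter, PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';

const preServerInit = () => {
    const sdk = new NodeSDK({
        traceExporter: new ConsoleSpanExporter(),
        // Print collected metrics to stdout every 10 seconds.
        metricReader: new PeriodicExportingMetricReader({
            exporter: new ConsoleMetricExporter(),
            exportIntervalMillis: 10000,
        }),
    });

    sdk.start();
};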
Compatible Backends
Popular observability backends that work with OpenTelemetry:
- Jaeger: Open-source distributed tracing platform
- Zipkin: Distributed tracing system
- Grafana Cloud: Managed observability platform
- New Relic: Full-stack observability
- Datadog: Cloud monitoring platform
- Honeycomb: Observability for complex systems
- Lightstep: Distributed tracing and performance monitoring
Verifying Your Setup
- Start your observability backend
- Run your Catalyst application with OpenTelemetry enabled
- Make some requests to your application
- Check your backend's UI for traces and metrics
For Jaeger, visit http://localhost:16686 and select your service from the dropdown to view traces.
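To confirm that metrics are flowing as well, you can record a simple counter from your application code and look it up in Prometheus. Below is a minimal sketch using the @opentelemetry/api metrics API; the meter and counter names are illustrative, and the Prometheus exporter may append a _total suffix to counters:
import { metrics } from "@opentelemetry/api";

// Illustrative meter and counter names.
const meter = metrics.getMeter("my-service");
const requestCounter = meter.createCounter("app_requests", {
    description: "Number of handled requests",
});

// Call this from a request handler, then search for the metric in Prometheus.
export function recordRequest(route) {
    requestCounter.add(1, { route });
}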