OpenTelemetry Integration for ChatGPT Apps
Observability is the cornerstone of reliable production ChatGPT applications. As your MCP servers handle thousands of requests across distributed systems, understanding performance bottlenecks, debugging errors, and monitoring service health become critical. OpenTelemetry provides a vendor-neutral, standardized approach to capturing telemetry data—traces, metrics, and logs—from your ChatGPT applications.
OpenTelemetry (OTel) is the industry-standard observability framework, formed from the merger of the OpenTracing and OpenCensus projects. For ChatGPT apps built on the Model Context Protocol (MCP), OpenTelemetry enables:
- Distributed Tracing: Track request flows across MCP servers, external APIs, and databases
- Metrics Collection: Monitor request rates, error rates, and latency (RED method)
- Structured Logging: Correlate logs with traces for faster debugging
Unlike proprietary monitoring solutions, OpenTelemetry is open-source, vendor-agnostic, and future-proof. Whether you export telemetry to Jaeger, Prometheus, AWS CloudWatch, or Datadog, the instrumentation code remains identical. This guide provides production-ready TypeScript examples for instrumenting ChatGPT apps with OpenTelemetry, covering trace propagation, metrics collection, log correlation, and exporter configuration.
By the end of this article, you'll have a fully observable MCP server with distributed tracing, RED metrics, and structured logs—ready for production deployment. Let's dive into the three pillars of observability for ChatGPT applications.
Distributed Tracing for MCP Servers
Distributed tracing visualizes the complete journey of a ChatGPT request as it traverses multiple services. When a user invokes a tool in your MCP server—such as searchRestaurants or createBooking—the request may trigger database queries, external API calls, and internal service communication. Without tracing, debugging latency issues or cascading failures becomes a guessing game.
OpenTelemetry traces consist of spans—units of work with a start time, end time, and metadata (attributes, events, status). A trace is a tree of spans representing the request lifecycle. The root span represents the incoming MCP tool invocation, while child spans represent downstream operations (database queries, HTTP requests).
Trace Context Propagation
The W3C Trace Context standard ensures trace continuity across service boundaries. When your MCP server calls an external API, it propagates the current trace context via HTTP headers (traceparent, tracestate). This enables end-to-end visibility, from the ChatGPT conversation to your backend services.
Here's a production-ready implementation of OpenTelemetry tracing for an MCP server:
// src/observability/tracing.ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { W3CTraceContextPropagator } from '@opentelemetry/core';
import { trace, context, propagation, Span, SpanStatusCode } from '@opentelemetry/api';
// Initialize tracer provider with service metadata
export function initTracing(serviceName: string, serviceVersion: string) {
const provider = new NodeTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: serviceName,
[SemanticResourceAttributes.SERVICE_VERSION]: serviceVersion,
[SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: process.env.NODE_ENV || 'development',
}),
});
// Configure OTLP exporter (send traces to collector)
const exporter = new OTLPTraceExporter({
url: `${process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318'}/v1/traces`,
headers: {
'x-api-key': process.env.OTEL_API_KEY || '',
},
});
// Use batch processor for performance (batches spans before export)
provider.addSpanProcessor(new BatchSpanProcessor(exporter, {
maxQueueSize: 2048,
maxExportBatchSize: 512,
scheduledDelayMillis: 5000,
}));
// Register provider globally
provider.register({
propagator: new W3CTraceContextPropagator(),
});
// Auto-instrument HTTP and Express
registerInstrumentations({
instrumentations: [
new HttpInstrumentation({
requestHook: (span, request) => {
// The hook receives incoming (IncomingMessage) and outgoing (ClientRequest) requests;
// only incoming requests expose a plain `headers` object
const headers = 'headers' in request ? (request as any).headers : undefined;
span.setAttribute('http.request.id', (headers?.['x-request-id'] as string) || 'unknown');
},
}),
new ExpressInstrumentation(),
],
});
console.log(`✅ OpenTelemetry tracing initialized (service: ${serviceName})`);
return trace.getTracer(serviceName, serviceVersion);
}
// Create custom span for MCP tool invocation
export function traceMCPTool<T>(
toolName: string,
operation: (span: Span) => Promise<T>
): Promise<T> {
const tracer = trace.getTracer('mcp-server');
return tracer.startActiveSpan(`mcp.tool.${toolName}`, async (span) => {
try {
// Add semantic attributes
span.setAttribute('mcp.tool.name', toolName);
span.setAttribute('mcp.protocol.version', '2024-11-05');
// Execute operation with active span context
const result = await operation(span);
// Mark span as successful
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error) {
// Record exception and mark span as error
span.recordException(error as Error);
span.setStatus({
code: SpanStatusCode.ERROR,
message: (error as Error).message,
});
throw error;
} finally {
span.end();
}
});
}
// Extract trace context from incoming request
export function extractTraceContext(headers: Record<string, string>) {
return propagation.extract(context.active(), headers, {
get: (carrier, key) => carrier[key],
keys: (carrier) => Object.keys(carrier),
});
}
// Inject trace context into outgoing request
export function injectTraceContext(headers: Record<string, string>) {
propagation.inject(context.active(), headers, {
set: (carrier, key, value) => {
carrier[key] = value;
},
});
return headers;
}
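Initialize tracing once at process startup, before the server begins handling traffic, so the auto-instrumentation hooks are registered first. A minimal bootstrap sketch, assuming a hypothetical src/index.ts entry point, startServer helper, and version string:
// src/index.ts (illustrative entry point)
import { initTracing } from './observability/tracing';

async function main() {
  // Register the tracer provider and auto-instrumentation before loading the server
  initTracing(process.env.SERVICE_NAME || 'mcp-server', '1.0.0');

  // Load the Express/MCP server only after instrumentation is in place
  const { startServer } = await import('./server');
  await startServer();
}

main().catch((err) => {
  console.error('Failed to start MCP server', err);
  process.exit(1);
});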
MCP Tool Tracing Example
When a ChatGPT user invokes the searchRestaurants tool, this creates a distributed trace:
// src/tools/search-restaurants.ts
import { traceMCPTool } from '../observability/tracing';
import { trace, context, Span, SpanStatusCode } from '@opentelemetry/api';
// `db` is assumed to be your database client (e.g., a Prisma instance) imported elsewhere
export async function searchRestaurants(params: {
location: string;
cuisine?: string;
priceRange?: string;
}) {
return traceMCPTool('searchRestaurants', async (span: Span) => {
// Add input parameters as span attributes
span.setAttribute('restaurant.location', params.location);
if (params.cuisine) span.setAttribute('restaurant.cuisine', params.cuisine);
if (params.priceRange) span.setAttribute('restaurant.price_range', params.priceRange);
// Child span for database query
const restaurants = await traceOperation(span, 'db.query', async () => {
return await db.restaurants.findMany({
where: {
location: params.location,
cuisine: params.cuisine,
priceRange: params.priceRange,
},
});
});
span.setAttribute('restaurant.results_count', restaurants.length);
span.addEvent('restaurants_retrieved', { count: restaurants.length });
return restaurants;
});
}
// Helper: Create child span for nested operation
async function traceOperation<T>(
parentSpan: Span,
operationName: string,
operation: () => Promise<T>
): Promise<T> {
const tracer = trace.getTracer('mcp-server');
// SpanOptions has no `parent` field; pass a context with the parent span set instead
return tracer.startActiveSpan(operationName, {}, trace.setSpan(context.active(), parentSpan), async (childSpan) => {
try {
const result = await operation();
childSpan.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error) {
childSpan.recordException(error as Error);
childSpan.setStatus({ code: SpanStatusCode.ERROR });
throw error;
} finally {
childSpan.end();
}
});
}
This creates a trace hierarchy: mcp.tool.searchRestaurants → db.query. When visualized in Jaeger or Zipkin, you see the exact time spent in each operation, making performance optimization straightforward.
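If you want to confirm that hierarchy without a running collector, you can swap in an in-memory exporter during tests. A minimal sketch, assuming the database call is stubbed in the test environment (InMemorySpanExporter and SimpleSpanProcessor ship with @opentelemetry/sdk-trace-base):
// test/tracing.spec.ts (illustrative)
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { InMemorySpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { searchRestaurants } from '../src/tools/search-restaurants';

const exporter = new InMemorySpanExporter();
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();

async function verifyHierarchy() {
  await searchRestaurants({ location: 'Austin' });
  // Child spans end before their parents, so expect ['db.query', 'mcp.tool.searchRestaurants']
  console.log(exporter.getFinishedSpans().map((s) => s.name));
}

verifyHierarchy();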
For more on distributed tracing patterns, see our Jaeger integration guide.
Metrics Collection with RED Method
Metrics provide quantitative data about your MCP server's health and performance. Unlike traces (which sample individual requests), metrics aggregate data over time windows. The RED method (Rate, Errors, Duration) is the gold standard for service-level metrics:
- Rate: Requests per second (throughput)
- Errors: Error rate percentage
- Duration: Request latency distribution (p50, p95, p99)
OpenTelemetry metrics support three instrument types:
- Counter: Monotonically increasing value (total requests, total errors)
- Histogram: Distribution of values (request duration, payload size)
- Gauge: Point-in-time value (active connections, memory usage)
Production Metrics Implementation
Here's a complete metrics collector for MCP servers:
// src/observability/metrics.ts
import { MeterProvider, PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { metrics, Counter, Histogram, ObservableGauge } from '@opentelemetry/api';
// Initialize metrics provider
export function initMetrics(serviceName: string, serviceVersion: string) {
const exporter = new OTLPMetricExporter({
url: `${process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318'}/v1/metrics`,
headers: {
'x-api-key': process.env.OTEL_API_KEY || '',
},
});
const meterProvider = new MeterProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: serviceName,
[SemanticResourceAttributes.SERVICE_VERSION]: serviceVersion,
}),
readers: [
new PeriodicExportingMetricReader({
exporter,
exportIntervalMillis: 60000, // Export every 60 seconds
}),
],
});
metrics.setGlobalMeterProvider(meterProvider);
console.log(`✅ OpenTelemetry metrics initialized (service: ${serviceName})`);
return metrics.getMeter(serviceName, serviceVersion);
}
// MCP server metrics collector
export class MCPMetrics {
private requestCounter: Counter;
private errorCounter: Counter;
private durationHistogram: Histogram;
private activeConnectionsGauge: ObservableGauge;
private activeConnections = 0;
constructor(private meter: any) {
// Counter: Total MCP tool invocations
this.requestCounter = meter.createCounter('mcp.tool.invocations', {
description: 'Total MCP tool invocations',
unit: '1',
});
// Counter: Total errors
this.errorCounter = meter.createCounter('mcp.tool.errors', {
description: 'Total MCP tool errors',
unit: '1',
});
// Histogram: Request duration distribution
this.durationHistogram = meter.createHistogram('mcp.tool.duration', {
description: 'MCP tool execution duration',
unit: 'ms',
advice: {
explicitBucketBoundaries: [10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000],
},
});
// Gauge: Active connections (observable callback)
this.activeConnectionsGauge = meter.createObservableGauge('mcp.connections.active', {
description: 'Number of active MCP connections',
unit: '1',
});
this.activeConnectionsGauge.addCallback((observableResult) => {
observableResult.observe(this.activeConnections);
});
}
// Record MCP tool invocation
recordRequest(toolName: string, attributes: Record<string, string | number> = {}) {
this.requestCounter.add(1, {
'mcp.tool.name': toolName,
...attributes,
});
}
// Record error
recordError(toolName: string, errorType: string, attributes: Record<string, string | number> = {}) {
this.errorCounter.add(1, {
'mcp.tool.name': toolName,
'error.type': errorType,
...attributes,
});
}
// Record request duration
recordDuration(toolName: string, durationMs: number, attributes: Record<string, string | number> = {}) {
this.durationHistogram.record(durationMs, {
'mcp.tool.name': toolName,
...attributes,
});
}
// Update active connections count
incrementConnections() {
this.activeConnections++;
}
decrementConnections() {
this.activeConnections = Math.max(0, this.activeConnections - 1);
}
}
// Singleton metrics instance
let mcpMetrics: MCPMetrics;
export function getMCPMetrics(meter?: any): MCPMetrics {
if (!mcpMetrics) {
const mtr = meter || metrics.getMeter('mcp-server');
mcpMetrics = new MCPMetrics(mtr);
}
return mcpMetrics;
}
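If a code path does not go through the Express middleware shown in the next section, you can record the same RED signals directly from the collector. A small sketch; the withToolMetrics wrapper is hypothetical:
// Hypothetical helper: wrap any tool handler with manual RED instrumentation
import { getMCPMetrics } from './observability/metrics';

export async function withToolMetrics<T>(toolName: string, run: () => Promise<T>): Promise<T> {
  const mcpMetrics = getMCPMetrics();
  const start = Date.now();
  mcpMetrics.recordRequest(toolName);
  try {
    return await run();
  } catch (error) {
    mcpMetrics.recordError(toolName, (error as Error).name || 'UnknownError');
    throw error;
  } finally {
    mcpMetrics.recordDuration(toolName, Date.now() - start);
  }
}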
RED Metrics Middleware
Integrate metrics collection into your MCP server request handling:
// src/middleware/metrics-middleware.ts
import { Request, Response, NextFunction } from 'express';
import { getMCPMetrics } from '../observability/metrics';
export function metricsMiddleware(req: Request, res: Response, next: NextFunction) {
const metrics = getMCPMetrics();
const startTime = Date.now();
// Extract tool name from request body (MCP tool call)
const toolName = req.body?.params?.name || 'unknown';
// Record request
metrics.recordRequest(toolName, {
'http.method': req.method,
'http.route': req.route?.path || req.path,
});
// Intercept response to record duration and errors
const originalSend = res.send;
res.send = function (body: any) {
const duration = Date.now() - startTime;
// Record duration
metrics.recordDuration(toolName, duration, {
'http.status_code': res.statusCode,
});
// Record error if status code >= 400
if (res.statusCode >= 400) {
metrics.recordError(toolName, `http_${res.statusCode}`, {
'http.status_code': res.statusCode,
});
}
return originalSend.call(this, body);
};
next();
}
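Wiring it up is a one-liner once the body parser is in place, since the middleware reads the MCP tool name from req.body. A minimal sketch; the port and route registration are placeholders:
// src/server.ts (illustrative wiring)
import express from 'express';
import { metricsMiddleware } from './middleware/metrics-middleware';

const app = express();
app.use(express.json());      // parse JSON-RPC bodies so req.body.params.name is available
app.use(metricsMiddleware);   // record rate, errors, and duration for every tool invocation

// ...register MCP endpoints/tool handlers here

app.listen(3000, () => console.log('MCP server listening on :3000'));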
This middleware automatically tracks the RED metrics for every MCP tool invocation. In Prometheus or Grafana, you can query:
- Rate: rate(mcp_tool_invocations_total[5m])
- Errors: rate(mcp_tool_errors_total[5m]) / rate(mcp_tool_invocations_total[5m])
- Duration (p95): histogram_quantile(0.95, rate(mcp_tool_duration_bucket[5m]))
For a deeper dive into Prometheus metrics collection, see our dedicated guide.
Logs Integration with Correlation IDs
While traces and metrics provide quantitative insights, logs offer qualitative context—error messages, debug information, and business logic details. The challenge with distributed systems is correlating logs from multiple services to a single request flow.
OpenTelemetry solves this with trace correlation IDs. Every log entry includes the trace ID and span ID, allowing you to jump from a trace visualization to the exact logs for that request.
Structured Logging with Winston
Here's a production-ready logging setup with OpenTelemetry correlation:
// src/observability/logging.ts
import winston from 'winston';
import { trace, context } from '@opentelemetry/api';
// Extract trace context for log correlation
function getTraceContext() {
const span = trace.getSpan(context.active());
if (!span) {
return { traceId: '', spanId: '' };
}
const spanContext = span.spanContext();
return {
traceId: spanContext.traceId,
spanId: spanContext.spanId,
traceFlags: spanContext.traceFlags,
};
}
// Custom log formatter with trace correlation
const traceFormatter = winston.format((info) => {
const traceContext = getTraceContext();
return {
...info,
trace_id: traceContext.traceId,
span_id: traceContext.spanId,
trace_flags: traceContext.traceFlags,
};
});
// Initialize logger
export const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss.SSS' }),
traceFormatter(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: {
service: process.env.SERVICE_NAME || 'mcp-server',
environment: process.env.NODE_ENV || 'development',
},
transports: [
// Console output (development)
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.printf(({ timestamp, level, message, trace_id, span_id, ...meta }) => {
const traceInfo = trace_id ? `[trace=${trace_id.slice(0, 8)} span=${span_id.slice(0, 8)}]` : '';
const metaStr = Object.keys(meta).length ? JSON.stringify(meta) : '';
return `${timestamp} ${level}: ${message} ${traceInfo} ${metaStr}`;
})
),
}),
// File output (production)
new winston.transports.File({
filename: 'logs/error.log',
level: 'error',
maxsize: 10485760, // 10MB
maxFiles: 5,
}),
new winston.transports.File({
filename: 'logs/combined.log',
maxsize: 10485760,
maxFiles: 10,
}),
],
});
// Helper: Log with span context
export function logWithSpan(level: string, message: string, meta: Record<string, any> = {}) {
const span = trace.getSpan(context.active());
if (span) {
span.addEvent(message, meta);
}
logger.log(level, message, meta);
}
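Inside a traced tool handler the formatter picks up the active span automatically, so log lines and span events share the same trace_id. A short sketch; the createBooking handler is illustrative:
// src/tools/create-booking.ts (illustrative handler)
import { traceMCPTool } from '../observability/tracing';
import { logger, logWithSpan } from '../observability/logging';

export async function createBooking(params: { restaurantId: string; partySize: number }) {
  return traceMCPTool('createBooking', async () => {
    // Plain structured log: trace_id/span_id are injected by the Winston formatter
    logger.info('Creating booking', { restaurantId: params.restaurantId });

    // logWithSpan additionally records the message as an event on the active span
    logWithSpan('info', 'Booking confirmed', { partySize: params.partySize });
    return { confirmed: true };
  });
}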
Log Exporter Configuration
Export logs to centralized systems (CloudWatch, Loki, Elasticsearch):
// src/observability/log-exporter.ts
import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { logs } from '@opentelemetry/api-logs';
export function initLogExporter(serviceName: string, serviceVersion: string) {
const exporter = new OTLPLogExporter({
url: `${process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318'}/v1/logs`,
headers: {
'x-api-key': process.env.OTEL_API_KEY || '',
},
});
const loggerProvider = new LoggerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: serviceName,
[SemanticResourceAttributes.SERVICE_VERSION]: serviceVersion,
}),
});
loggerProvider.addLogRecordProcessor(
new BatchLogRecordProcessor(exporter, {
maxQueueSize: 2048,
maxExportBatchSize: 512,
scheduledDelayMillis: 5000,
})
);
logs.setGlobalLoggerProvider(loggerProvider);
console.log(`✅ OpenTelemetry log exporter initialized (service: ${serviceName})`);
}
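Once the provider is registered you can also emit records through the OpenTelemetry Logs API directly, which is useful for bridging code paths that do not go through Winston. A minimal sketch:
// Emit a log record via the global logger provider registered by initLogExporter()
import { logs, SeverityNumber } from '@opentelemetry/api-logs';

const otelLogger = logs.getLogger('mcp-server');
otelLogger.emit({
  severityNumber: SeverityNumber.INFO,
  severityText: 'INFO',
  body: 'Tool invocation completed',
  attributes: { 'mcp.tool.name': 'searchRestaurants' },
});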
With trace correlation, you can query logs by trace ID: trace_id="5c8e4b2a9d3f1e6c7b9a0d4e2f815306" to see all logs for a specific ChatGPT conversation. For MCP server logging best practices, see our comprehensive guide.
Exporters Configuration
OpenTelemetry exporters send telemetry data to observability backends. The OTLP (OpenTelemetry Protocol) is the standard wire format, but you can also export directly to Jaeger, Prometheus, CloudWatch, or Datadog.
OTLP Exporter (Recommended)
The OTLP exporter sends traces, metrics, and logs to an OpenTelemetry Collector, which then forwards to backends:
// src/observability/otlp-exporter.ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { MeterProvider } from '@opentelemetry/sdk-metrics';
import { LoggerProvider } from '@opentelemetry/sdk-logs';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { metrics } from '@opentelemetry/api';
import { logs } from '@opentelemetry/api-logs';
// OTLP configuration interface
interface OTLPConfig {
serviceName: string;
serviceVersion: string;
endpoint: string;
apiKey?: string;
environment?: string;
}
// Initialize all OTLP exporters
export function initOTLPExporters(config: OTLPConfig) {
const resource = new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: config.serviceName,
[SemanticResourceAttributes.SERVICE_VERSION]: config.serviceVersion,
[SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: config.environment || 'production',
});
const headers = config.apiKey ? { 'x-api-key': config.apiKey } : {};
// Trace exporter
const traceExporter = new OTLPTraceExporter({
url: `${config.endpoint}/v1/traces`,
headers,
});
const tracerProvider = new NodeTracerProvider({ resource });
tracerProvider.addSpanProcessor(new BatchSpanProcessor(traceExporter));
tracerProvider.register();
// Metric exporter
const metricExporter = new OTLPMetricExporter({
url: `${config.endpoint}/v1/metrics`,
headers,
});
const meterProvider = new MeterProvider({
resource,
readers: [new PeriodicExportingMetricReader({ exporter: metricExporter })],
});
metrics.setGlobalMeterProvider(meterProvider);
// Log exporter
const logExporter = new OTLPLogExporter({
url: `${config.endpoint}/v1/logs`,
headers,
});
const loggerProvider = new LoggerProvider({ resource });
loggerProvider.addLogRecordProcessor(new BatchLogRecordProcessor(logExporter));
logs.setGlobalLoggerProvider(loggerProvider);
console.log(`✅ OTLP exporters initialized (endpoint: ${config.endpoint})`);
}
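A typical bootstrap pulls the endpoint and credentials from environment variables. An illustrative startup call, reusing the variable names from the examples above:
// Illustrative startup call
import { initOTLPExporters } from './observability/otlp-exporter';

initOTLPExporters({
  serviceName: process.env.SERVICE_NAME || 'mcp-server',
  serviceVersion: process.env.SERVICE_VERSION || '1.0.0',
  endpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318',
  apiKey: process.env.OTEL_API_KEY,
  environment: process.env.NODE_ENV,
});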
Jaeger Exporter (Distributed Tracing)
For direct Jaeger integration (traces only):
// src/observability/jaeger-exporter.ts
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
export function initJaegerExporter(serviceName: string) {
const endpoint = process.env.JAEGER_ENDPOINT || 'http://localhost:14268/api/traces';
const exporter = new JaegerExporter({
endpoint,
tags: [],
maxPacketSize: 65000,
});
const provider = new NodeTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: serviceName,
}),
});
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
console.log(`✅ Jaeger exporter initialized (endpoint: ${endpoint})`);
}
Prometheus Exporter (Metrics)
For direct Prometheus scraping (metrics only):
// src/observability/prometheus-exporter.ts
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { MeterProvider } from '@opentelemetry/sdk-metrics';
export function initPrometheusExporter(port: number = 9464) {
const exporter = new PrometheusExporter(
{
port,
endpoint: '/metrics',
},
() => {
console.log(`✅ Prometheus exporter listening on http://localhost:${port}/metrics`);
}
);
const meterProvider = new MeterProvider({
readers: [exporter],
});
return meterProvider;
}
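Register the returned provider globally so that meters obtained via metrics.getMeter() (including the MCPMetrics collector above) are exported through the scrape endpoint. A brief sketch:
// Illustrative wiring at startup
import { metrics } from '@opentelemetry/api';
import { initPrometheusExporter } from './observability/prometheus-exporter';

const meterProvider = initPrometheusExporter(9464);
metrics.setGlobalMeterProvider(meterProvider);
// Prometheus can now scrape http://localhost:9464/metrics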
AWS CloudWatch Exporter
For AWS environments, configure X-Ray-compatible trace IDs and propagation; spans are then typically exported via OTLP to the AWS Distro for OpenTelemetry (ADOT) Collector, which forwards them to X-Ray and CloudWatch:
// src/observability/cloudwatch-exporter.ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { AWSXRayPropagator } from '@opentelemetry/propagator-aws-xray';
import { AWSXRayIdGenerator } from '@opentelemetry/id-generator-aws-xray';
import { AwsInstrumentation } from '@opentelemetry/instrumentation-aws-sdk';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
export function initCloudWatchExporter(serviceName: string) {
const tracerProvider = new NodeTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: serviceName,
}),
idGenerator: new AWSXRayIdGenerator(),
});
tracerProvider.register({
propagator: new AWSXRayPropagator(),
});
registerInstrumentations({
instrumentations: [new AwsInstrumentation()],
});
console.log(`✅ AWS X-Ray exporter initialized (service: ${serviceName})`);
}
For most deployments, use the OTLP exporter with an OpenTelemetry Collector. This provides maximum flexibility—change backends without modifying application code.
Context Propagation Across Services
When your MCP server calls external APIs or microservices, trace context must propagate across service boundaries. The W3C Trace Context specification defines two HTTP headers:
- traceparent: 00-{trace-id}-{parent-span-id}-{trace-flags}
- tracestate: vendor-specific trace data
OpenTelemetry automatically injects these headers into outgoing HTTP requests (via HTTP instrumentation). For manual propagation:
// src/utils/context-propagation.ts
import { context, propagation, trace, SpanStatusCode } from '@opentelemetry/api';
import axios, { AxiosRequestConfig } from 'axios';
// Inject trace context into HTTP client
export async function httpClientWithTracing<T>(
url: string,
config: AxiosRequestConfig = {}
): Promise<T> {
const headers = config.headers || {};
// Inject current trace context into headers
propagation.inject(context.active(), headers);
const response = await axios({
...config,
url,
headers,
});
return response.data;
}
// Create child span for external API call
export async function callExternalAPI<T>(
apiName: string,
operation: () => Promise<T>
): Promise<T> {
const tracer = trace.getTracer('mcp-server');
return tracer.startActiveSpan(`http.client.${apiName}`, async (span) => {
try {
span.setAttribute('http.target', apiName);
const result = await operation();
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error) {
span.recordException(error as Error);
span.setStatus({ code: SpanStatusCode.ERROR });
throw error;
} finally {
span.end();
}
});
}
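Combining the two helpers gives you a client span plus propagated traceparent headers on the outgoing request. A short sketch; the reviews service URL and fetchReviews function are placeholders:
// Illustrative downstream call from a tool handler
import { callExternalAPI, httpClientWithTracing } from '../utils/context-propagation';

export async function fetchReviews(restaurantId: string) {
  return callExternalAPI('reviews-api', () =>
    httpClientWithTracing<Array<{ rating: number; text: string }>>(
      'https://reviews.internal/api/v1/reviews',
      { method: 'GET', params: { restaurantId } }
    )
  );
}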
Baggage Propagation
Baggage allows you to propagate key-value pairs across service boundaries (e.g., user ID, tenant ID):
// src/utils/baggage.ts
import { propagation, context, Context } from '@opentelemetry/api';
// Set a baggage entry and return a new context carrying the updated baggage
// (baggage is immutable and bound to a context; there is no global setter)
export function setBaggage(key: string, value: string, ctx: Context = context.active()): Context {
const currentBaggage = propagation.getBaggage(ctx) ?? propagation.createBaggage();
return propagation.setBaggage(ctx, currentBaggage.setEntry(key, { value }));
}
// Get a baggage value from the active context
export function getBaggage(key: string): string | undefined {
return propagation.getBaggage(context.active())?.getEntry(key)?.value;
}
// Usage: set user and tenant IDs in baggage, then run downstream work inside that context
const ctx = setBaggage('tenant.id', req.tenant.id, setBaggage('user.id', req.user.id));
context.with(ctx, () => next());
For advanced patterns, see our observability best practices guide.
Conclusion
Implementing OpenTelemetry in your ChatGPT applications transforms black-box systems into fully observable architectures. With distributed tracing, you visualize request flows across MCP servers and external APIs. With RED metrics, you monitor service health in real-time. With structured logging and trace correlation, you debug production issues in minutes instead of hours.
This guide provided production-ready TypeScript implementations for:
- Distributed Tracing: Trace context propagation, MCP tool spans, child span creation
- Metrics Collection: RED method (rate, errors, duration), histogram/counter/gauge instruments
- Logs Integration: Structured logging with trace correlation, log exporters
- Exporters Configuration: OTLP, Jaeger, Prometheus, CloudWatch
- Context Propagation: W3C Trace Context, baggage, cross-service correlation
For ChatGPT applications deployed at scale, observability is not optional—it's the foundation of reliability. OpenTelemetry provides vendor-neutral instrumentation that adapts to your infrastructure, whether you use Jaeger, Datadog, or AWS X-Ray.
Build Observable ChatGPT Apps Without Code
Want to deploy production-ready ChatGPT apps with built-in OpenTelemetry observability—without writing instrumentation code?
MakeAIHQ is the only no-code platform that auto-generates ChatGPT apps with enterprise-grade observability:
- Automatic Distributed Tracing: Every generated MCP server includes OpenTelemetry tracing
- Pre-configured Metrics: RED metrics (rate, errors, duration) enabled by default
- Structured Logging: Trace-correlated logs out of the box
- Exporter Templates: One-click integration with Jaeger, Prometheus, CloudWatch
- Instant Deployment: From zero to observable ChatGPT app in 48 hours
Start your free trial and deploy observable ChatGPT apps today—no OpenTelemetry configuration required.
Additional Resources
- Complete Guide to Building ChatGPT Applications
- Distributed Tracing with Jaeger for ChatGPT Apps
- Prometheus Metrics Collection for ChatGPT Apps
- MCP Server Logging Best Practices
- Observability Best Practices for ChatGPT Apps
- OpenTelemetry Official Documentation
- W3C Trace Context Specification
- OTLP Protocol Documentation