Sentry Error Tracking for ChatGPT Apps

Production ChatGPT applications require robust error tracking to maintain reliability for millions of users. Sentry provides real-time error monitoring, performance tracking, and release management that helps teams identify and resolve issues before they impact users.

Introduction: Why Sentry for ChatGPT Apps

When your ChatGPT app serves thousands of users daily, every error matters. A failed API call, unhandled promise rejection, or widget rendering issue can cascade into poor user experience and lost revenue. Traditional logging approaches fail to capture the full context needed to debug complex issues in production environments.

Sentry transforms error tracking from reactive firefighting to proactive quality management. It automatically captures exceptions, enriches them with contextual breadcrumbs, groups similar errors, and alerts teams when regressions occur. For ChatGPT applications built with MCP servers and widget runtime, Sentry provides critical visibility into both server-side tool execution and client-side widget rendering.

Unlike basic logging solutions, Sentry offers source map integration for debugging minified production code, release tracking for identifying which deployments introduced bugs, and performance monitoring for detecting slow API calls. The platform integrates seamlessly with modern TypeScript workflows, CI/CD pipelines, and team collaboration tools like Slack and Jira.

ChatGPT app developers face unique challenges: MCP protocol complexity, OpenAI widget runtime constraints, and multi-platform deployment (iOS, Android, web). Sentry's breadcrumb system captures the sequence of events leading to errors, making it possible to debug issues that only occur in specific ChatGPT conversation contexts.

This guide demonstrates production-ready Sentry integration for ChatGPT applications, including TypeScript SDK configuration, automated source map uploads, custom breadcrumb tracking, and release management strategies.

SDK Integration: Implementing Sentry in ChatGPT Apps

Sentry supports both browser-side widget tracking and Node.js server-side MCP error monitoring. The integration requires careful initialization to avoid performance overhead while capturing comprehensive error context.

Browser SDK Configuration for ChatGPT Widgets

ChatGPT widgets run in a sandboxed iframe environment with strict CSP policies. Sentry initialization must respect these constraints while maximizing error capture:

// src/instrumentation/sentry-browser.ts
import * as Sentry from '@sentry/browser';
import { BrowserTracing } from '@sentry/tracing';
import { CaptureConsole } from '@sentry/integrations';

/**
 * Initialize Sentry for ChatGPT widget runtime
 *
 * CRITICAL REQUIREMENTS:
 * - Respect OpenAI widget CSP policies
 * - Minimize bundle size impact (<15kb gzipped)
 * - Capture widget-specific context
 * - Filter sensitive user data
 *
 * @param config - Sentry configuration options
 */
export function initializeSentryBrowser(config: {
  dsn: string;
  environment: 'production' | 'staging' | 'development';
  release?: string;
  tracesSampleRate?: number;
}): void {
  Sentry.init({
    dsn: config.dsn,
    environment: config.environment,
    release: config.release,

    // Sample 10% of transactions for performance monitoring
    tracesSampleRate: config.tracesSampleRate ?? 0.1,

    // Enable automatic session tracking
    autoSessionTracking: true,

    // Tracing and console-capture integrations. Uncaught errors and unhandled
    // promise rejections are already captured by the default GlobalHandlers integration.
    integrations: [
      new BrowserTracing({
        // Track widget navigation events
        tracingOrigins: ['chatgpt.com', 'platform.openai.com'],

        // Track fetch/XHR calls to backend MCP server
        traceFetch: true,
        traceXHR: true,
      }),

      // Capture console errors as breadcrumbs
      new CaptureConsole({
        levels: ['error', 'warn'],
      }),
    ],

    // Filter sensitive data before sending to Sentry
    beforeSend(event, hint) {
      // Remove PII from error messages
      if (event.exception?.values) {
        event.exception.values = event.exception.values.map(exception => ({
          ...exception,
          value: exception.value?.replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]'),
        }));
      }

      // Remove Authorization headers
      if (event.request?.headers) {
        delete event.request.headers['Authorization'];
        delete event.request.headers['Cookie'];
      }

      // Attach ChatGPT conversation context
      const conversationId = sessionStorage.getItem('chatgpt_conversation_id');
      if (conversationId) {
        event.tags = {
          ...event.tags,
          conversation_id: conversationId,
        };
      }

      return event;
    },

    // Ignore known third-party errors
    ignoreErrors: [
      // Browser extensions
      'top.GLOBALS',
      'Can\'t find variable: ZiteReader',

      // OpenAI widget runtime expected errors
      'ResizeObserver loop limit exceeded',
      'Non-Error promise rejection captured',
    ],

    // Set user context (without PII)
    initialScope: {
      tags: {
        'widget.platform': navigator.platform,
        'widget.viewport': `${window.innerWidth}x${window.innerHeight}`,
      },
    },
  });

  // Explicit listener that attaches widget-specific context to global errors.
  // Note: the default GlobalHandlers integration already reports these, so
  // events may be duplicated unless that integration is disabled.
  window.addEventListener('error', (event) => {
    Sentry.captureException(event.error, {
      contexts: {
        widget: {
          source: event.filename,
          line: event.lineno,
          column: event.colno,
        },
      },
    });
  });

  // Capture unhandled promise rejections
  window.addEventListener('unhandledrejection', (event) => {
    Sentry.captureException(event.reason, {
      tags: {
        error_type: 'unhandled_promise_rejection',
      },
    });
  });
}
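
A minimal sketch of wiring this up at the widget entry point. The entry file path and environment variable names are assumptions; adapt them to however your build injects configuration (for example Vite env variables):

// src/widget-entry.ts (hypothetical entry point for the widget bundle)
import { initializeSentryBrowser } from './instrumentation/sentry-browser';

// Assumption: these values are injected at build time via Vite env variables
initializeSentryBrowser({
  dsn: import.meta.env.VITE_SENTRY_DSN,
  environment: import.meta.env.PROD ? 'production' : 'development',
  release: import.meta.env.VITE_SENTRY_RELEASE,
  tracesSampleRate: 0.1,
});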

Node.js SDK Configuration for MCP Servers

MCP servers require server-side Sentry integration to track tool execution errors, API failures, and protocol violations:

// src/instrumentation/sentry-node.ts
import * as Sentry from '@sentry/node';
import { ProfilingIntegration } from '@sentry/profiling-node';

/**
 * Initialize Sentry for MCP server runtime
 *
 * CRITICAL REQUIREMENTS:
 * - Capture MCP protocol errors
 * - Track tool execution performance
 * - Link errors to specific ChatGPT conversations
 * - Profile slow database queries
 *
 * @param config - Sentry configuration options
 */
export function initializeSentryNode(config: {
  dsn: string;
  environment: 'production' | 'staging' | 'development';
  release?: string;
  profilesSampleRate?: number;
}): void {
  Sentry.init({
    dsn: config.dsn,
    environment: config.environment,
    release: config.release,

    // Profile 5% of sampled transactions (profilesSampleRate is relative to tracesSampleRate)
    profilesSampleRate: config.profilesSampleRate ?? 0.05,
    tracesSampleRate: 0.1,

    integrations: [
      // Enable transaction profiling (SDK v7-style class integration)
      new ProfilingIntegration(),

      // Trace outgoing HTTP requests
      new Sentry.Integrations.Http({ tracing: true }),

      // Database query tracing
      new Sentry.Integrations.Postgres(),
      new Sentry.Integrations.Mongo(),
    ],

    // Filter sensitive data
    beforeSend(event) {
      // Remove environment variables containing secrets
      if (event.contexts?.runtime?.environment) {
        const env = event.contexts.runtime.environment as Record<string, string>;
        Object.keys(env).forEach(key => {
          if (key.includes('SECRET') || key.includes('KEY') || key.includes('TOKEN')) {
            delete env[key];
          }
        });
      }

      // Redact API keys from error messages
      if (event.message) {
        event.message = event.message.replace(/sk-[a-zA-Z0-9]{48}/g, '[OPENAI_API_KEY]');
      }

      return event;
    },
  });

  // Capture uncaught exceptions
  process.on('uncaughtException', (error) => {
    Sentry.captureException(error, {
      level: 'fatal',
      tags: {
        error_type: 'uncaught_exception',
      },
    });

    // Flush events and exit
    Sentry.close(2000).then(() => {
      process.exit(1);
    });
  });

  // Capture unhandled promise rejections
  process.on('unhandledRejection', (reason) => {
    Sentry.captureException(reason, {
      level: 'error',
      tags: {
        error_type: 'unhandled_rejection',
      },
    });
  });
}
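
A minimal sketch of calling this at MCP server startup, before any tools are registered, so initialization and startup errors are captured. The entry file path and environment variable names are assumptions:

// src/server.ts (hypothetical MCP server entry point)
import { initializeSentryNode } from './instrumentation/sentry-node';

// Assumption: DSN, environment, and release are provided via environment variables
initializeSentryNode({
  dsn: process.env.SENTRY_DSN ?? '',
  environment:
    (process.env.NODE_ENV ?? 'development') as 'production' | 'staging' | 'development',
  release: process.env.SENTRY_RELEASE,
  profilesSampleRate: 0.05,
});

// ...register MCP tools and start listening after this point...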

React Error Boundary Integration

For ChatGPT widgets built with React, error boundaries provide graceful fallback UI while capturing errors:

// src/components/SentryErrorBoundary.tsx
import React, { Component, ErrorInfo, ReactNode } from 'react';
import * as Sentry from '@sentry/browser';

interface Props {
  children: ReactNode;
  fallback?: ReactNode;
  showDialog?: boolean;
}

interface State {
  hasError: boolean;
  error: Error | null;
  errorInfo: ErrorInfo | null;
}

/**
 * React Error Boundary with Sentry integration
 *
 * Captures React component errors and displays fallback UI
 * Automatically reports errors to Sentry with component stack trace
 *
 * @example
 * <SentryErrorBoundary fallback={<ErrorFallback />}>
 *   <ChatGPTWidget />
 * </SentryErrorBoundary>
 */
export class SentryErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = {
      hasError: false,
      error: null,
      errorInfo: null,
    };
  }

  static getDerivedStateFromError(error: Error): Partial<State> {
    return { hasError: true, error };
  }

  componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
    // Log error details to console
    console.error('Widget error boundary caught error:', error, errorInfo);

    // Capture error with component stack trace
    Sentry.withScope((scope) => {
      // Add React component stack
      scope.setContext('react', {
        componentStack: errorInfo.componentStack,
      });

      // Add widget-specific context
      scope.setTag('error_boundary', 'widget');
      scope.setLevel('error');

      // Capture the exception
      const eventId = Sentry.captureException(error);

      // Show user dialog if enabled
      if (this.props.showDialog) {
        Sentry.showReportDialog({
          eventId,
          title: 'Widget encountered an error',
          subtitle: 'Our team has been notified. Please describe what you were doing when this occurred.',
        });
      }
    });

    this.setState({ errorInfo });
  }

  render(): ReactNode {
    if (this.state.hasError) {
      // Render custom fallback UI
      if (this.props.fallback) {
        return this.props.fallback;
      }

      // Default fallback UI
      return (
        <div style={{
          padding: '24px',
          backgroundColor: '#FEE',
          border: '1px solid #C33',
          borderRadius: '8px',
        }}>
          <h2>Widget Error</h2>
          <p>The widget encountered an error. Our team has been notified.</p>
          <details style={{ marginTop: '16px' }}>
            <summary>Error details</summary>
            <pre style={{
              fontSize: '12px',
              overflow: 'auto',
              maxHeight: '200px',
            }}>
              {this.state.error?.toString()}
              {this.state.errorInfo?.componentStack}
            </pre>
          </details>
          <button
            onClick={() => window.location.reload()}
            style={{
              marginTop: '16px',
              padding: '8px 16px',
              backgroundColor: '#C33',
              color: '#FFF',
              border: 'none',
              borderRadius: '4px',
              cursor: 'pointer',
            }}
          >
            Reload Widget
          </button>
        </div>
      );
    }

    return this.props.children;
  }
}
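
A usage sketch that wraps the widget root in the boundary. It assumes React 18's createRoot, a ChatGPTWidget component, and a widget-root element in the sandboxed page:

// src/widget-root.tsx (hypothetical mount point)
import React from 'react';
import { createRoot } from 'react-dom/client';
import { SentryErrorBoundary } from './components/SentryErrorBoundary';
import { ChatGPTWidget } from './components/ChatGPTWidget'; // assumed widget component

const container = document.getElementById('widget-root');
if (container) {
  createRoot(container).render(
    <SentryErrorBoundary showDialog={false}>
      <ChatGPTWidget />
    </SentryErrorBoundary>
  );
}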

Source Maps: Debugging Minified Production Code

Production builds minify and obfuscate code for performance, making stack traces unreadable. Source maps enable Sentry to display original source code locations in error reports.

Build Configuration for Source Map Generation

Configure your build tool to generate source maps optimized for Sentry:

// vite.config.ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import { sentryVitePlugin } from '@sentry/vite-plugin';

export default defineConfig({
  plugins: [
    react(),

    // Sentry plugin for automatic source map upload
    sentryVitePlugin({
      org: 'makeaihq',
      project: 'chatgpt-widget',

      // Auth token for uploading (use environment variable)
      authToken: process.env.SENTRY_AUTH_TOKEN,

      // Only upload in production builds
      disable: process.env.NODE_ENV !== 'production',

      // Source map upload options
      sourcemaps: {
        assets: './dist/**',
        ignore: ['**/node_modules/**'],
      },

      // Release configuration
      release: {
        name: process.env.SENTRY_RELEASE || `widget@${process.env.npm_package_version}`,
        setCommits: {
          auto: true, // Automatically determine commits
        },
      },
    }),
  ],

  build: {
    // Generate source maps in production
    sourcemap: true,

    // Minification settings
    minify: 'terser',
    terserOptions: {
      compress: {
        drop_console: false, // Keep console for Sentry breadcrumbs
      },
    },

    // Rollup options
    rollupOptions: {
      output: {
        // Separate source maps into .map files
        sourcemapExcludeSources: false,
      },
    },
  },
});

Automated Source Map Upload Script

For CI/CD pipelines without Vite plugin support, use the Sentry CLI:

#!/bin/bash
# scripts/upload-sourcemaps.sh
#
# Upload source maps to Sentry during CI/CD deployment
#
# REQUIREMENTS:
# - SENTRY_AUTH_TOKEN environment variable
# - SENTRY_ORG environment variable
# - SENTRY_PROJECT environment variable
# - SENTRY_RELEASE environment variable
#
# USAGE:
# ./scripts/upload-sourcemaps.sh

set -e

# Configuration
DIST_DIR="./dist"
SENTRY_URL="https://sentry.io"

# Validate environment variables
if [ -z "$SENTRY_AUTH_TOKEN" ]; then
  echo "ERROR: SENTRY_AUTH_TOKEN is not set"
  exit 1
fi

if [ -z "$SENTRY_RELEASE" ]; then
  echo "ERROR: SENTRY_RELEASE is not set"
  exit 1
fi

echo "========================================="
echo "Uploading source maps to Sentry"
echo "========================================="
echo "Organization: ${SENTRY_ORG}"
echo "Project: ${SENTRY_PROJECT}"
echo "Release: ${SENTRY_RELEASE}"
echo "========================================="

# Install Sentry CLI
if ! command -v sentry-cli &> /dev/null; then
  echo "Installing sentry-cli..."
  npm install -g @sentry/cli
fi

# Create release
echo "Creating release..."
sentry-cli releases new "$SENTRY_RELEASE" \
  --org "$SENTRY_ORG" \
  --project "$SENTRY_PROJECT"

# Upload source maps
echo "Uploading source maps..."
sentry-cli releases files "$SENTRY_RELEASE" upload-sourcemaps "$DIST_DIR" \
  --org "$SENTRY_ORG" \
  --project "$SENTRY_PROJECT" \
  --url-prefix "~/assets" \
  --validate \
  --strip-common-prefix

# Associate commits with release
echo "Associating commits..."
sentry-cli releases set-commits "$SENTRY_RELEASE" \
  --auto \
  --org "$SENTRY_ORG" \
  --project "$SENTRY_PROJECT"

# Finalize release
echo "Finalizing release..."
sentry-cli releases finalize "$SENTRY_RELEASE" \
  --org "$SENTRY_ORG" \
  --project "$SENTRY_PROJECT"

# Delete source maps from distribution (security best practice)
echo "Removing source maps from distribution..."
find "$DIST_DIR" -name "*.map" -type f -delete

echo "========================================="
echo "Source map upload complete!"
echo "View release: ${SENTRY_URL}/${SENTRY_ORG}/${SENTRY_PROJECT}/releases/${SENTRY_RELEASE}/"
echo "========================================="

Source Map Verification

Verify source maps are correctly uploaded and associated with errors:

// scripts/verify-sourcemaps.ts
import * as Sentry from '@sentry/node';

/**
 * Verify Sentry source maps are working correctly
 *
 * Triggers a test error and checks if stack trace shows original source
 *
 * USAGE:
 * SENTRY_DSN=xxx SENTRY_RELEASE=xxx ts-node scripts/verify-sourcemaps.ts
 */
async function verifySourceMaps(): Promise<void> {
  const dsn = process.env.SENTRY_DSN;
  const release = process.env.SENTRY_RELEASE;

  if (!dsn || !release) {
    throw new Error('SENTRY_DSN and SENTRY_RELEASE must be set');
  }

  // Initialize Sentry
  Sentry.init({
    dsn,
    release,
    environment: 'verification',
    tracesSampleRate: 1.0,
  });

  try {
    // Trigger test error
    throwTestError();
  } catch (error) {
    // Capture and wait for upload
    const eventId = Sentry.captureException(error);

    console.log('Test error captured');
    console.log('Event ID:', eventId);
    console.log('Release:', release);
    console.log('');
    console.log('Waiting for upload...');

    await Sentry.close(2000);

    console.log('');
    console.log('✅ Verification complete!');
    console.log('');
    console.log('Next steps:');
    console.log('1. Open Sentry dashboard');
    console.log(`2. Find event ID: ${eventId}`);
    console.log('3. Verify stack trace shows original source code (not minified)');
  }
}

function throwTestError(): never {
  const nested = () => {
    throw new Error('Source map verification error (safe to ignore)');
  };

  nested();
}

verifySourceMaps().catch(console.error);

Learn more about error handling patterns for ChatGPT apps to complement your Sentry integration.

Breadcrumbs: Tracking User Actions Leading to Errors

Breadcrumbs provide a timeline of events that led to an error, making it easier to reproduce and fix issues. Sentry automatically captures navigation, console logs, and network requests, but custom breadcrumbs add ChatGPT-specific context.

Automatic Breadcrumb Categories

Sentry's default integrations capture these breadcrumb types automatically (a sketch after the list shows how to tune which categories are recorded):

  • Navigation: Page loads, route changes
  • Console: console.log(), console.warn(), console.error()
  • Network: Fetch/XHR requests and responses
  • DOM: Click events, form submissions
  • User: Custom user interactions
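
These come from the SDK's default Breadcrumbs integration. A minimal sketch, assuming the v7 @sentry/browser SDK used above, of overriding it to control which automatic categories are recorded; in this codebase the integration instance would go into the integrations array of initializeSentryBrowser:

// Passing a Breadcrumbs instance in `integrations` replaces the default one
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'YOUR_DSN', // assumption: replace with your project DSN
  integrations: [
    new Sentry.Integrations.Breadcrumbs({
      console: true,  // console.* calls
      dom: true,      // clicks and keypresses
      fetch: true,    // fetch requests
      xhr: true,      // XHR requests
      history: false, // drop navigation noise inside the sandboxed iframe
    }),
  ],
});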

Custom Breadcrumb Implementation

Add ChatGPT-specific breadcrumbs to track MCP tool calls, widget state changes, and conversation context:

// src/instrumentation/breadcrumbs.ts
import * as Sentry from '@sentry/browser';

/**
 * Custom breadcrumb tracking for ChatGPT widgets
 *
 * Captures widget lifecycle events, MCP tool calls, and user interactions
 * that are critical for debugging ChatGPT-specific issues
 */
export class WidgetBreadcrumbs {
  /**
   * Track MCP tool invocation
   */
  static trackToolCall(toolName: string, params: Record<string, unknown>): void {
    Sentry.addBreadcrumb({
      category: 'mcp.tool',
      message: `Tool called: ${toolName}`,
      level: 'info',
      data: {
        tool: toolName,
        params: this.sanitizeParams(params),
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Track widget state transitions
   */
  static trackStateChange(
    previousState: string,
    newState: string,
    reason?: string
  ): void {
    Sentry.addBreadcrumb({
      category: 'widget.state',
      message: `State: ${previousState} → ${newState}`,
      level: 'info',
      data: {
        previous: previousState,
        next: newState,
        reason,
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Track OpenAI API calls
   */
  static trackOpenAIAPICall(
    endpoint: string,
    method: string,
    statusCode?: number
  ): void {
    Sentry.addBreadcrumb({
      category: 'api.openai',
      message: `${method} ${endpoint}`,
      level: statusCode && statusCode >= 400 ? 'warning' : 'info',
      data: {
        endpoint,
        method,
        statusCode,
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Track widget user interactions
   */
  static trackUserAction(
    action: string,
    target?: string,
    metadata?: Record<string, unknown>
  ): void {
    Sentry.addBreadcrumb({
      category: 'widget.interaction',
      message: `User ${action}`,
      level: 'info',
      data: {
        action,
        target,
        ...metadata,
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Track conversation context changes
   */
  static trackConversationContext(
    conversationId: string,
    messageCount: number,
    turnCount: number
  ): void {
    Sentry.addBreadcrumb({
      category: 'chatgpt.conversation',
      message: `Conversation context updated`,
      level: 'info',
      data: {
        conversationId,
        messageCount,
        turnCount,
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Track widget performance metrics
   */
  static trackPerformanceMetric(
    metric: string,
    value: number,
    unit: string
  ): void {
    Sentry.addBreadcrumb({
      category: 'widget.performance',
      message: `${metric}: ${value}${unit}`,
      level: value > 1000 ? 'warning' : 'info', // Flag slow operations
      data: {
        metric,
        value,
        unit,
        timestamp: Date.now(),
      },
    });
  }

  /**
   * Sanitize sensitive parameters before logging
   */
  private static sanitizeParams(params: Record<string, unknown>): Record<string, unknown> {
    const sanitized = { ...params };

    // Remove common sensitive fields
    const sensitiveKeys = ['password', 'token', 'apiKey', 'secret', 'authorization'];

    Object.keys(sanitized).forEach(key => {
      if (sensitiveKeys.some(sensitive => key.toLowerCase().includes(sensitive))) {
        sanitized[key] = '[REDACTED]';
      }
    });

    return sanitized;
  }
}

// Usage example in ChatGPT widget
export function exampleUsage(): void {
  // Track tool call
  WidgetBreadcrumbs.trackToolCall('search_products', {
    query: 'fitness equipment',
    limit: 10,
  });

  // Track state change
  WidgetBreadcrumbs.trackStateChange('loading', 'ready', 'Data fetched successfully');

  // Track user interaction
  WidgetBreadcrumbs.trackUserAction('clicked', 'buy-button', {
    productId: '12345',
    price: 49.99,
  });

  // Track performance
  const startTime = performance.now();
  // ... expensive operation ...
  const duration = performance.now() - startTime;
  WidgetBreadcrumbs.trackPerformanceMetric('widget_render', duration, 'ms');
}

Breadcrumb Filtering and Limits

By default, Sentry keeps the most recent 100 breadcrumbs per event. Configure filtering to keep the most relevant ones:

// src/instrumentation/breadcrumb-filters.ts
import type { Breadcrumb } from '@sentry/browser';

/**
 * Breadcrumb filtering hook to pass to the main Sentry.init() call.
 *
 * Sentry should only be initialized once per runtime, so export the hook and
 * merge it into the init options (e.g. in initializeSentryBrowser above)
 * rather than calling Sentry.init() a second time.
 */
export function beforeBreadcrumb(breadcrumb: Breadcrumb): Breadcrumb | null {
  // Filter out noisy console breadcrumbs
  if (breadcrumb.category === 'console' && breadcrumb.level === 'log') {
    return null; // Drop debug logs
  }

  // Filter out health check requests
  if (breadcrumb.category === 'fetch' && breadcrumb.data?.url?.includes('/health')) {
    return null;
  }

  // Truncate large data payloads
  const serialized = breadcrumb.data ? JSON.stringify(breadcrumb.data) : '';
  if (serialized.length > 1000) {
    breadcrumb.data = {
      _truncated: true,
      _originalSize: serialized.length,
    };
  }

  return breadcrumb;
}

// Maximum breadcrumbs to keep per event (Sentry default: 100)
export const maxBreadcrumbs = 100;

// Merge into the existing initialization, for example:
// Sentry.init({ ...otherOptions, beforeBreadcrumb, maxBreadcrumbs });

Combine breadcrumbs with production monitoring strategies for comprehensive observability.

Release Tracking: Identifying Regressions Across Deployments

Release tracking connects errors to specific code versions, enabling teams to identify which deployment introduced a bug and calculate error rates per release.
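
For the release tag on events to line up with the uploaded source maps, the same SENTRY_RELEASE value used at upload time needs to reach the SDK at runtime. A sketch of one way to do that with the Vite setup from the previous section; the define key and fallback value are assumptions:

// vite.config.ts (excerpt) — exposes the release name to the widget bundle
import { defineConfig } from 'vite';

export default defineConfig({
  define: {
    // Readable in widget code as import.meta.env.VITE_SENTRY_RELEASE
    'import.meta.env.VITE_SENTRY_RELEASE': JSON.stringify(
      process.env.SENTRY_RELEASE ?? 'development'
    ),
  },
});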

Creating Releases with Git Integration

Sentry releases should match your deployment versioning scheme:

// scripts/create-sentry-release.ts
import { execSync } from 'child_process';
import * as Sentry from '@sentry/node';

/**
 * Create Sentry release with Git metadata
 *
 * USAGE:
 * SENTRY_AUTH_TOKEN=xxx ts-node scripts/create-sentry-release.ts
 */
async function createRelease(): Promise<void> {
  const authToken = process.env.SENTRY_AUTH_TOKEN;
  const org = 'makeaihq';
  const project = 'chatgpt-widget';

  if (!authToken) {
    throw new Error('SENTRY_AUTH_TOKEN is required');
  }

  // Generate release name from Git
  const gitCommit = execSync('git rev-parse HEAD').toString().trim();
  const packageVersion = process.env.npm_package_version || '0.0.0';

  const releaseName = `${project}@${packageVersion}+${gitCommit.substring(0, 7)}`;

  console.log('Creating Sentry release:', releaseName);

  // Create release via API
  const response = await fetch(`https://sentry.io/api/0/organizations/${org}/releases/`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${authToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      version: releaseName,
      projects: [project],
      refs: [
        {
          repository: 'makeaihq/chatgpt-widget',
          commit: gitCommit,
          previousCommit: getPreviousCommit(),
        },
      ],
    }),
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Failed to create release: ${error}`);
  }

  console.log('✅ Release created:', releaseName);

  // Set environment variable for deployment
  console.log(`\nSet this in your deployment:\nSENTRY_RELEASE=${releaseName}`);
}

function getPreviousCommit(): string | undefined {
  try {
    return execSync('git rev-parse HEAD~1').toString().trim();
  } catch {
    return undefined; // No previous commit (initial release)
  }
}

createRelease().catch(console.error);

Deploy Tracking for Release Health Monitoring

Track when releases are deployed to different environments:

// scripts/track-deployment.ts
import { execSync } from 'child_process';

/**
 * Track deployment to Sentry for release health monitoring
 *
 * USAGE:
 * SENTRY_AUTH_TOKEN=xxx SENTRY_RELEASE=xxx DEPLOY_ENV=production \
 * ts-node scripts/track-deployment.ts
 */
async function trackDeployment(): Promise<void> {
  const authToken = process.env.SENTRY_AUTH_TOKEN;
  const release = process.env.SENTRY_RELEASE;
  const environment = process.env.DEPLOY_ENV || 'production';
  const org = 'makeaihq';

  if (!authToken || !release) {
    throw new Error('SENTRY_AUTH_TOKEN and SENTRY_RELEASE are required');
  }

  console.log('Tracking deployment to Sentry');
  console.log('Release:', release);
  console.log('Environment:', environment);

  // Create deployment via API
  const response = await fetch(
    `https://sentry.io/api/0/organizations/${org}/releases/${encodeURIComponent(release)}/deploys/`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${authToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        environment,
        name: `Deploy ${release} to ${environment}`,
        url: process.env.DEPLOY_URL,
        dateStarted: new Date().toISOString(),
        dateFinished: new Date().toISOString(),
      }),
    }
  );

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Failed to track deployment: ${error}`);
  }

  console.log('✅ Deployment tracked');
}

trackDeployment().catch(console.error);

Regression Detection with Alert Rules

Configure Sentry alerts to notify teams when new errors appear in a release:

# .sentry/alert-rules.yaml
#
# Illustrative issue alert rules for the ChatGPT widget.
# Sentry does not consume this file directly: recreate these rules in the
# Sentry UI (Alerts → Create Alert Rule) or via the alert rules API.

rules:
  # Alert on new errors in release
  - name: "New Error in Release"
    conditions:
      - id: "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"
    filters:
      - id: "sentry.rules.filters.tagged_event.TaggedEventFilter"
        key: "release"
        match: "is"
    actions:
      - id: "sentry.rules.actions.notify_event_service.NotifyEventServiceAction"
        service: "slack"
        workspace: "makeaihq"
        channel: "#alerts-chatgpt"

  # Alert on error spike in release
  - name: "Error Spike in Release"
    conditions:
      - id: "sentry.rules.conditions.event_frequency.EventFrequencyCondition"
        value: 100
        interval: "1h"
    actions:
      - id: "sentry.rules.actions.notify_event_service.NotifyEventServiceAction"
        service: "slack"
        workspace: "makeaihq"
        channel: "#alerts-chatgpt"
      - id: "sentry.integrations.pagerduty.notify_action.PagerDutyNotifyServiceAction"
        account: "makeaihq"
        service: "chatgpt-widget-on-call"

  # Alert on regression (previously resolved error)
  - name: "Regression Detected"
    conditions:
      - id: "sentry.rules.conditions.regression_event.RegressionEventCondition"
    actions:
      - id: "sentry.rules.actions.notify_event_service.NotifyEventServiceAction"
        service: "slack"
        workspace: "makeaihq"
        channel: "#alerts-chatgpt"
      - id: "sentry.integrations.jira.notify_action.JiraCreateTicketAction"
        project: "CHATGPT"
        issuetype: "Bug"
        priority: "High"
        labels: "regression,sentry-auto"

  # Alert on high error rate
  - name: "High Error Rate"
    conditions:
      - id: "sentry.rules.conditions.event_frequency.EventFrequencyPercentCondition"
        value: 5.0  # 5% of sessions
        interval: "1h"
    filters:
      - id: "sentry.rules.filters.tagged_event.TaggedEventFilter"
        key: "environment"
        match: "eq"
        value: "production"
    actions:
      - id: "sentry.integrations.pagerduty.notify_action.PagerDutyNotifyServiceAction"
        account: "makeaihq"
        service: "chatgpt-widget-on-call"

Integrate release tracking with your CI/CD pipeline for ChatGPT apps to automate deployment monitoring.

Performance Monitoring: Tracking Slow Operations

Beyond error tracking, Sentry's performance monitoring identifies slow API calls, database queries, and rendering bottlenecks that degrade user experience.
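
Sampling does not have to be a flat rate: the SDK also accepts a tracesSampler callback that can prioritize the transactions that matter most. A sketch, assuming the v7 SDK and the mcp.tool operation used below; in this codebase the option would be merged into the initializeSentryNode configuration:

// Example tracesSampler: favor MCP tool transactions, drop health checks
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // tracesSampler takes precedence over tracesSampleRate when both are set
  tracesSampler: (samplingContext) => {
    const { name, op } = samplingContext.transactionContext;

    if (name.includes('/health')) {
      return 0; // never trace health checks
    }
    if (op === 'mcp.tool') {
      return 0.3; // sample tool calls more aggressively
    }
    return 0.05; // everything else
  },
});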

Transaction Tracing for MCP Tool Calls

Track performance of individual MCP tool invocations:

// src/instrumentation/performance.ts
import * as Sentry from '@sentry/node';

/**
 * Track MCP tool performance with Sentry transactions
 */
export async function executeMCPToolWithTracing<T>(
  toolName: string,
  params: Record<string, unknown>,
  executor: () => Promise<T>
): Promise<T> {
  // Start transaction
  const transaction = Sentry.startTransaction({
    op: 'mcp.tool',
    name: `MCP Tool: ${toolName}`,
    tags: {
      tool: toolName,
    },
    data: {
      params,
    },
  });

  try {
    // Create span for tool execution
    const span = transaction.startChild({
      op: 'mcp.execute',
      description: `Execute ${toolName}`,
    });

    // Execute tool
    const result = await executor();

    // Record success
    span.setStatus('ok');
    span.finish();

    return result;
  } catch (error) {
    // Record failure
    transaction.setStatus('internal_error');
    Sentry.captureException(error, {
      contexts: {
        tool: {
          name: toolName,
          params,
        },
      },
    });

    throw error;
  } finally {
    // Finish transaction
    transaction.finish();
  }
}

// Usage example (Product is a minimal illustrative shape)
interface Product {
  id: string;
  name: string;
  price: number;
}

export async function searchProducts(query: string): Promise<Product[]> {
  return executeMCPToolWithTracing(
    'search_products',
    { query },
    async () => {
      // Actual implementation
      const response = await fetch(`/api/products/search?q=${encodeURIComponent(query)}`);
      return response.json();
    }
  );
}

Custom Instrumentation for Widget Rendering

Track frontend performance metrics:

// src/instrumentation/widget-performance.tsx
import { useEffect, useRef } from 'react';
import * as Sentry from '@sentry/browser';

/**
 * Track widget rendering performance
 */
export function trackWidgetRender(componentName: string): () => void {
  const transaction = Sentry.startTransaction({
    op: 'widget.render',
    name: `Render ${componentName}`,
  });

  // Return cleanup function
  return () => {
    transaction.finish();
  };
}

// Usage in a React component
export function ProductListWidget(): JSX.Element {
  // Start the render transaction once, on the first render
  const finishRender = useRef<(() => void) | null>(null);
  if (!finishRender.current) {
    finishRender.current = trackWidgetRender('ProductListWidget');
  }

  // Finish the transaction after the first commit (mount)
  useEffect(() => {
    finishRender.current?.();
  }, []);

  return <div>...</div>;
}

Learn more about logging best practices and alerting strategies to build comprehensive monitoring.

Conclusion: Building Production-Grade Error Tracking

Sentry transforms error tracking from reactive debugging to proactive quality management. By implementing comprehensive SDK integration, source map uploads, custom breadcrumbs, release tracking, and performance monitoring, ChatGPT app developers gain the visibility needed to maintain high reliability at scale.

The integration examples in this guide provide production-ready starting points for TypeScript-based ChatGPT applications. Remember to filter sensitive data, configure alert rules for critical issues, and regularly review error trends to identify systemic problems.

Ready to Build Production ChatGPT Apps?

MakeAIHQ provides the complete infrastructure for building, deploying, and monitoring ChatGPT apps—including pre-configured Sentry integration, automated deployment pipelines, and built-in error handling patterns.

Start building: https://makeaihq.com/signup

Explore monitoring tools: https://makeaihq.com/features/monitoring

Read the docs: Complete Guide to Building ChatGPT Applications


Last updated: December 2026