Provider Coverage
This page lists the providers currently tracked in the public dashboard. Coverage will expand over time with additional endpoints and incident annotations.
Currently Monitored
- OpenAI: general endpoint reachability and latency sampling.
- Anthropic: general endpoint reachability and latency sampling.
- Gemini: general endpoint reachability and latency sampling.
- Mistral: general endpoint reachability and latency sampling.
- Cohere: general endpoint reachability and latency sampling.
- Perplexity: general endpoint reachability and latency sampling.
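As an illustration of what a reachability-and-latency check involves, here is a minimal synthetic probe in Python. The target URL is a placeholder and the structure is a sketch, not the dashboard's actual implementation; an unauthenticated probe still measures reachability and round-trip latency.

```python
import time
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """One synthetic check: is the endpoint reachable, and how long did it take?"""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # reachable, but an HTTP error (e.g. 401 without credentials)
    except urllib.error.URLError as exc:
        return {"reachable": False, "latency_ms": None, "error": str(exc.reason)}
    latency_ms = round((time.perf_counter() - start) * 1000, 1)
    return {"reachable": True, "status": status, "latency_ms": latency_ms}

# Placeholder target; substitute the route your workload actually depends on.
print(probe("https://api.openai.com/v1/models"))
```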
Scope Clarification
- Provider status does not imply every model endpoint is affected identically.
- Regional edge behavior can differ from your own workload region.
- Use this as a baseline signal alongside your own telemetry.
How To Read Provider Status Correctly
Provider-level status is a broad health signal, not a guarantee that all models and endpoints are performing equally. During high-load periods, one route may be stable while another experiences elevated latency or throttling. Treat this page as an entry point, then validate against endpoint-specific pages and your own logs.
For production use, combine three views: provider status, endpoint latency distribution, and application-level error rates. This minimizes both overreaction to small spikes and delayed reaction to meaningful degradation.
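One way to mechanize the three-view rule is a small decision helper like the sketch below. The thresholds and field names are illustrative assumptions, not values this dashboard publishes; tune them to your own SLOs.

```python
def assess_health(provider_ok: bool, p95_latency_ms: float, app_error_rate: float,
                  latency_slo_ms: float = 2000.0, error_budget: float = 0.02) -> str:
    """Combine provider status, endpoint latency, and app-level errors into one verdict."""
    signals_bad = sum([
        not provider_ok,
        p95_latency_ms > latency_slo_ms,
        app_error_rate > error_budget,
    ])
    if signals_bad == 0:
        return "healthy"
    if signals_bad == 1:
        return "watch"      # one noisy signal alone should not trigger failover
    return "degraded"       # two or more independent signals agree

# Example: provider page is green, but our own latency and errors are elevated.
print(assess_health(provider_ok=True, p95_latency_ms=3500.0, app_error_rate=0.05))  # degraded
```

Requiring agreement between at least two views is what prevents both overreaction to a single spike and delayed reaction to real degradation.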
Coverage Depth by Provider
Current public coverage prioritizes consistency and comparability: every provider gets scheduled synthetic checks for reachability and latency sampling. OpenAI has the deepest public coverage in this release, including endpoint-level tables, region views, and incident summarization.
- OpenAI: provider signal, endpoint rollups, region status, incident feed.
- Other providers: provider signal with rolling uptime and latency baselines.
- Roadmap: endpoint-level expansion for Anthropic, Gemini, Mistral, Cohere, and Perplexity.
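If you consume this coverage programmatically, it can help to encode the tiers explicitly. The structure below is a hypothetical representation of the list above, not a published schema.

```python
# Hypothetical encoding of the coverage tiers described above; not a published schema.
COVERAGE = {
    "openai":     {"provider_signal", "endpoint_rollups", "region_status", "incident_feed"},
    "anthropic":  {"provider_signal"},
    "gemini":     {"provider_signal"},
    "mistral":    {"provider_signal"},
    "cohere":     {"provider_signal"},
    "perplexity": {"provider_signal"},
}

def supports(provider: str, feature: str) -> bool:
    """Check whether a coverage feature exists before relying on it in tooling."""
    return feature in COVERAGE.get(provider, set())

print(supports("openai", "region_status"))   # True
print(supports("mistral", "region_status"))  # False
```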
Operational Guidance by Use Case
Different workloads need different response thresholds. Chat applications usually prioritize latency and user responsiveness, while offline pipelines can tolerate slower responses in exchange for cost efficiency.
- Interactive chat: watch p95 and timeout trends; fail over sooner.
- Batch processing: watch sustained error ratios; retry with jitter and queue controls (see the sketch after this list).
- Enterprise workflows: use multi-provider policy with explicit rollback criteria.
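For the batch case, a standard pattern is exponential backoff with full jitter. This sketch is generic: the request callable is supplied by the caller, and it is not tied to any particular provider SDK.

```python
import random
import time

def retry_with_jitter(call, max_attempts: int = 5, base_s: float = 0.5, cap_s: float = 30.0):
    """Exponential backoff with full jitter: sleep uniform in [0, min(cap, base * 2^n)]."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            # In practice, retry only transient failures (timeouts, 429s, 5xx).
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface to queue controls
            backoff = min(cap_s, base_s * (2 ** attempt))
            time.sleep(random.uniform(0, backoff))  # full jitter avoids synchronized retries

# Usage with a hypothetical batch request function:
# result = retry_with_jitter(lambda: submit_batch_chunk(chunk))
```

Full jitter matters under provider-wide throttling: without it, many clients retry in lockstep and re-create the load spike they are backing off from.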
What We Are Adding Next
Coverage will expand in a way that remains useful for operators: deeper endpoint maps, historical comparisons, and clearer incident timelines tied to observable metrics rather than generic status messages.
If your team wants a specific provider or endpoint prioritized, send a request through the contact page with your workload type, critical routes, and preferred region.
Verification Best Practice
Before declaring a provider outage internally, confirm with at least two independent signals: a public monitor like this page and your own application telemetry. This prevents unnecessary failovers when the issue is caused by local credentials, quota limits, or a single upstream network path.
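A simple guard that encodes this rule: declare an outage only when an external monitor and your own telemetry agree. The inputs below are placeholders for your own integrations; this is a sketch of the decision logic, not a complete detector.

```python
def confirmed_outage(external_monitor_red: bool, internal_error_rate: float,
                     internal_threshold: float = 0.10) -> bool:
    """Declare an outage only when two independent signals agree.

    external_monitor_red: status from a public monitor such as this page (placeholder input).
    internal_error_rate: error ratio from your own application telemetry.
    """
    internal_red = internal_error_rate > internal_threshold
    # If only the internal signal is red, suspect local causes first:
    # credentials, quota limits, or a single upstream network path.
    return external_monitor_red and internal_red

print(confirmed_outage(external_monitor_red=True, internal_error_rate=0.25))   # True
print(confirmed_outage(external_monitor_red=False, internal_error_rate=0.25))  # False: check locally first
```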