AI Checker Hub

Provider Coverage

This page lists the providers currently tracked in the public dashboard. Coverage will expand over time with additional endpoints and incident annotations.

Currently Monitored

OpenAI
General endpoint reachability and latency sampling.
Anthropic
General endpoint reachability and latency sampling.
Google AI
General endpoint reachability and latency sampling.
Mistral AI
General endpoint reachability and latency sampling.
Cohere
General endpoint reachability and latency sampling.
Perplexity
General endpoint reachability and latency sampling.

Scope Clarification

How To Read Provider Status Correctly

Provider-level status is a broad health signal, not a guarantee that all models and endpoints are performing equally. During high-load periods, one route may be stable while another experiences elevated latency or throttling. Treat this page as an entry point, then validate against endpoint-specific pages and your own logs.

For production use, combine three views: provider status, endpoint latency distribution, and application-level error rates. This minimizes both overreaction to small spikes and delayed reaction to meaningful degradation.
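The three-view check above can be sketched in code. This is a minimal illustration, not part of the dashboard itself: the function name, thresholds, and sample data are all assumptions you would replace with your own poller output and telemetry.

```python
# Sketch: combining three health signals (provider status, endpoint latency
# distribution, application error rate) before acting on a status change.
import statistics

def health_verdict(provider_up: bool, latencies_ms: list[float],
                   errors: int, requests: int,
                   p95_budget_ms: float = 2000.0,
                   error_budget: float = 0.02) -> str:
    """Return a coarse verdict; thresholds here are illustrative defaults."""
    p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile
    error_rate = errors / requests if requests else 0.0
    if not provider_up:
        return "degraded: provider status down"
    if p95 > p95_budget_ms or error_rate > error_budget:
        return "degraded: local telemetry breach"
    return "healthy"
```

Requiring agreement between the public signal and local metrics is what damps both overreaction to small spikes and delayed reaction to real degradation.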

Coverage Depth by Provider

Current public coverage prioritizes consistency and comparability: each provider receives the same scheduled synthetic checks for reachability and latency sampling. OpenAI has the deepest public coverage in this release, including endpoint-level tables, region views, and incident summarization.
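A scheduled synthetic check of this kind reduces to one timed request per endpoint. The sketch below shows the idea under stated assumptions: the URL is a placeholder, and the 500-status cutoff for "reachable" is an illustrative choice, not the dashboard's actual rule.

```python
# Sketch of a single synthetic check: time one HTTPS request and record
# whether the endpoint answered. The probe URL is a placeholder.
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """Return reachability and latency for a single endpoint."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except OSError:  # covers URLError, timeouts, connection errors
        status = None
    latency_ms = (time.monotonic() - start) * 1000
    return {"url": url,
            "reachable": status is not None and status < 500,
            "latency_ms": round(latency_ms, 1)}
```

In practice such probes run on a fixed schedule from multiple regions, and the resulting samples feed the latency distributions shown per endpoint.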

Operational Guidance by Use Case

Different workloads call for different latency and error thresholds. Chat applications usually prioritize latency and user responsiveness, while offline batch pipelines can tolerate slower responses in exchange for cost efficiency.
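One way to make that distinction operational is a per-workload threshold table. The values below are example assumptions, not recommendations; tune them against your own telemetry.

```python
# Illustrative alerting budgets per workload type (values are examples).
THRESHOLDS = {
    "interactive_chat": {"p95_latency_ms": 1500,  "max_error_rate": 0.01},
    "batch_pipeline":   {"p95_latency_ms": 30000, "max_error_rate": 0.05},
}

def breached(workload: str, p95_latency_ms: float, error_rate: float) -> bool:
    """True when a workload's sampled metrics exceed its configured budget."""
    t = THRESHOLDS[workload]
    return (p95_latency_ms > t["p95_latency_ms"]
            or error_rate > t["max_error_rate"])
```

The same latency sample can then breach the chat budget while remaining well inside the batch budget, which is exactly the behavior the guidance above calls for.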

What We Are Adding Next

Coverage will expand in a way that remains useful for operators: deeper endpoint maps, historical comparisons, and clearer incident timelines tied to observable metrics rather than generic status messages.

If your team wants a specific provider or endpoint prioritized, send a request through the contact page with your workload type, critical routes, and preferred region.

Verification Best Practice

Before declaring a provider outage internally, confirm with at least two independent signals: a public monitor like this page and your own application telemetry. This prevents unnecessary failovers when the issue is caused by local credentials, quota limits, or a single upstream network path.
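The two-signal rule can be encoded as a simple gate. This is a hedged sketch: the function name and the 10% error threshold are assumptions, and both inputs would come from your own monitoring.

```python
# Sketch of a two-signal gate before declaring an outage internally.
# Neither signal alone should trigger a failover.
def confirmed_outage(public_monitor_down: bool,
                     app_error_rate: float,
                     error_threshold: float = 0.10) -> bool:
    """Require agreement between an external monitor and local telemetry."""
    local_signal = app_error_rate > error_threshold
    return public_monitor_down and local_signal
```

If only the local signal fires, the likely causes are the ones named above: bad credentials, quota limits, or a single upstream network path, none of which justify a provider-level failover.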