GPT-4 Preview Shutdown in March 2026: What to Migrate Now
A practical March 2026 guide to the OpenAI shutdown schedule for older GPT-4 preview model lines and what teams should replace immediately.
OpenAI's deprecations page now lists a March 26, 2026 shutdown date for older preview-era GPT-4 lines, including `gpt-4-0125-preview`, `gpt-4-1106-preview`, and related preview variants. This is exactly the kind of lifecycle change that many teams overlook because the model still appears to work right now. Once the deadline hits, though, the issue stops being theoretical and becomes a hard failure for any route still pinned to those names.
Preview models are especially dangerous in mature products because they often survive inside legacy services, low-volume internal tooling, or forgotten background jobs. The traffic may be small, but if the workflow is important, the outage still matters.
It is common to find preview models in places that never got revisited after an early launch. Teams prototype fast, pick a model that works, and move on. Later, the main product migrates, but batch jobs, evaluation tools, dashboards, or administrative utilities remain on the older model ID. Those systems rarely get the same lifecycle attention as customer-facing code.
That is why the right first step is not choosing a replacement. It is discovering every place the old model names still exist: code, config, tests, prompts stored in databases, orchestration services, and even documentation used by operators.
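The discovery step can be partially automated. Below is a minimal sketch of a repository scanner; the deprecated IDs come from the shutdown list above, but the file extensions and scan roots are assumptions you should adapt to your own estate (and remember that prompts stored in databases will not show up in a file scan).

```python
# Sketch: scan a directory tree for deprecated model IDs.
# SCAN_SUFFIXES is an assumption — extend it to match your codebase.
from pathlib import Path

DEPRECATED = {"gpt-4-0125-preview", "gpt-4-1106-preview"}
SCAN_SUFFIXES = {".py", ".ts", ".json", ".yaml", ".yml", ".toml", ".md", ".sql"}

def find_deprecated_refs(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, model_id) for every stale reference."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SCAN_SUFFIXES:
            continue
        try:
            lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file; log it in a real implementation
        for lineno, line in enumerate(lines, start=1):
            for model in DEPRECATED:
                if model in line:
                    hits.append((str(path), lineno, model))
    return hits
```

Run this against every repository, not just the main product, since the forgotten background jobs mentioned above are exactly what it is meant to catch.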
Replacement should be based on workload requirements, not on sentimental attachment to the old preview line. Some routes need strong reasoning, some need low latency, and some need stable structured output. Review what the route actually needs, then map it to the current supported model lineup. That is the only defensible way to migrate.
Avoid one blanket replacement decision for the entire estate. Production systems often have very different route classes, and forcing them all onto one model can create unnecessary cost or performance tradeoffs.
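One way to keep the decision per-route rather than blanket is to make the mapping explicit. The sketch below is illustrative: the route names are hypothetical, and the replacement IDs are placeholders you should fill in from the provider's current supported model list rather than copy literally.

```python
# Sketch: a per-route migration plan instead of one blanket swap.
# Route names and replacement IDs are placeholders, not real recommendations.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoutePlan:
    old_model: str
    replacement: str   # fill in from the provider's live supported-model list
    requirement: str   # what the route actually needs

MIGRATION_PLAN = {
    "support-summarizer": RoutePlan("gpt-4-1106-preview", "<reasoning-tier-model>", "strong reasoning"),
    "autocomplete":       RoutePlan("gpt-4-0125-preview", "<low-latency-model>", "low latency"),
    "invoice-extractor":  RoutePlan("gpt-4-1106-preview", "<structured-output-model>", "stable structured output"),
}

def replacement_for(route: str) -> str:
    return MIGRATION_PLAN[route].replacement
```

The point of the table is that three routes with the same old model can legitimately map to three different replacements, because the requirement column, not the old model ID, drives the choice.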
Model lifecycle migrations often focus too narrowly on answer quality. Quality matters, but it is not the only production variable. Also test latency distribution, output-format stability, token consumption, tool-use behavior if applicable, and downstream parser reliability. Preview-line shutdowns are a good moment to check whether your route contract is too brittle overall.
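A minimal acceptance harness for those non-quality variables might look like the sketch below. `call_model` is a stand-in for your real client call, and the p95 budget is an arbitrary example threshold, not a recommendation.

```python
# Sketch: route acceptance checks beyond answer quality.
# `call_model` is a placeholder — swap in the real API call for the
# candidate replacement model before using this.
import json
import statistics
import time

def call_model(prompt: str) -> str:
    # Stand-in for the real client call; returns a JSON string here so the
    # parser-reliability check below has something to exercise.
    return json.dumps({"answer": "ok"})

def check_route(prompts: list[str]) -> dict:
    latencies, parse_failures = [], 0
    for prompt in prompts:
        start = time.perf_counter()
        raw = call_model(prompt)
        latencies.append(time.perf_counter() - start)
        try:
            json.loads(raw)  # does the downstream parser still succeed?
        except json.JSONDecodeError:
            parse_failures += 1
    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    return {"p95_seconds": p95, "parse_failures": parse_failures}
```

Measuring the latency distribution rather than the mean matters here: a replacement model can match average latency while failing badly at the tail.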
If the older model was masking weak schema handling or overfitted prompt logic, the migration may surface that fragility. Treat that as useful signal. Hardening the route now is better than carrying the weakness into the next lifecycle event.
Use route-level migration with observable checkpoints. For each route, define the old model, the replacement, the acceptance criteria, and the rollback rule. Then record the migration result in one place so on-call engineers and support teams know exactly which routes remain exposed to the shutdown date.
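Those four items per route can live in one shared record. The field names below are illustrative; align them with whatever incident tooling your on-call engineers already use.

```python
# Sketch: one checkpoint record per route, so on-call and support can see
# at a glance which routes remain exposed to the shutdown date.
from dataclasses import dataclass

@dataclass
class MigrationCheckpoint:
    route: str
    old_model: str
    replacement: str
    acceptance: str     # e.g. "p95 < 2s and zero parse failures on eval set"
    rollback_rule: str  # e.g. "revert pin if error rate > 1% for 30 minutes"
    migrated: bool = False

def remaining_exposure(checkpoints: list[MigrationCheckpoint]) -> list[str]:
    """Routes still pinned to a shutdown-bound model."""
    return [c.route for c in checkpoints if not c.migrated]
```

The value of `remaining_exposure` is that it answers the only question that matters as the deadline approaches: which routes will hard-fail on March 26 if nothing else changes.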
Also update runbooks and dashboards to search for outdated model IDs. Future lifecycle events will be easier if you already have one view that highlights deprecated model usage across the estate.
The deeper lesson is that preview models should always be treated as temporary dependencies. They can unlock progress, but they need lifecycle ownership from day one. That means abstraction, replacement planning, and clear documentation of where they are used.
Teams that normalize this discipline end up migrating faster and with less fear. Teams that treat preview models as permanent because the product launched successfully are the ones repeatedly surprised by shutdown calendars.
As of March 12, 2026, the March 26 shutdown window for older GPT-4 preview lines is close enough that discovery and replacement should already be underway or finished. This is immediate housekeeping with real production consequences.
Find every outdated model reference, replace based on actual workload needs, and treat preview lifecycle management as part of normal AI platform operations.
After you remove the old GPT-4 preview references, add one simple governance rule: every preview model must have an owner, a review date, and a replacement plan. That sounds administrative, but it is actually an operational safeguard. Without those three items, preview dependencies tend to stay invisible until the next shutdown calendar creates pressure.
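The three-item rule is easy to enforce mechanically once preview usage is tracked in a registry. The sketch below assumes a simple in-repo registry; the example entry and team name are hypothetical.

```python
# Sketch: enforce the governance rule that every preview model has an
# owner, a review date, and a replacement plan. Registry contents are
# hypothetical examples, not real assignments.
from datetime import date

PREVIEW_REGISTRY = {
    "gpt-4-0125-preview": {
        "owner": "platform-team",            # hypothetical owner
        "review_date": date(2026, 3, 1),
        "replacement_plan": "migrate per route-level plan before shutdown",
    },
}

REQUIRED_FIELDS = ("owner", "review_date", "replacement_plan")

def governance_violations(registry: dict) -> list[str]:
    """Model IDs missing any of the three required governance items."""
    return [model for model, meta in registry.items()
            if any(not meta.get(field) for field in REQUIRED_FIELDS)]
```

A check like this turns the administrative rule into something a periodic scan or CI job can actually fail on, which is what keeps preview dependencies visible between shutdown calendars.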
You should also add automated detection in CI or periodic scans to flag deprecated model names. Lifecycle management is much easier when your tooling catches stale references before customers do.
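In CI, the check can be as simple as an assertion over tracked configuration text, so a stale reference fails the build instead of waiting for the shutdown date. This is a minimal sketch; wire it into whatever test runner your pipeline already uses.

```python
# Sketch: a CI gate that fails the build when a deprecated model ID
# appears in tracked configuration. The DEPRECATED tuple matches the
# preview lines named in this article.
DEPRECATED = ("gpt-4-0125-preview", "gpt-4-1106-preview")

def assert_no_deprecated_models(config_text: str) -> None:
    stale = [model for model in DEPRECATED if model in config_text]
    if stale:
        raise AssertionError(f"Deprecated model IDs found in config: {stale}")
```

Running this over every config file on every commit is cheap, and it is exactly the kind of tooling that catches stale references before customers do.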
Stakeholders do not need a long explanation of model lifecycle policy. They need a concrete statement: older preview model lines shut down on March 26, 2026, the affected routes have been identified, and each route has a replacement and validation owner. That level of clarity prevents escalation noise and keeps the work focused.
Good communication also helps support teams know whether an issue is true provider instability or simply a route that missed the lifecycle update. That distinction saves time during incident response.
This article is based on current official provider documentation and release material available as of March 12, 2026, then translated into operational guidance for engineering teams.