Oracle Database 19c has been the workhorse of enterprise Oracle deployments for the last five years. It’s been the long-term release — the one everyone migrated to from 12c, the one that proved stable enough for even the most conservative production environments. But Premier Support for 19c ended in April 2024. Extended Support runs through April 2027. After that, you’re on Sustaining Support — which means no new patches, no new security fixes, just access to existing fixes.
The clock is ticking. And the question every Oracle DBA needs to be seriously considering right now is: when do we move to 23ai, and how?
Understanding the Oracle Support Lifecycle — The Details That Matter
People often conflate “end of support” with “the database stops working.” It doesn’t. Your 19c database will run fine after April 2027. But here’s what you lose:
After Premier Support ends (April 2024): No new Release Updates (RUs) unless you pay for Extended Support. You can still download existing patches, but without an Extended Support contract Oracle won’t create new ones for 19c vulnerabilities discovered after this date.
After Extended Support ends (April 2027): No new patches of any kind. Known security vulnerabilities discovered after this date will not be patched for 19c. You’re running a database with unpatched CVEs.
Sustaining Support (after April 2027): You get access to the My Oracle Support knowledge base and existing patches. You can log SRs. But Oracle’s resolution for any new issue will typically be “upgrade to a supported release.”
For organizations in regulated industries — banking, healthcare, insurance, government — running unsupported software with unpatched security vulnerabilities creates compliance risk that auditors will flag. This isn’t theoretical.
Why 23ai, Not 21c?
Oracle 21c was an innovation release — it introduced many of the features that landed in 23ai, but it was never intended as a long-term platform. Premier Support for 21c ended in April 2024 as well. Migrating from 19c to 21c would be a short-term move that solves nothing.
Oracle 23ai (formerly 23c) is the next long-term release. Premier Support runs until April 2028, Extended Support through April 2031. This is the target platform.
The feature gap between 19c and 23ai is significant. In addition to AI Vector Search (which I covered in Post 1), 23ai introduces:
SQL Domains — Define reusable domain objects that attach validation constraints, display transformations, and ordering to column types. Reduces duplicated constraint logic across tables.
```sql
CREATE DOMAIN email_domain AS VARCHAR2(255)
  CONSTRAINT email_chk CHECK (VALUE LIKE '%@%.%')
  DISPLAY UPPER(VALUE);

CREATE TABLE customers (
  id    NUMBER PRIMARY KEY,
  email email_domain
);
```
JSON Relational Duality Views — A single Oracle object that exposes data as both a relational table and a JSON document. Your Java microservice can read and write JSON; your SQL reports read relational data. Same underlying storage, no synchronization needed.
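As a minimal sketch of the SQL-syntax form (the table, column names, and view name are hypothetical), a duality view over a customers table could look like:

```sql
-- Hypothetical example: expose rows in CUSTOMERS as updatable JSON documents.
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW customer_dv AS
SELECT JSON {
         '_id'   : c.id,
         'email' : c.email
       }
FROM customers c WITH INSERT UPDATE DELETE;
```

The WITH INSERT UPDATE DELETE clause is what makes the JSON side writable; without it the documents are read-only projections of the table.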
True Cache — Discussed separately, but worth noting here as a 23ai-specific feature.
Property Graph Queries (SQL/PGQ) — Standard SQL extension for graph queries. You can define a graph over existing relational tables and query it with graph traversal syntax, without a separate graph database.
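A sketch of the idea, with a hypothetical self-referencing EMPLOYEES table (emp_id, mgr_id): define the graph once, then query it with MATCH syntax inside GRAPH_TABLE.

```sql
-- Hypothetical example: a reporting-line graph over one relational table.
CREATE PROPERTY GRAPH org_graph
  VERTEX TABLES (employees KEY (emp_id))
  EDGE TABLES (
    employees AS reports_to
      KEY (emp_id)
      SOURCE KEY (emp_id) REFERENCES employees (emp_id)
      DESTINATION KEY (mgr_id) REFERENCES employees (emp_id)
  );

-- Who reports to whom, via graph pattern matching
SELECT *
FROM GRAPH_TABLE (org_graph
  MATCH (e IS employees) -[IS reports_to]-> (m IS employees)
  COLUMNS (e.emp_id AS employee, m.emp_id AS manager)
);
```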
Annotations — Metadata attributes on database objects. Document your schema intent in the catalog, not in an external spreadsheet.
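A short sketch (names and annotation keys are made up for illustration) of attaching annotations and reading them back from the dictionary:

```sql
-- Hypothetical example: record intent on a table and a column.
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  amount   NUMBER ANNOTATIONS (Sensitivity 'internal')
) ANNOTATIONS (Owner 'billing-team', Purpose 'order headers');

-- Annotations are queryable from the data dictionary
SELECT object_name, column_name, annotation_name, annotation_value
FROM   user_annotations_usage;
```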
Migration Path: In-Place Upgrade vs. Full Migration
There are two primary paths from 19c to 23ai.
In-Place Upgrade with DBUA or AutoUpgrade
AutoUpgrade is Oracle’s recommended tool for database upgrades. It handles pre-checks, the upgrade itself, and post-upgrade steps. For 19c to 23ai on the same server:
```bash
# Download AutoUpgrade from MOS, then:
java -jar autoupgrade.jar -config upgrade.cfg -mode analyze

# Review the generated report, then:
java -jar autoupgrade.jar -config upgrade.cfg -mode deploy
```
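The upgrade.cfg referenced above is a plain key-value file. A minimal sketch (the SID, paths, and job prefix are placeholders for your environment):

```
# Hypothetical AutoUpgrade config — adjust SID and Oracle homes to your install
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.sid=ORCL
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
```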
The analyze mode runs hundreds of pre-upgrade checks without modifying anything. Fix everything it flags before running deploy.
Key pre-checks to expect failures on:
- Deprecated initialization parameters (several 19c parameters are desupported in 23ai)
- Tablespace requirements (SYSAUX and SYSTEM must have adequate free space)
- Invalid objects (run utlrp.sql before the upgrade)
- Time zone file version mismatch
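For the invalid-objects check, the usual pattern is to recompile everything and then confirm nothing is left over:

```sql
-- Recompile all invalid objects (script ships in $ORACLE_HOME/rdbms/admin)
@?/rdbms/admin/utlrp.sql

-- Anything still listed here needs investigation before the upgrade
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```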
Export/Import Migration
For major architectural changes (non-CDB to CDB migration, platform change, charset change), Data Pump export/import is often cleaner than in-place upgrade. More downtime, more control.
My strong recommendation: migrate to CDB/PDB architecture if you haven’t already. As of 23ai, non-CDB architecture is desupported. Every 23ai database is a CDB. If you’re still on non-CDB 19c, your upgrade process includes a mandatory CDB conversion step.
The Non-CDB to CDB Migration — Don’t Underestimate It
This is where organizations with legacy 19c non-CDB databases hit a wall. Converting a non-CDB to a PDB inside a CDB is a well-documented process, but it is not trivial, and it requires downtime.
The process:
- Unplug the non-CDB as a manifest XML file
- Describe the non-CDB as a PDB candidate (DBMS_PDB.DESCRIBE)
- Plug it into an existing or new CDB (CREATE PLUGGABLE DATABASE)
- Run noncdb_to_pdb.sql to adjust catalog objects
- Open the PDB and validate
```sql
-- Step 1: On the source non-CDB (read-only)
SHUTDOWN IMMEDIATE;
STARTUP OPEN READ ONLY;

-- Step 2: Generate manifest
BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/my_db.xml');
END;
/

-- Step 3: Check compatibility on target CDB
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN := FALSE;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/tmp/my_db.xml');
  IF compatible THEN
    DBMS_OUTPUT.PUT_LINE('Compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Not Compatible - check PDB_PLUG_IN_VIOLATIONS');
  END IF;
END;
/

-- Step 4: Plug in
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/my_db.xml'
  COPY FILE_NAME_CONVERT = ('/source/path/', '/target/path/');
```
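The last two steps of the process, sketched with the same hypothetical PDB name, finish the conversion and surface any remaining issues:

```sql
-- Step 5: Run the conversion script inside the new PDB
ALTER SESSION SET CONTAINER = mypdb;
@?/rdbms/admin/noncdb_to_pdb.sql

-- Step 6: Open the PDB and validate
ALTER PLUGGABLE DATABASE mypdb OPEN;
SELECT name, cause, message
FROM   pdb_plug_in_violations
WHERE  status <> 'RESOLVED';
```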
I’ve seen this process take anywhere from 30 minutes to 6 hours depending on database size and whether COPY or NOCOPY mode is used. Test it in a non-production environment before touching production.
Build Your Upgrade Lab Now
The worst time to discover an upgrade problem is during your production maintenance window. The right approach:
- Clone your production database to a test environment (RMAN duplicate or PDB clone)
- Run AutoUpgrade in analyze mode against the clone — fix everything it reports
- Run the full upgrade on the clone — time it, document it
- Run your application regression suite against the upgraded clone
- Check optimizer behavior — 23ai’s optimizer has been updated, some query plans will change
- Run SQL Performance Analyzer to compare execution plans before and after
```sql
-- SQL Performance Analyzer workflow
-- Capture current workload
VARIABLE t_name VARCHAR2(30)
EXEC :t_name := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sqlset');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:t_name, 'CONVERT SQLSET', 'BEFORE_CHANGE');

-- After upgrade, compare
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:t_name, 'TEST EXECUTE', 'AFTER_CHANGE');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:t_name, 'COMPARE PERFORMANCE');
```
The queries that regress after upgrade are your risk items. Address them before go-live, not after.
My Timeline Recommendation
If you’re on 19c today:
- Now through end of 2025: Build and test your upgrade path in a lab environment
- 2025: Upgrade non-production environments to 23ai
- 2026: Upgrade production in waves, starting with less critical systems
- Before April 2027: All production databases on 23ai or a newer supported release
Don’t wait until 2027 to start. The organizations that plan upgrades reactively are the ones that rush through testing, miss issues, and have bad production go-lives.