Oracle’s Multitenant architecture — Container Database (CDB) with Pluggable Databases (PDBs) — has been around since 12c. It’s no longer new, yet I still encounter the same management mistakes in production environments year after year. Some are minor inefficiencies. Some have caused real outages. This post covers the patterns I see most frequently and what to do instead.
Mistake 1: Opening PDBs Manually After Every CDB Restart
This one is surprisingly common, even in large enterprises. The DBA restarts the CDB, then manually opens each PDB with ALTER PLUGGABLE DATABASE ALL OPEN. They either have a startup script that does this, or — more dangerously — they do it manually and sometimes forget one.
The correct solution is to save PDB state:
```sql
-- Save open state so PDBs open automatically when CDB starts
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL SAVE STATE;

-- Or for a specific PDB
ALTER PLUGGABLE DATABASE pdb_prod OPEN;
ALTER PLUGGABLE DATABASE pdb_prod SAVE STATE;

-- Verify saved states
SELECT con_name, state FROM dba_pdb_saved_states;
```
With saved state, every time the CDB starts, Oracle automatically opens PDBs to their last saved state. No manual intervention. No forgetting a PDB.
I’ve seen production outages where an application was down for 45 minutes after a server reboot because the DBA who normally handles restarts was on vacation and nobody else knew PDBs needed to be manually opened. SAVE STATE eliminates this class of problem entirely.
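The inverse is occasionally useful too: if a PDB should stay closed across restarts (say, ahead of planned maintenance), discard its saved state rather than relying on someone remembering to close it again. A quick sketch, using the pdb_prod name from above:

```sql
-- Stop pdb_prod from opening automatically at the next CDB startup
ALTER PLUGGABLE DATABASE pdb_prod DISCARD STATE;
```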
Mistake 2: Ignoring CDB-Level vs PDB-Level Parameter Scope
In a CDB environment, initialization parameters exist at two levels: the CDB root (CDB$ROOT, CON_ID = 1) and the individual PDBs. Some parameters can be set at the PDB level to override CDB defaults. Others are CDB-only and can’t be changed per PDB.
The confusion arises because ALTER SYSTEM SET parameter = value behaves differently depending on where you run it. If you’re connected to the CDB root (CON_ID = 1), you’re setting a CDB-wide default. If you’re connected to a PDB, you’re setting it for that PDB only.
```sql
-- Check which parameters are modifiable at PDB level
SELECT name, ispdb_modifiable
FROM v$parameter
WHERE ispdb_modifiable = 'TRUE'
ORDER BY name;

-- Set a parameter at PDB level (connected to that PDB)
ALTER SESSION SET CONTAINER = pdb_prod;
ALTER SYSTEM SET sga_target = 4G SCOPE=SPFILE;

-- Check parameter value in current PDB vs CDB root
SELECT con_id, name, value
FROM v$system_parameter
WHERE name = 'sga_target'
ORDER BY con_id;
```
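When you are connected to the root, the optional CONTAINER clause of ALTER SYSTEM makes the intended scope explicit instead of relying on where you happen to be connected. A small sketch (open_cursors is just an illustrative parameter):

```sql
-- From CDB$ROOT: set the value for every container
ALTER SYSTEM SET open_cursors = 500 CONTAINER = ALL;

-- From CDB$ROOT: set the value for the root only
ALTER SYSTEM SET open_cursors = 500 CONTAINER = CURRENT;
```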
The mistake I see: DBAs connect to the CDB root to “tune” a specific PDB’s parameters, accidentally changing CDB-wide settings and affecting all PDBs. Always verify your current container before running ALTER SYSTEM:
```sql
SELECT SYS_CONTEXT('USERENV','CON_NAME') AS current_container FROM dual;
```
Make this a habit. Muscle memory for regular Oracle doesn’t always transfer to CDB environments.
Mistake 3: Poor PDB Resource Management — No Resource Plans
In a CDB with multiple PDBs, all PDBs share the same SGA, CPU, and I/O resources by default. Without resource management, a batch job in one PDB can starve a production OLTP PDB on the same CDB.
Oracle Resource Manager in a CDB environment works at two levels: the CDB level (limiting each PDB’s share of resources) and the PDB level (managing sessions within a PDB).
```sql
-- Create a CDB-level resource plan
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'MY_CDB_PLAN',
    comment => 'Production CDB resource plan'
  );

  -- Give production PDB 60% of CPU shares
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'MY_CDB_PLAN',
    pluggable_database    => 'PDB_PROD',
    shares                => 6,
    utilization_limit     => 80,  -- Max 80% CPU
    parallel_server_limit => 100
  );

  -- Give dev PDB only 20% shares
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'MY_CDB_PLAN',
    pluggable_database => 'PDB_DEV',
    shares             => 2,
    utilization_limit  => 40
  );

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- Activate the plan
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'MY_CDB_PLAN';
```
Without this, you’re gambling that PDBs will be well-behaved neighbors. They won’t be.
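Once a plan is active, it is worth verifying it actually took effect and watching what each PDB consumes. A quick sketch, assuming the plan from the example above (v$rsrcpdbmetric exposes per-PDB Resource Manager metrics for the most recent interval):

```sql
-- Confirm the active plan
SELECT value FROM v$parameter WHERE name = 'resource_manager_plan';

-- Per-PDB resource consumption for the most recent metric interval
SELECT * FROM v$rsrcpdbmetric;
```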
Mistake 4: Hot Cloning Without Understanding the Implications
Hot cloning (cloning a PDB while it’s open read/write) is genuinely one of the most useful features in Oracle Multitenant. For quickly spinning up test environments from production data, it’s excellent. But it’s used carelessly more often than carefully.
```sql
-- Hot clone PDB_PROD to PDB_TEST
CREATE PLUGGABLE DATABASE pdb_test FROM pdb_prod;
```
What people miss: Oracle must retain undo for the source PDB for the entire duration of the clone operation. If pdb_prod is a busy production database and the clone takes 30 minutes (copying datafiles), that is 30 minutes of undo that must stay available for pdb_prod. If your undo tablespace doesn’t have enough space for this, you’ll get ORA-01555 errors in the source PDB during the clone.
For large PDBs in busy production environments, consider cloning at a lower-traffic time, or use the SNAPSHOT COPY option (which is near-instant but uses sparse files and requires compatible storage):
```sql
CREATE PLUGGABLE DATABASE pdb_test FROM pdb_prod SNAPSHOT COPY;
```
Also: hot cloning generates redo and undo on the source CDB. It’s not a zero-impact operation. Monitor your I/O and undo usage during a hot clone.
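v$undostat is a simple way to do that monitoring: it reports undo activity in 10-minute buckets, including SSOLDERRCNT, the count of ORA-01555 occurrences. A sketch of a check to run while the clone is in flight:

```sql
-- Recent undo activity: blocks consumed, longest-running query,
-- and ORA-01555 ("snapshot too old") occurrences per interval
SELECT begin_time, end_time, undoblks, maxquerylen, ssolderrcnt
FROM v$undostat
ORDER BY begin_time DESC
FETCH FIRST 6 ROWS ONLY;
```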
Mistake 5: Not Monitoring PDB-Level AWR
In CDB environments, AWR data is collected at both the CDB level and the PDB level. Most DBAs I work with check CDB-level AWR and miss PDB-level performance issues entirely.
```sql
-- Generate PDB-level AWR report
-- First, identify snap IDs within the PDB
SELECT snap_id, begin_interval_time
FROM dba_hist_snapshot
WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB_PROD')
ORDER BY snap_id DESC
FETCH FIRST 5 ROWS ONLY;

-- Generate report for specific PDB
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
-- When prompted, enter the con_id of your PDB
```
PDB-level AWR shows you top SQL, wait events, and resource usage isolated to that PDB. CDB-level AWR shows you the aggregate. For a CDB running 5 PDBs, the aggregate is often misleading — the performance problem is in one PDB, diluted by the others in the CDB view.
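As a quick live complement to AWR, the same per-PDB isolation is available from the shared pool, since v$sql carries CON_ID in a multitenant database. A sketch, with PDB_PROD as the example name:

```sql
-- Top SQL by elapsed time, restricted to one PDB
SELECT sql_id, executions, ROUND(elapsed_time / 1e6, 1) AS elapsed_seconds
FROM v$sql
WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB_PROD')
ORDER BY elapsed_time DESC
FETCH FIRST 10 ROWS ONLY;
```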
Mistake 6: Unplugging PDBs Without Proper Cleanup
When a PDB is unplugged (ALTER PLUGGABLE DATABASE pdb_name UNPLUG INTO '/path/manifest.xml'), the datafiles remain on disk. The PDB still exists in the CDB data dictionary in MOUNTED state. If you don’t explicitly DROP the PDB after unplugging, you accumulate orphaned metadata.
```sql
-- Correct unplug-and-drop sequence
ALTER PLUGGABLE DATABASE pdb_old CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb_old UNPLUG INTO '/backup/pdb_old_manifest.xml';
DROP PLUGGABLE DATABASE pdb_old KEEP DATAFILES;

-- KEEP DATAFILES preserves the files for plugging in elsewhere
-- Use INCLUDING DATAFILES to delete them instead
```
Also: before plugging an unplugged PDB into a different CDB, verify the target CDB’s database options compatibility. If the source PDB was created with options (Spatial, XML DB, etc.) not installed in the target CDB, the plug-in will fail or produce violations that must be resolved before the PDB can open.
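You can test this before attempting the plug-in: DBMS_PDB.CHECK_PLUG_COMPATIBILITY runs on the target CDB against the manifest, and PDB_PLUG_IN_VIOLATIONS lists anything that needs fixing. A sketch, reusing the manifest path from the example above:

```sql
-- Run on the TARGET CDB, pointing at the unplugged PDB's manifest
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
    pdb_descr_file => '/backup/pdb_old_manifest.xml',
    pdb_name       => 'PDB_OLD'
  );
  DBMS_OUTPUT.PUT_LINE(
    CASE WHEN compatible THEN 'Compatible' ELSE 'NOT compatible' END);
END;
/

-- Details on every warning or error found by the check
SELECT name, cause, type, message, status
FROM pdb_plug_in_violations
WHERE name = 'PDB_OLD';
```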