Oracle Multitenant architecture in 26ai

The Oracle Multitenant architecture promised consolidated databases with workload isolation. The reality, in environments I’ve worked with, is that the isolation part often gets implemented incompletely. Teams spend time on the CDB/PDB structure itself and not enough time on the resource management and monitoring layers that actually enforce isolation. The result: a noisy neighbor problem where a runaway PDB degrades every other workload on the CDB. In 26ai, the tools to prevent this are better than ever. But you have to configure them.


The Fundamental Isolation Problem in Shared CDBs

When you create a CDB with multiple PDBs and no resource management plan, every PDB competes equally for:

  • CPU (all OCPUs allocated to the CDB are shared)
  • SGA (shared pool, buffer cache, large pool — all shared)
  • I/O (storage bandwidth shared between all PDBs)
  • Undo (shared undo tablespace)

A batch PDB running a full export suddenly consuming 95% of available CPU is not a hypothetical — it’s a Tuesday morning in most shops running consolidated Multitenant. The OLTP PDBs on the same CDB slow down. Users complain. The DBA gets paged.

The solution is Oracle Database Resource Manager at the CDB level. Not optional. Not something to add later. It should be the first thing configured after your CDB is built.
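Before building anything, confirm whether a plan is active at all; a CDB with RESOURCE_MANAGER_PLAN unset enforces none of this. A quick check from CDB$ROOT:

-- Is any resource plan active? (empty = no enforcement)
SHOW PARAMETER resource_manager_plan
-- The currently loaded plan per container
SELECT name, is_top_plan FROM v$rsrc_plan;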


CDB-Level Resource Plans — The Right Way to Do It

-- Create a comprehensive CDB resource plan
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Create the CDB-level plan
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'PRODUCTION_CDB_PLAN',
    comment => 'Production CDB resource management plan'
  );
  -- OLTP PDB: highest priority, 6 shares, max 80% CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'PRODUCTION_CDB_PLAN',
    pluggable_database    => 'PDB_OLTP',
    shares                => 6,
    utilization_limit     => 80,
    parallel_server_limit => 80
  );
  -- Reporting PDB: medium priority, 3 shares, max 60% CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'PRODUCTION_CDB_PLAN',
    pluggable_database    => 'PDB_REPORTS',
    shares                => 3,
    utilization_limit     => 60,
    parallel_server_limit => 50
  );
  -- Dev PDB: lowest priority, 1 share, max 30% CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'PRODUCTION_CDB_PLAN',
    pluggable_database    => 'PDB_DEV',
    shares                => 1,
    utilization_limit     => 30,
    parallel_server_limit => 20
  );
  -- ORA$DEFAULT_PDB_DIRECTIVE: applies to any PDB not explicitly listed
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_DEFAULT_DIRECTIVE(
    plan                      => 'PRODUCTION_CDB_PLAN',
    new_shares                => 1,
    new_utilization_limit     => 20,
    new_parallel_server_limit => 10
  );
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Activate the plan (run from CDB$ROOT)
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'PRODUCTION_CDB_PLAN' SCOPE=BOTH;

The shares parameter controls relative CPU allocation when the system is under contention. When the system has spare CPU, all PDBs can use whatever they need up to their utilization_limit. Shares only matter when CPU is the bottleneck — which is exactly when you need protection.
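To sanity-check what the share counts mean in practice, you can compute each directive's guaranteed CPU fraction straight from the data dictionary (a sketch using DBA_CDB_RSRC_PLAN_DIRECTIVES). Note that the ORA$DEFAULT_PDB_DIRECTIVE row counts toward the total, so with 6/3/1 shares plus 1 for the default directive the named PDBs get roughly 55%/27%/9% under full contention, not a clean 60/30/10:

-- Guaranteed CPU fraction under contention = shares / SUM(shares)
SELECT pluggable_database,
       shares,
       utilization_limit,
       ROUND(RATIO_TO_REPORT(shares) OVER () * 100, 1) AS guaranteed_cpu_pct
FROM   dba_cdb_rsrc_plan_directives
WHERE  plan = 'PRODUCTION_CDB_PLAN'
ORDER  BY shares DESC;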


PDB Memory Isolation in 26ai — Per-PDB SGA Limits

By default, the SGA is shared across all PDBs with no per-PDB memory limits, so one PDB’s poorly written queries filling the shared pool affect every other PDB. In 26ai, you can set memory limits per PDB:

-- Connect to a specific PDB and set memory limits
ALTER SESSION SET CONTAINER = pdb_dev;
-- Limit SGA for this PDB
ALTER SYSTEM SET SGA_MIN_SIZE = 512M SCOPE=BOTH;
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE=BOTH;
-- Limit PGA for this PDB
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 1G SCOPE=BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 512M SCOPE=BOTH;
-- Verify per-PDB memory settings
SELECT name, value, con_id
FROM v$system_parameter
WHERE name IN ('sga_target', 'pga_aggregate_limit', 'sga_min_size')
ORDER BY con_id, name;

SGA_MIN_SIZE guarantees the PDB always has at least this much SGA available — protection against memory starvation from other PDBs. SGA_TARGET caps the maximum this PDB can consume. The range between min and max is where Oracle dynamically adjusts based on workload.

For production PDBs, set SGA_MIN_SIZE to ensure they always have baseline memory. For dev and test PDBs, set SGA_TARGET low to prevent them from consuming production-grade memory.


Monitoring PDB Resource Usage — The Queries You Run When Things Go Wrong

When a CDB is experiencing performance problems and you need to identify which PDB is the culprit:

-- CPU consumed by PDB (cumulative per current session since logon,
-- in centiseconds; run twice and diff the results for a rate)
SELECT p.name AS pdb_name,
       SUM(s.value) AS cpu_usage
FROM   v$pdbs p
JOIN   v$sesstat s ON s.con_id = p.con_id
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'CPU used by this session'
GROUP  BY p.name
ORDER  BY cpu_usage DESC;
-- Active sessions by PDB
-- (v$session.state is 'WAITING' for waiting sessions; any other
--  state value means the session is on CPU or just left it)
SELECT p.name AS pdb_name,
       COUNT(*) AS active_sessions,
       COUNT(CASE WHEN s.state = 'WAITING' THEN 1 END) AS waiting,
       COUNT(CASE WHEN s.state <> 'WAITING' THEN 1 END) AS on_cpu
FROM   v$session s
JOIN   v$pdbs p ON s.con_id = p.con_id
WHERE  s.status = 'ACTIVE'
AND    s.type = 'USER'
GROUP  BY p.name
ORDER  BY active_sessions DESC;
-- I/O by PDB (cumulative since instance startup; times in centiseconds)
SELECT p.name AS pdb_name,
       SUM(f.phyrds) AS physical_reads,
       SUM(f.phywrts) AS physical_writes,
       SUM(f.readtim) AS read_time_cs,
       SUM(f.writetim) AS write_time_cs
FROM   v$filestat f
JOIN   v$datafile d ON f.file# = d.file#
JOIN   v$pdbs p ON d.con_id = p.con_id
GROUP  BY p.name
ORDER  BY physical_reads DESC;
-- SGA memory usage by PDB
SELECT p.name AS pdb_name,
       ROUND(SUM(s.bytes)/1024/1024, 1) AS sga_mb
FROM   v$sgastat s
JOIN   v$pdbs p ON s.con_id = p.con_id
GROUP  BY p.name
ORDER  BY sga_mb DESC;

These four queries together tell you which PDB is consuming disproportionate CPU, sessions, I/O, and memory. Run them when the CDB is slow before touching anything.
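Those views tell you who has used resources; when a CDB resource plan is active, V$RSRCPDBMETRIC (one row per PDB covering the most recent one-minute window) tells you who Resource Manager is throttling right now, since sustained cpu_wait_time is the signature of a capped PDB:

-- Per-PDB Resource Manager metrics for the last minute
-- (cpu_wait_time > 0 means RM held that PDB back during the window)
SELECT p.name AS pdb_name,
       r.cpu_consumed_time,
       r.cpu_wait_time,
       r.avg_running_sessions,
       r.avg_waiting_sessions
FROM   v$rsrcpdbmetric r
JOIN   v$pdbs p ON p.con_id = r.con_id
ORDER  BY r.cpu_wait_time DESC;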


PDB Lockdown Profiles — Restricting What PDBs Can Do

In 26ai, Lockdown Profiles give you fine-grained control over what operations are permitted inside each PDB. This is critical for shared CDBs where you don’t fully trust all PDB administrators:

-- Create a lockdown profile for dev PDBs
CREATE LOCKDOWN PROFILE dev_pdb_profile;
-- Prevent dev PDBs from creating database links (avoid data exfiltration)
ALTER LOCKDOWN PROFILE dev_pdb_profile
  DISABLE STATEMENT = ('CREATE DATABASE LINK');
-- Cap parallel execution settings in dev PDBs: the documented
-- MINVALUE/MAXVALUE clause disables ALTER SYSTEM SET for values in
-- the given range, so values of 5 and above are blocked here
ALTER LOCKDOWN PROFILE dev_pdb_profile
  DISABLE STATEMENT = ('ALTER SYSTEM')
  CLAUSE = ('SET')
  OPTION = ('PARALLEL_MAX_SERVERS')
  MINVALUE = '5'
  MAXVALUE = '65535';
-- Prevent dev PDBs from reaching the OS file system (UTL_FILE,
-- external tables' preprocessor, etc.) via the OS_ACCESS feature bundle
ALTER LOCKDOWN PROFILE dev_pdb_profile
  DISABLE FEATURE = ('OS_ACCESS');
-- Restrict outbound network access from dev PDBs
ALTER LOCKDOWN PROFILE dev_pdb_profile
  DISABLE FEATURE = ('UTL_HTTP', 'UTL_SMTP', 'UTL_TCP');
-- Apply the profile to a PDB
ALTER SESSION SET CONTAINER = pdb_dev;
ALTER SYSTEM SET PDB_LOCKDOWN = 'DEV_PDB_PROFILE' SCOPE=BOTH;
-- Verify lockdown profile
SELECT profile_name, rule_type, rule, con_id
FROM cdb_lockdown_rules
WHERE profile_name = 'DEV_PDB_PROFILE'
ORDER BY rule_type, rule;

Lockdown profiles are one of the most underused Multitenant features. In environments where different teams manage different PDBs on a shared CDB, profiles enforce the boundaries that organizational trust cannot.


The 3 AM Scenario — And How to Prevent It

The page arrives: “production application is slow, DBA please check.” You connect and immediately run your PDB resource queries. You find PDB_REPORTS is consuming 85% of CDB CPU — a scheduled report that normally runs in 20 minutes has been running for 4 hours because a missing index caused a query regression after last night’s statistics refresh.

In a CDB with no resource management plan, this kills PDB_OLTP performance. With the resource plan configured above, PDB_REPORTS is capped at 60% utilization. PDB_OLTP is protected. The report is slow, but the OLTP application is not degraded.

-- Emergency: temporarily reduce a PDB's resource allocation without
-- changing the plan permanently
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'PRODUCTION_CDB_PLAN',
    pluggable_database    => 'PDB_REPORTS',
    new_utilization_limit => 20  -- Temporarily throttle to 20%
  );
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- After fixing the report/index, restore normal allocation:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'PRODUCTION_CDB_PLAN',
    pluggable_database    => 'PDB_REPORTS',
    new_utilization_limit => 60  -- Restore normal limit
  );
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Dynamic plan directive updates take effect immediately without restarting anything. This is your operational lever during incidents — throttle the misbehaving PDB while you fix the root cause, then restore normal allocation. No downtime, no impact on other PDBs.
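After an emergency change like this, verify that the stored directive actually carries the new limit before closing the incident:

-- Confirm the directive now shows the throttled (or restored) limit
SELECT pluggable_database, shares, utilization_limit
FROM   dba_cdb_rsrc_plan_directives
WHERE  plan = 'PRODUCTION_CDB_PLAN'
AND    pluggable_database = 'PDB_REPORTS';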


PDB Cloning for Rapid Troubleshooting

When a PDB has a suspected data or configuration problem, cloning it for investigation without taking down production is one of Multitenant’s most practical features:

-- Hot clone PDB_PROD to PDB_PROD_CLONE for investigation
-- (PDB_PROD stays open and fully operational; hot cloning an open PDB
--  requires local undo mode and ARCHIVELOG, otherwise open the source
--  read-only first)
CREATE PLUGGABLE DATABASE pdb_prod_clone
  FROM pdb_prod
  FILE_NAME_CONVERT = ('/oradata/pdb_prod/', '/oradata/pdb_prod_clone/');
-- Open the clone
ALTER PLUGGABLE DATABASE pdb_prod_clone OPEN;
-- Apply lockdown profile immediately to prevent the clone from
-- accidentally connecting to external systems
ALTER SESSION SET CONTAINER = pdb_prod_clone;
ALTER SYSTEM SET PDB_LOCKDOWN = 'DEV_PDB_PROFILE' SCOPE=BOTH;
-- Now investigate in the clone without touching production.
-- When done, switch back to the root, close the clone, and drop it
-- (a PDB must be closed before it can be dropped)
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER PLUGGABLE DATABASE pdb_prod_clone CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE pdb_prod_clone INCLUDING DATAFILES;

This workflow — clone, investigate, drop — keeps investigation completely separate from production. The setup is a few minutes of typing; the elapsed time is dominated by copying the source datafiles, so it scales with PDB size. I use it routinely for reproducing reported issues in a safe environment before touching the production PDB.
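When the full datafile copy is too slow for a large PDB, a snapshot copy clone shares the source's blocks via storage-level copy-on-write and completes in seconds regardless of size. A sketch, assuming the underlying storage supports sparse copies (e.g. ACFS or ZFS, or CLONEDB=TRUE on a regular filesystem):

-- Snapshot copy clone: near-instant, copy-on-write at the storage layer
CREATE PLUGGABLE DATABASE pdb_prod_snap
  FROM pdb_prod
  SNAPSHOT COPY
  FILE_NAME_CONVERT = ('/oradata/pdb_prod/', '/oradata/pdb_prod_snap/');
ALTER PLUGGABLE DATABASE pdb_prod_snap OPEN;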


