Friday, December 27, 2013

12c partial index to speed OLTP

In an 11g database, the more indexes you have on a table, the longer it takes to insert rows into that table. When a row is inserted, the rowid and column values must also be inserted into each index. Further, if several sessions are concurrently inserting data, there will be contention on the same table and index blocks. This issue is magnified on RAC, where the sessions contend for blocks across different RAC instances. Some of the solutions to relieve this contention are to distribute the inserted data across several blocks by using hash partitions or reverse-key indexes, or in some cases to use an exchange-partition strategy.
While all these methods are still relevant in 12c, it is now possible to eliminate index contention entirely on some indexes by not maintaining them during inserts.
This can be done using a new 12c feature called partial indexes. Note that indexes exist mainly to speed up queries and reports. Some indexes, such as unique indexes, indexes on foreign key columns and a few others, may still be required to speed up the look-ups the inserts themselves depend on. The rest of the indexes, which can wait to be built asynchronously while reports still run in real time, are candidates for PARTIAL indexing.

To create partial indexes, you set the INDEXING attribute on the table and then create or alter the index with the PARTIAL clause.

To configure PARTIAL GLOBAL and PARTIAL LOCAL indexes on a partitioned table, there are two places you can set the attribute:
1) INDEXING ON/OFF at the table level
2) INDEXING ON/OFF at the table partition level
and then create or alter the index with the PARTIAL clause, e.g.
alter index global_idx indexing partial;
create index local_idx on tabname(colname) local indexing PARTIAL;

Setting INDEXING OFF at the table level makes newly created partitions default to INDEXING OFF at the partition level; INDEXING ON at the table level makes newly created partitions default to INDEXING ON. The key phrase to note is "newly created". We will make use of this behavior to implement a performance strategy.
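A minimal sketch of the two levels (the table and partition names here are illustrative):

alter table sales indexing off;                                -- new partitions will default to INDEXING OFF
alter table sales modify partition sales_2013_q4 indexing on;  -- per-partition override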

The global partial index has only one index segment. It will be smaller than a full index because it does not contain entries for partitions where indexing is OFF.
The local partial index has index segments only on partitions where indexing is ON, and no index segments on partitions where it is OFF. In 11g it was possible to mark some index segments unusable; however, that was not easy to maintain. It did not survive rebuilds, and the index segments were created even when unusable.

The query optimizer will use the global/local partial index wherever it is available and do a partition scan where indexing is OFF to get the full picture. The execution plan contains a UNION-ALL that combines the data from the index range scan and the partition range scan. The cost of this operation is higher than a full index range scan but lower than a full table scan.
The explain plan below shows the UNION-ALL combining partial index data with a partition scan.
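An abbreviated, illustrative sketch of that plan shape (object names follow the example later in this post; an actual plan will differ in detail):

 SELECT STATEMENT
  VIEW                                     VW_TE_2
   UNION-ALL
    TABLE ACCESS BY INDEX ROWID BATCHED    POC_DATA               <- partitions with INDEXING ON
     INDEX RANGE SCAN                      LOCAL_PARTIAL_INDEX
    PARTITION RANGE ITERATOR
     TABLE ACCESS FULL                     POC_DATA               <- partitions with INDEXING OFF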

The trade-off in using a partial index depends on

  1. the cost and performance of the queries that use the partial index, versus
  2. the improved speed of the OLTP transactions.
This is really application dependent.
It is more favorable to use a partial index when

  • the OLTP transaction rate is critical;
  • the performance of queries using the partial index is not that different;
  • the resources used to run queries with the partial index do not impact the overall performance of the system.

When you ALTER the partition-level indexing from OFF to ON, the local index segment is created and the global index is rebuilt to add that partition's data.

With an INTERVAL PARTITION table, Oracle automatically creates the next partition when data inserted into the table is beyond the current high value of the range partition. These partitions pick up the INDEXING setting from the table level.
At the partition level, INDEXING OFF means that the indexes will not be created for that partition.

To reduce index contention for an application with high insert rates, one approach is to have no non-essential indexes on the latest partition, where the concurrent sessions are inserting data. Having no index on the latest partition speeds up the inserts. Reports that use these indexes will continue to use the PARTIAL indexes on older partitions and a Full Table Scan (FTS) on the latest partition.
The steps below describe how to do this.

 1) Alter the current partitioned table to have INDEXING OFF at the table level. This ensures that new partitions added will not automatically enable indexes (global and local).
 2) If the partitioning scheme is range partitioning, you can alter the table to use interval partitioning so that you need not manually add partitions. Although this is an optional step, it is better to have partitions created automatically.
3) Create or alter all non-essential indexes with the PARTIAL clause.
4) During inserts into the table, newly created partitions will not have the partial indexes built. With interval partitioning, the automatically created partition is the last partition, where the data is being inserted and where contention is usually highest. With fewer indexes on this partition, the inserts run a lot faster. At the same time, reports still see the latest inserted data in real time; they may just be slightly more expensive due to the full scan on the last partition.
5) Have a background job (or do it manually) turn INDEXING ON for all partitions where the heavy inserting is not happening, as sketched below. This automatically rebuilds the global partial indexes on the table and creates the local partial index segments on those partitions.
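A sketch of such a job's body (assuming the POC_DATA example below, and that the partition with the highest position is the one taking inserts):

begin
  for p in (select partition_name
            from   user_tab_partitions
            where  table_name = 'POC_DATA'
            and    indexing = 'OFF'
            and    partition_position < (select max(partition_position)
                                         from   user_tab_partitions
                                         where  table_name = 'POC_DATA'))
  loop
    -- enable indexing on every older partition that still has it off
    execute immediate 'alter table poc_data modify partition '
                      || p.partition_name || ' indexing on';
  end loop;
end;
/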

That is it. This is one of the strategies you can use to speed up OLTP inserts and reduce contention in 12c using PARTIAL indexing.
Note: although the example below uses interval range partitioning, partial indexing works with other partitioning schemes as well.
 At the time of writing this blog, there is a documented Bug 14558315 that does an FTS instead of the UNION-ALL between the index scan and the partition scan.

1) Create the partitioned table with INDEXING OFF at the table level; you can then set INDEXING ON at the partition level. (The interval-partitioning clause below is a representative sketch.)

drop table poc_data purge;
create table poc_data (
   start_date        DATE,
   store_id          NUMBER,
   inventory_id      NUMBER,
   qty_sold          NUMBER
)
indexing off                              -- table-level default for new partitions
partition by range (start_date)
interval (numtoyminterval(1,'MONTH'))
(
   PARTITION poc_data_p1 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')) INDEXING on,
   PARTITION poc_data_p2 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY')) INDEXING on
);

-- create global and local index with PARTIAL clause.

create index global_partial_index on poc_data(store_id) GLOBAL INDEXING PARTIAL;
create index local_partial_index on poc_data(inventory_id) LOCAL INDEXING PARTIAL;

2) Load some data to create newer interval partitions.

begin
  for i in 1..144 loop
    for j in 1..10000 loop
      insert into poc_data values (TO_DATE('1-8-2007', 'DD-MM-YYYY') + i, i, j, i + j);
    end loop;
  end loop;
  commit;
end;
/

-- check that the newly created partitions have INDEXING OFF, since it is set to OFF at the table level.
SQL> col table_name format a15
SQL> col partition_name format a20
SQL> select table_name, partition_name, indexing from user_tab_partitions where table_name='POC_DATA';

-- check that local index segments are created only for partitions where INDEXING is ON, and observe the LEAF_BLOCKS count for the PARTIAL global index.
SQL> col index_name format a30
SQL> select  index_name, partition_name,segment_created,status,leaf_blocks from
user_ind_partitions where index_name in (select index_name from user_indexes where table_name='POC_DATA')
union all
select Index_name, 'SINGLE GLOBAL INDEX' PARTITION_NAME,'YES' SEGMENT_CREATED,status, leaf_blocks from user_indexes where table_name='POC_DATA'
and partitioned='NO';

-- Manually enable the partial index on one of the new partitions.
alter table poc_data modify partition SYS_P1283 indexing on;
-- this will automatically rebuild global index and create a partial local index on that partition.
-- update statistics to refresh the LEAF_BLOCKS info
exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'POC_DATA', estimate_percent=>null, cascade=>true);

-- check the index segments and observe that LEAF_BLOCKS in the global non-partitioned index has increased.


By combining the 12c PARTIAL INDEXING feature with INTERVAL PARTITIONING, you can easily automate index maintenance and speed up the performance of OLTP applications.

Monday, December 23, 2013

In-memory option in oracle 12c

I cringe when people say Oracle is playing catch-up with the In-Memory option. Every option Oracle came up with was an integral part of the database. It takes time to build on your own, rather than buy a company, as SAP did, and call it your own.

I recall Teradata putting out an anti-Exadata competitive paper saying "Exadata is still an Oracle database". They totally missed the point that Oracle worked hard to make it transparent to end users. That is what people want.
Now with the In-Memory option, will SAP dare to say "the In-Memory option is still an Oracle database"?

Remember, TimesTen is a row-store in-memory database and it will stay that way. The In-Memory option is a columnar-store format and was created from the ground up. No code change is required for any existing application to run with the Oracle In-Memory option.
The underlying code change to make this happen cannot be an afterthought following the HANA announcement. It took foresight, hard work and a great development team that can code.

Thursday, October 31, 2013

SAS Solutions OnDemand on Oracle Exadata

If your company is deciding to run SAS products and wondering whether Oracle Exadata is a good fit for the database, consider this: SAS runs its cloud solutions on Oracle Exadata. SAS Solutions OnDemand talked about it at Oracle OpenWorld 2013. They were happy with the ROI, performance, scalability and simplicity of provisioning.
Interesting information from the presentation slides is that SAS-specific SQL is able to make use of Exadata features to improve performance.
Some of the examples listed were:
  • Up to 45x performance increase with Exadata storage indexes.
  • Dropped some of the indexes to save space and make use of smart scans and storage indexes.
  • Daily ETL runs < 10 hours vs. > 20 hours
  • Interface in use by 33,000 users now returns all queries in less than 30 seconds vs. many selections timing out at 3 minutes.
For more info, check out the OpenWorld presentation.

Wednesday, October 23, 2013

12c Fraud detection using pattern matching. (MATCH_RECOGNIZE)

I just wanted to show some real scenarios where 12c pattern matching could be used.
This feature can be handy in detecting one of several credit card fraud scenarios.

One common use case: a credit card is stolen, and the thief first purchases a small item such as candy or gas to make sure the card works, then purchases a large-ticket item as quickly as possible, all within a 15-minute span, before suspicion arises.

To run this with the 12c MATCH_RECOGNIZE clause, the query is as shown below.

select * from transaction
  match_recognize (
    partition by seller_name order by tx_time
    measures
         to_char(FIRST(low.tx_time),'DD-MON-YY HH24:MI') AS first_low_time,
         FIRST(low.tx_amount) first_tx_value,
       --  last(low.tx_amount) last_low_amount,
         SPIKE.tx_amount big_tx,
         to_char(spike.tx_time,'DD-MON-YY HH24:MI') AS big_tx_time,
         match_number() as match_num
    AFTER MATCH skip to next row
    PATTERN (LOW+ SPIKE)   -- one or more small buys, then one big buy
    DEFINE
         LOW as low.tx_amount < 50,
         SPIKE as spike.tx_amount > 500 and (spike.tx_time - low.tx_time)*1440 < 15
  ) ;

SELLER_NAME FIRST_LOW_TIME   FIRST_TX_VALUE BIG_TX BIG_TX_TIME      MATCH_NUM
----------- ---------------- -------------- ------ ---------------- ---------
aa          22-OCT-13 19:25              32    503 22-OCT-13 19:28          1
aa          22-OCT-13 19:26              23    503 22-OCT-13 19:28          2
aa          22-OCT-13 19:27              12    503 22-OCT-13 19:28          3

This would be a challenge to build using 11g-style analytic functions.
By simply defining patterns and then searching for one or more occurrences of them, we can build a complex query. If we need an additional condition on the query, we just define the condition and pattern-match on it, as in the sketch below.
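For example, a sketch extending the DEFINE list above (the zip comparison is a hypothetical extra condition) to flag the spike only when it happens in a different zip code from the small purchases:

         SPIKE as spike.tx_amount > 500
              and (spike.tx_time - low.tx_time)*1440 < 15
              and spike.zip != low.zip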

This pattern matching within the database is sometimes also described as a MapReduce-style technique.

Below is the sample data created to run this query.

-- drop table transaction purge;
create table transaction ( tx_amount number, zip number, tx_time date, seller_name varchar2(32)) ;
insert into transaction values (2, 75028,sysdate +1/1440, 'aa');
insert into transaction values (12, 75028,sysdate +2/1440, 'aa');
insert into transaction values (2, 75028,sysdate +3/1440, 'aa');
insert into transaction values (25, 75028,sysdate +4/1440, 'aa');
insert into transaction values (55, 75028,sysdate +4/1440, 'aa');
insert into transaction values (5, 75028,sysdate +5/1440, 'aa');
insert into transaction values (10, 75028,sysdate +6/1440, 'aa');
insert into transaction values (2, 75028,sysdate +7/1440, 'aa');
insert into transaction values (432, 75028,sysdate +8/1440, 'aa');
insert into transaction values (2, 75028,sysdate +9/1440, 'aa');
insert into transaction values (43, 75028,sysdate +10/1440, 'aa');
insert into transaction values (2, 75028,sysdate +11/1440, 'aa');
insert into transaction values (35, 75028,sysdate +12/1440, 'aa');
insert into transaction values (35, 75028,sysdate +13/1440, 'aa');
insert into transaction values (35, 75028,sysdate +15/1440, 'aa');
insert into transaction values (88, 75028,sysdate +16/1440, 'aa');
insert into transaction values (52, 75028,sysdate +17/1440, 'aa');
insert into transaction values (32, 75028,sysdate +18/1440, 'aa');
insert into transaction values (23, 75028,sysdate +19/1440, 'aa');
insert into transaction values (12, 75028,sysdate +20/1440, 'aa');
insert into transaction values (503, 75028,sysdate +21/1440, 'aa');
insert into transaction values (66, 75028,sysdate +21/1440, 'aa');
insert into transaction values (45, 75028,sysdate +22/1440, 'aa');
insert into transaction values (45, 75028,sysdate +23/1440, 'aa');
insert into transaction values (23, 75028,sysdate +21/1440, 'aa');
insert into transaction values (66, 75028,sysdate +21/1440, 'aa');
insert into transaction values (509, 75028,sysdate +40/1440, 'aa');
insert into transaction values (45, 75028,sysdate +23/1440, 'aa');


Friday, October 11, 2013

12c CDB PDB views. (Metadata link and Object links )

One of the key features of the 12c container architecture is isolating the data dictionary tables and views specific to the application from the data dictionary tables used to manage the database (now called the container).

However, Oracle has internal mechanisms for the containers to talk to each other and populate the views required for management and monitoring.
  The views can work both ways:
1) From CDB to PDBs: AWR runs at the CDB level, and internal object links populate each PDB's views with information specific to that PDB.
2) From PDBs to CDB: dictionary views like DBA_USERS, DBA_TABLES and DBA_SOURCE, on the other hand, are local to each PDB; their superset views with the CDB_ prefix add up the data from each of these PDBs through a mechanism called a metadata link.
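For example, each CDB_ view carries a CON_ID column identifying which container a row came from, so a quick sanity check from the root could be:

SQL> select con_id, count(*) from cdb_users group by con_id order by con_id;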

The following graphic shows the relationship among the different categories of dictionary views: 

One important point to note is that these CDB_ views are populated ONLY when the PDBs ARE OPEN.

Below is a test to prove this.
-- from CDB, close all PDBs and select count(*) from CDB_USERS.

SQL> alter pluggable database all close immediate;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
         5 PDB_2                          MOUNTED
         6 PDB_1                          MOUNTED
         7 PDB_4                          MOUNTED
         8 PDB_3                          MOUNTED
         9 PDB_5                          MOUNTED
        10 PDB_1_DUP                      MOUNTED
SQL> select count(*) from cdb_users;


-- from CDB, OPEN all PDBs and select count(*) from CDB_USERS.

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB_2                          READ WRITE NO
         6 PDB_1                          READ WRITE NO
         7 PDB_4                          READ WRITE NO
         8 PDB_3                          READ WRITE NO
         9 PDB_5                          READ WRITE NO
        10 PDB_1_DUP                      READ WRITE NO
SQL> select count(*) from cdb_users;



The count of users increased when we opened all the PDBs, proving that CDB_ views are populated through metadata links only when the PDBs are OPEN.

To see tests on object-link views (links from CDB to PDBs), click here.

To list all objects that are local to a PDB and have a metadata link to the CDB, you can run the query below.

select distinct object_name from dba_objects where sharing ='METADATA LINK';

When you clone from the seed PDB, I observed 84,206 objects that have a metadata link. This might change in future versions of 12c.
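A similar query (same view, different SHARING value; a small variation on the one above) lists the object-link objects:

select distinct object_name from dba_objects where sharing ='OBJECT LINK';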

Thursday, October 10, 2013

12c match_recognize feature for pattern matching

I thought I would add a practical example to help DBAs understand the power of 12c pattern matching.

In the OEM Database Express view below, you will see a spike in load between 7:46 PM and 7:51 PM.

The challenge is to find, from v$active_session_history, the starting point when the load spiked and then sustained that load for more than 5 minutes before it dropped back down.
This pattern will help us capture the start and end times of a sustained load on the DB, perhaps caused by batch jobs. The same query written with 11g analytic window functions would not be easy to create and would take many more lines of code.

To capture this using the 12c new feature MATCH_RECOGNIZE, I defined two points:
 1) A starting point, with the rise defined as UP in the example. This is when the sum_time_waited values are continuously higher than at the starting point. Each needs to be at least 200% above the starting value (to be a reasonable spike), and the 200% spike needs to be sustained for at least 5 minutes.
2) An end point, defined as END_PT here. This is when the load tapers off and eventually drops below the starting value (or to zero).

Below is the 12c Pattern Matching query that can capture this.

SELECT TO_CHAR(first_x,'MM/DD HH24:MI') first_time,
       TO_CHAR(last_z,'MM/DD HH24:MI') end_time,
       strt_value, end_value, avg_t
FROM  (SELECT TRUNC(sample_time,'MI') time_t,
              SUM(time_waited) sum_time_waited
       FROM v$active_session_history
       WHERE wait_class != 'Idle'
       GROUP BY TRUNC(sample_time,'MI'))
MATCH_RECOGNIZE (
       ORDER BY time_t
       MEASURES
                 FIRST(strt.time_t) AS first_x,
                 FIRST(strt.sum_time_waited) strt_value,
                 LAST(END_PT.sum_time_waited) end_value,
                 LAST(END_PT.time_t) AS last_z,
                 AVG(up.sum_time_waited) avg_t
       ONE ROW PER MATCH
       AFTER MATCH skip TO END_PT
       PATTERN (STRT UP{5,} END_PT)   -- {5,} means 5 or more times; STRT matches any row
       DEFINE
                 UP AS up.sum_time_waited > strt.sum_time_waited
                       AND strt.sum_time_waited > 0
                       AND (up.sum_time_waited - strt.sum_time_waited)*100/strt.sum_time_waited > 200,
                 END_PT AS END_PT.sum_time_waited < strt.sum_time_waited ) ;

To see the raw data passed to this pattern-matching query, you can run the following.

SELECT TO_CHAR(TRUNC(sample_time,'MI'),'MM/DD HH24:MI') time_t,
       SUM(time_waited) sum_time_waited
FROM v$active_session_history
WHERE wait_class != 'Idle'
GROUP BY TRUNC(sample_time,'MI')
ORDER BY TRUNC(sample_time,'MI');

In my test system, I ran the following load generator to create some load.

create or replace procedure load_test (times in number) as
  ct number;
begin
  for i in 1..times loop
    select count(*) into ct from abc;   -- abc is just a table to scan repeatedly
  end loop;
end load_test;
/
exec load_test(100000);

Update: I added another blog about fraud detection using pattern matching.

Friday, October 4, 2013

12c CLONE PDBS online

In the Oracle 12c release, we can now clone PDBs within the same container. While this is really easy and good enough for most databases, it requires closing the PDB and opening it in READ ONLY mode. So the question is: what do we do if we need to clone a PDB but do not want to shut down the PDB?

The solution is simple and includes the following steps.

1) Duplicate the PDB in the container to an auxiliary CDB.
2) Unplug the PDB from the auxiliary CDB.
3) Plug the PDB in with the CLONE ... MOVE or CLONE ... COPY clause to move/copy the PDB and change its name at the same time.
4) You can now destroy the auxiliary CDB (optional).

Detailed steps below.

1) Duplicate the PDB in the container to an auxiliary CDB.

   To make a duplicate copy of a CDB with specific PDBs, the steps are similar to a regular RMAN DUPLICATE, with the addition of a PDB name list. To duplicate the DB, the database NEED NOT be in archivelog mode.

 The steps include:
  a) Edit listener.ora to add the auxiliary CDB to SID_LIST_LISTENER.

       This is a mandatory step even if you opt to use port 1521 and depend on dynamic registration,
       because the RMAN command will first shut down and restart the auxiliary instance.
       Note: the source container (CDB1 in my case) is not in SID_LIST_LISTENER because CDB1 is automatically registered, and RMAN will not restart it.

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dup)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = dup)))

 b) Add entries in tnsnames.ora for the source and auxiliary CDBs.

     This again is a mandatory step, as RMAN connects to the auxiliary DB using the connect string after it shuts the DB down internally.

dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dup)(UR=A)))

c) Create a sample pfile for the DUP database.
    Connect to the source database CDB1 and create an editable pfile:
    SQL> conn / as sysdba
    SQL> create pfile='/tmp/initdup.ora' from spfile;

   Edit the file, replace all occurrences of CDB1 with DUP, and also change the controlfile name to avoid overwrite errors.

*.dispatchers='(PROTOCOL=TCP) (SERVICE=dup)'

 Then move the edited file to the $ORACLE_HOME/dbs directory:

 mv /tmp/initdup.ora $ORACLE_HOME/dbs/

d) Add a password file for the auxiliary database:
     cd $ORACLE_HOME/dbs

     export ORACLE_SID=dup
     orapwd file=$ORACLE_HOME/dbs/orapwdup password=oracle entries=10

 e) Connect to RMAN and run the DUPLICATE command.

  • To duplicate the PDB pdb1 from cdb1 to the CDB cdb2, name just that PDB in the PLUGGABLE DATABASE clause.
  • To duplicate the PDBs pdb1, pdb3, and pdb4 from database cdb1 to cdb2, list all three PDBs.

 $ rman target sys@cdb1 AUXILIARY sys@dup

Recovery Manager: Release - Production on Tue Oct 1 16:57:29 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

target database Password:
connected to target database: CDB1 (DBID=792037445)
auxiliary database Password:
connected to auxiliary database: DUP (not mounted)


SET NEWNAME FOR DATABASE TO '/u01/app/oracle/oradata/%U';
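Putting it together, a sketch of the duplicate run (assuming active duplication and that only pdb_1 is wanted; the exact options will vary with your environment):

RMAN> run {
        SET NEWNAME FOR DATABASE TO '/u01/app/oracle/oradata/%U';
        DUPLICATE DATABASE TO dup PLUGGABLE DATABASE pdb_1 FROM ACTIVE DATABASE;
      }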


2) Unplug the PDB.

  We can now unplug the PDB from the duplicate database:

  alter pluggable database pdb_1 unplug into '/u01/app/oracle/oradata/pdb_1.xml';

3) Plug the PDB in with the CLONE clause (with MOVE or COPY) to move or copy the PDB and change its name at the same time.
Now this PDB can be plugged back into the original CDB.
This way you can clone PDBs without closing any PDBs; the original PDB pdb_1 stays online all the while. The clone of pdb_1 is called pdb_1_dup.
SQL> create pluggable database pdb_1_dup
        AS CLONE using '/u01/app/oracle/oradata/pdb_1.xml'
SQL> /

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
         5 PDB_2                          MOUNTED
         6 PDB_1                          READ WRITE NO
         7 PDB_4                          MOUNTED
         8 PDB_3                          MOUNTED
         9 PDB_5                          MOUNTED
        10 PDB_1_DUP                      MOUNTED

Tuesday, October 1, 2013

12c connect to multiple PDBs with the same name in different CDBs

When you create a pluggable database (PDB), Oracle automatically creates and starts a service for it inside the container database (CDB). Let us assume you have a PDB_1 database in CDB1. I created another container called DUP and created a PDB called PDB_1 in it. In my case, I just used RMAN to duplicate CDB1 to DUP.

Now, if I try to connect to the PDB_1 service using easy connect or tnsnames.ora, which CDB's PDB_1 will I be connected to?

The easy connect syntax is
sqlplus scott/tiger@//localhost:1521/pdb_1

The chances are that I could be connected to either the CDB1-->PDB_1 database or the DUP-->PDB_1 database.
To prove the point, let us look at the listener status.

[oracle@unknown080027afa328 dup]$ lsnrctl stat

LSNRCTL for Linux: Version - Production on 01-OCT-2013 17:03:22

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                01-OCT-2013 15:58:19
Uptime                    0 days 1 hr. 5 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/unknown080027afa328/listener/alert/log.xml
Services Summary...

Service "cdb1" has 2 instance(s).
  Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
  Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "cdb1XDB" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "dup" has 2 instance(s).
  Instance "dup", status UNKNOWN, has 1 handler(s) for this service...
  Instance "dup", status READY, has 2 handler(s) for this service...
Service "pdb_1" has 2 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
  Instance "dup", status READY, has 1 handler(s) for this service...

 The service has the same name as the PDB, PDB_1. But observe that it has two instances.
This is where I remembered that easy connect and tnsnames.ora have always had an option to connect to a specific instance_name. I had used that functionality with RAC services and DB links to connect to one node, even when a service spans multiple nodes for load balancing.
Using that existing functionality, I can now pick which PDB to connect to.

Examples below.

sqlplus scott/tiger@//localhost:1521/pdb_1/dup
and
sqlplus scott/tiger@//localhost:1521/pdb_1/cdb1

or, if you use tnsnames.ora, it would be

PDB_1_dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb_1)(INSTANCE_NAME = dup)))

Saturday, September 28, 2013

12c data isolation for redologs and archivelogs with multiple PDBs

With the new 12c functionality, you can have multiple pluggable databases (PDBs) under one container database (CDB).
The real advantages of this, compared to multiple standalone databases on a server, are
1) better resource optimization and
2) better manageability and monitoring.

However, one question that comes up is this: since the PDBs all share common online redo log files and archive log files, the archive logs could contain data from other PDBs, and that is a potential SLA conflict.
PDB data is isolated at the database level, and backup and recovery can be managed individually per PDB. But some financial customers' SLAs require that no file mixes one company's data with another's.
This may not be an issue for an in-house server hosting multiple databases for a single client, but for a hosting service that may share or hand archive logs to customers, it is certainly a cause for concern.

Solution: There is a simple solution for this: service/PDB isolation in a RAC environment.
PDBs can run in a RAC environment, and each PDB starts with a default service that is available on all nodes. You can simply restrict a PDB's service to one or more instances and ensure no other PDB's services run on those nodes. This way, the online redo logs and archive log files on a node will contain data only for the PDBs allowed to run there.

Services functionality has been in the database for quite some time. We can now put it to use for data isolation, in addition to the load balancing and monitoring functionality it provides.
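As a sketch (the service, PDB and instance names here are hypothetical), pinning an ERP PDB's service to a single instance with srvctl:

$ srvctl add service -db cdb1 -service erp_svc -pdb erp_pdb -preferred cdb11
$ srvctl start service -db cdb1 -service erp_svc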

In the above diagram, you can be sure that the online redo logs of the ERP PDB exist on only one node.

Friday, September 20, 2013

12c TEMP_UNDO_ENABLED limitations

I had blogged about the 12c new feature of "no redo for undo for global temporary tables (GTTs)". While preparing the demo, I created a simple script to first
set AUTOTRACE TRACEONLY STATISTICS to show the redo generation during inserts and updates into a GTT,
and then set TEMP_UNDO_ENABLED=FALSE to demo the default redo generation numbers.

Next, when I set TEMP_UNDO_ENABLED=TRUE and reran the insert and update, I did not see much difference in the amount of redo generated!
The reduction in redo was supposed to be more than 90% for large inserts and updates.

After various trials, I found that the manual had hinted at a limitation, only not very clearly. Below is the quote from the doc.

"When a session uses temporary objects for the first time, the current value of the TEMP_UNDO_ENABLED initialization parameter is set for the rest of the session. Therefore, if temporary undo is enabled for a session and the session uses temporary objects, then temporary undo cannot be disabled for the session. Similarly, if temporary undo is disabled for a session and the session uses temporary objects, then temporary undo cannot be enabled for the session."

What this means is that once a session has started an undo stream in the UNDO tablespace, it cannot switch to logging undo in the temporary tablespace mid-session. You need to set TEMP_UNDO_ENABLED before accessing any GTTs.
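A minimal sketch of the right order (gtt_demo is a hypothetical global temporary table):

SQL> connect scott/tiger
SQL> alter session set temp_undo_enabled = true;   -- set BEFORE touching any GTT in this session
SQL> insert into gtt_demo select level from dual connect by level <= 100000;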

So, in the demo, I simply reconnected (SQL> connect scott/passwd) to start a fresh session and set TEMP_UNDO_ENABLED=TRUE, which fixed the problem.

Another test I conducted was to see whether I could insert into GTTs after opening the PDB in read-only mode. I was not able to insert data; I got the following error.

[oracle@localhost ~]$ sqlplus system/oracle@pdb2

SQL*Plus: Release Production on Sun Sep 29 04:00:50 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sun Sep 29 2013 03:36:08 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
SQL> alter system set TEMP_UNDO_ENABLED=TRUE ;

System altered.

SQL> insert into vijay.ro_test values (4324) ;
insert into vijay.ro_test values (4324)
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access

Commit complete.