Saturday, September 28, 2013

12c data isolation for redologs and archivelogs with multiple PDBs


With 12c's new multitenant functionality, you can have multiple pluggable databases (PDBs) under one container database (CDB).
Compared to running multiple standalone databases on a server, the real advantages are
1) better resource optimization, and
2) better manageability and monitoring.

However, one of the questions that comes up is: since all PDBs share common online redo log files and archive log files, the archive logs could contain data from other PDBs, and that is a potential SLA conflict.
PDB data is isolated at the database level, and backup and recovery can be managed for each individual PDB. But some financial customers' SLAs require that no file share one company's data with another.
This may not be an issue for an in-house server hosting multiple databases for a single client. But for a hosting service that may hand archive logs over to customers, it is certainly a cause for concern.

Solution: There is a simple solution for this: service/PDB isolation in a RAC environment.
PDBs can run in a RAC environment, and each PDB starts with a default service that is available on all nodes. You can simply restrict the PDB's service to one or more instances and ensure that no other PDB's services run on those nodes. This way, the online redo logs and archive log files on a node will only contain data for the PDBs allowed to run there.
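As a sketch (all names here are hypothetical: cdb1 for the CDB, erp_pdb for the PDB, cdb11/cdb12 for the RAC instances), the isolation could be set up with srvctl roughly like this:

```shell
# Create a service for the ERP PDB that prefers a single RAC instance.
# Database, PDB, and instance names below are placeholders for illustration.
srvctl add service -db cdb1 -service erp_svc -pdb erp_pdb \
    -preferred cdb11 -available cdb12

# Start the service. Sessions connecting via erp_svc land only on cdb11,
# so cdb11's online redo logs and archive logs carry only ERP (and CDB) changes,
# provided no other PDB's service is allowed to run on that node.
srvctl start service -db cdb1 -service erp_svc
```

The -available instance is only a failover target; for strict isolation you would also keep other PDBs' services off the preferred node.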

Services functionality has been in the database for quite some time. We can now put it to use for data isolation, in addition to the load balancing and monitoring it already provides.



In the above diagram, you can be sure that the online redo logs of ERP are available on only one node.

Friday, September 20, 2013

12c TEMP_UNDO_ENABLED limitations

I had blogged about the 12c new feature "no redo for undo for global temporary tables (GTT)". While preparing the demo, I created a simple script that first
set AUTOTRACE TRACEONLY STATISTICS to show the redo generated during an insert and update into a GTT,
then set TEMP_UNDO_ENABLED=FALSE to demo the default redo generation numbers.

Next, when I set TEMP_UNDO_ENABLED=TRUE and reran the insert and update, I did not see much difference in the amount of redo generated!
The reduction in redo was supposed to be more than 90% for large inserts and updates.

After various trials, I found that the manual had hinted at a limitation,
only it is not stated clearly. Below is the quote from the doc.

"When a session uses temporary objects for the first time, the current value of the TEMP_UNDO_ENABLED initialization parameter is set for the rest of the session. Therefore, if temporary undo is enabled for a session and the session uses temporary objects, then temporary undo cannot be disabled for the session. Similarly, if temporary undo is disabled for a session and the session uses temporary objects, then temporary undo cannot be enabled for the session."

What this means is that if a session has already started an undo stream in the UNDO tablespace, it cannot switch to logging undo in the temporary tablespace mid-session. You need to set TEMP_UNDO_ENABLED before accessing any GTTs.

So, in the demo, I simply reconnected (SQL> conn scott/passwd) and set TEMP_UNDO_ENABLED=TRUE to fix the problem.
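At the session level the ordering looks like this (the table names my_gtt and source_table are hypothetical):

```sql
-- Works: the parameter is set BEFORE the session touches any GTT,
-- so undo for the GTT goes to the temp tablespace and generates no redo.
ALTER SESSION SET temp_undo_enabled = TRUE;
INSERT INTO my_gtt SELECT * FROM source_table;

-- Too late: once a GTT has been used in the session, changing the
-- parameter has no effect for the rest of the session.
ALTER SESSION SET temp_undo_enabled = FALSE;  -- silently ignored from here on
```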


Another test I conducted was to see if I could insert into GTTs after opening the PDB in read-only mode. I was not able to insert data; I got the following errors.

[oracle@localhost ~]$ sqlplus system/oracle@pdb2

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 29 04:00:50 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sun Sep 29 2013 03:36:08 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB2                           READ ONLY  NO
SQL> alter system set TEMP_UNDO_ENABLED=TRUE ;

System altered.

SQL> insert into vijay.ro_test values (4324) ;
commit;
insert into vijay.ro_test values (4324)
                  *
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access


SQL> 
Commit complete.


SQL> 


Wednesday, September 18, 2013

12c NO REDO for UNDO for Global Temporary Tables(GTT) with TEMP_UNDO_ENABLED

Global temporary tables (GTT) hold temporary, non-persistent data. Depending on the definition of the GTT (ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS), the data is deleted at commit or kept for the duration of the session.
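As a quick reminder, the two flavors are declared like this (table and column names are illustrative):

```sql
-- Transaction-scoped GTT: rows disappear at COMMIT
CREATE GLOBAL TEMPORARY TABLE gtt_tx (id NUMBER, val VARCHAR2(30))
  ON COMMIT DELETE ROWS;

-- Session-scoped GTT: rows survive COMMIT and disappear when the session ends
CREATE GLOBAL TEMPORARY TABLE gtt_sess (id NUMBER, val VARCHAR2(30))
  ON COMMIT PRESERVE ROWS;
```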

 However, data in GTTs is still transactionally consistent: GTTs can take part in a transaction and support both rollback and commit. For this to work, the Oracle database has to maintain undo information for each transaction, and until now any block change in UNDO also generated some REDO. Since the data is not persistent by definition of a GTT, generating that REDO looks like a waste of precious cycles.

In 12c, Oracle can write the UNDO of a GTT to the temporary tablespace instead of the UNDO tablespace. This way you avoid generating REDO for the UNDO, while still having the ability to roll back a transaction.

With this feature, some of the I/O contention on the online redo log files is reduced. Transaction throughput depends on how fast you can commit transactions to the sequential log files, so less load on these files on a busy server is certainly a bonus. With 12c consolidation, where multiple pluggable databases share one container database, the PDBs also share the online redo log files, so reducing load on them is an important strategy.

Another important capability this enables is running reports on an Active Data Guard (ADG) standby. Reports on large datasets usually use staging tables to temporarily hold data before producing the final report. Until now, such reports could not be run on an ADG site. Since the standby database will no longer need to generate redo of its own for the staging tables, we do not have that issue.

So, How do you enable this feature ?
All you have to do is set  TEMP_UNDO_ENABLED=true;

Proof below.

SQL> @gtt
SQL> conn scott/tiger@pdb1
Connected.
SQL> set echo on
SQL> set autotrace traceonly statistics
SQL> alter system set temp_undo_enabled=false;

System altered.

SQL> drop   table my_temp_table  purge;

Table dropped.

SQL> CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  2    column1    NUMBER,
  3    column2    NUMBER
  4  )    ON COMMIT DELETE ROWS;

Table created.

SQL> 
SQL> 
SQL> insert into my_temp_table(column1,column2) select object_id,object_id from all_objects where rownum < 90001;

89791 rows created.


Statistics
----------------------------------------------------------
    139  recursive calls
   1271  db block gets
  59340  consistent gets
      0  physical reads
 260784  redo size
    857  bytes sent via SQL*Net to client
    907  bytes received via SQL*Net from client
      3  SQL*Net roundtrips to/from client
   1466  sorts (memory)
      0  sorts (disk)
  89791  rows processed

SQL> update    my_temp_table set column1=column1+2;

89791 rows updated.


Statistics
----------------------------------------------------------
     14  recursive calls
  94538  db block gets
    472  consistent gets
      0  physical reads
5482356  redo size
    859  bytes sent via SQL*Net to client
    843  bytes received via SQL*Net from client
      3  SQL*Net roundtrips to/from client
      2  sorts (memory)
      0  sorts (disk)
  89791  rows processed

SQL> commit;

Commit complete.
  -- Now reconnect to enable temp_undo_enabled.
  -- This would not work without a reconnect due to the session limitation.

SQL>  conn scott/tiger@pdb1 
Connected.
SQL> alter system set temp_undo_enabled=TRUE;

System altered.

SQL> set autotrace traceonly statistics
SQL> insert    into my_temp_table(column1,column2) select object_id,object_id from all_objects where rownum < 90001;

89791 rows created.


Statistics
----------------------------------------------------------
    132  recursive calls
   1278  db block gets
  59330  consistent gets
      0  physical reads
    280  redo size
    854  bytes sent via SQL*Net to client
    908  bytes received via SQL*Net from client
      3  SQL*Net roundtrips to/from client
   1465  sorts (memory)
      0  sorts (disk)
  89791  rows processed

SQL> update    my_temp_table set column1=column1+2;

89791 rows updated.


Statistics
----------------------------------------------------------
      5  recursive calls
  95985  db block gets
    411  consistent gets
      0  physical reads
      0  redo size
    858  bytes sent via SQL*Net to client
    843  bytes received via SQL*Net from client
      3  SQL*Net roundtrips to/from client
      1  sorts (memory)
      0  sorts (disk)
  89791  rows processed

SQL> 
SQL> commit;

Commit complete.

Oracle KILL MY OWN SESSION for developers (without system privilege)

It is a common request from development to kill some runaway session so that they can re-run the application or query. I have created a procedure that lets users kill their own sessions. This way, developers can be more productive and not wait for a DBA to do the hunting.

The checks built into the procedure are that it has to be run by the same username and from the same machine as the session you want to kill.

How to run
____________

set serveroutput on -- (for debug info )
exec kill_my_session(183,6963);

  

The two parameters passed are SID and SERIAL# from gv$session. The procedure can kill across instances in a RAC cluster as well.

SAMPLE OUTPUT
--------------------
anonymous block completed
the procedure got executed by SCOTT-vijay-PC and sid,serial is from SCOTT-vijay-PC instance 1
they are the same user
successful in executing alter system kill session '123,321,@1';

-----
Also, below is a query you can use to see the info from gv$session. One of the columns contains the ready-made kill_my_session() call, so you can simply cut and paste. :)
This is a handy script if the session you need to kill spawned multiple parallel query slaves and you need to kill the query's main (coordinator) session.
I have not concentrated on formatting, since you can use a development tool like SQL*Developer.


SCRIPT to find your session
-------------------------------
SELECT s.inst_id,
s.program,
s.module,
s.event,
s.username,
s.SQL_ID,
TO_CHAR(sysdate,'SSSSS')- TO_CHAR(sql_exec_start,'SSSSS') sec_since_sqlStart,
machine,
'exec kill_my_session(' ||sid ||',' ||serial# ||');' kill_My_session ,
lpad( TO_CHAR( TRUNC(24 *(sysdate-s.logon_time)) ) || TO_CHAR(TRUNC(sysdate) + (sysdate-s.logon_time) , ':MI:SS' ) , 10, ' ') AS UP_time ,
px.count_pq,
sq.sql_text
FROM gv$session s,
(SELECT qcsid ,COUNT(*) count_pq FROM gv$px_session GROUP BY qcsid
) px ,
v$sqlarea sq
WHERE s.type != 'BACKGROUND'
AND s.status ='ACTIVE'
AND s.sql_id IS NOT NULL
AND s.program NOT LIKE '%(P%)%'
AND s.sql_id =sq.sql_id(+)
AND s.sid =px.qcsid(+);

---------------------
The actual kill_my_session is a procedure owned by a schema with DBA privileges, exposed through a public synonym.
You also grant execute on the procedure to PUBLIC:

create or replace
procedure kill_my_session ( sid_v in number, serial in number ) as
run_by varchar2(32);
sess_user varchar2(32);
inst  number;
my_machine varchar2(32);
sess_machine varchar2(32);
begin
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER') , SYS_CONTEXT ('USERENV', 'HOST')
   into run_by , my_machine FROM DUAL;
   begin
   select username ,inst_id,machine into sess_user,inst,sess_machine from gv$session where sid=sid_v and serial#= serial;
   exception when no_data_found then
   dbms_output.put_line('no session like that in db');
   return;
   end;
  dbms_output.put_line('the procedure got executed by '||run_by||'-'||my_machine||' and sid,serial is from '||sess_user||'-'||sess_machine||' instance '||inst);
   if (run_by=sess_user) and (my_machine=sess_machine) then
     dbms_output.put_line('they are the same user');
     begin
     execute immediate 'alter system kill session '''||sid_v||','||serial||',@'||inst||'''';
     dbms_output.put_line(' successful in executing alter system kill session '''||sid_v||','||serial||',@'||inst||''';');
     exception when others then
     dbms_output.put_line(' error executing alter system kill session '''||sid_v||','||serial||',@'||inst||'''');
     dbms_output.put_line(SUBSTR(SQLERRM(SQLCODE), 1, 250));
     end;
   else
     dbms_output.put_line('cannot kill another user''s session');
  end if;
end;
/

CREATE OR REPLACE PUBLIC SYNONYM "KILL_MY_SESSION" FOR "<DBA_OWNER>"."KILL_MY_SESSION";

grant execute on KILL_MY_SESSION to PUBLIC;

Monday, September 16, 2013

12c DATA Redaction with DBMS_REDACT to hide sensitive data.

We know that Oracle has high-security options like Database Vault and Virtual Private Database.
While they are very comprehensive and make the database highly secure, it still takes a DBA/architect who understands the complete architecture of the application to implement them. They cannot be implemented quickly or without much thought about the application as a whole. This could be one reason why I am a bit behind on hands-on work with Database Vault.

On the other hand, Data Redaction is a security option where the data is scrambled only when it is displayed. To understand it better, think of redaction as similar to a TO_CHAR or TO_DATE function in the SELECT part of a SQL query: the data is only scrambled during the select.
Some examples:

 column              original           redacted          redaction type
 ------------------  -----------------  ----------------  ------------------
 social_security     543-46-2457        xxx-xx-2457       partial
 email_id            abc@oracle.com     xxx@oracle.com    regular expression
 account_balance     640000             0                 full
 random_number       3423545            5245663           random

 The redaction is enabled at the session level based on a defined policy.
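For instance, the partial social-security redaction shown above could be declared roughly like this (schema, table, and column names are hypothetical; DBMS_REDACT provides a built-in REDACT_US_SSN_F5 parameter set for the xxx-xx-NNNN format):

```sql
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'APP',              -- hypothetical schema
    object_name         => 'CUSTOMERS',        -- hypothetical table
    column_name         => 'SOCIAL_SECURITY',
    policy_name         => 'redact_ssn',
    function_type       => DBMS_REDACT.PARTIAL,
    -- built-in shortcut: mask the first five digits, keep the last four
    function_parameters => DBMS_REDACT.REDACT_US_SSN_F5,
    expression          => '1=1');             -- redact for every session
END;
/
```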

 Can we hide data from the table owner and expose it only to an application?
 One common requirement is to prevent data from being read, processed, or stolen from within the organization. One of the greatest risks for data loss is that the password of a highly privileged user is compromised and sensitive data is stolen. This is a common scenario, and I will try to implement a simple policy that can prevent it.

 The idea is to redact the sensitive data from every session that accesses it, including the table owner's, and add an exception only for the application that should view the data.
Sounds simple?
Yes.
As I mentioned earlier, the redaction policy is evaluated when the session connects to the database or pluggable database, and redaction takes place only if the policy expression evaluates to TRUE. This expression must be based on SYS_CONTEXT values from a specified namespace:
for example, SYS_CONTEXT checks on user, module, service, or machine, or simply 1=1.
In an ideal world, you could set the expression to test a SYS_CONTEXT CLIENT_IDENTIFIER value that is coded into the application. But to demonstrate a simple cut-and-paste scenario, I opted to have the expression evaluate to TRUE when connected through the default service name. If you then need to connect to the same database through an application, simply add a service to the database and connect through it by only changing tnsnames.ora. This way, you get simple redaction of data with no application code change and no manual intervention once the policy is set.




SQL> set pages 0
SQL> conn sys/oracle@pdb1 as sysdba
Connected.
SQL> --create two users
SQL>
SQL> drop user vijay cascade;

User dropped.

SQL> create user vijay identified by oracle;

User created.

SQL> grant connect,resource to vijay;

Grant succeeded.

SQL> grant execute on dbms_redact to vijay;

Grant succeeded.

SQL> grant unlimited tablespace to vijay;

Grant succeeded.

SQL> drop user vijay1 cascade;

User dropped.

SQL> create user vijay1 identified by tiger;

User created.

SQL> grant connect,resource to vijay1;

Grant succeeded.

SQL> conn vijay/oracle@pdb1
Connected.
SQL> -- as first user, create a table with SS column
SQL> create table  creditcard_info (customer_name,SS) as select object_name, object_id from all_objects;

Table created.

SQL> grant select on vijay.creditcard_info to public;

Grant succeeded.

SQL> -- select from the table and observe SS column
SQL> col customer_name format a20
SQL> col ss format 9999
SQL> select * from vijay.creditcard_info fetch first 5 rows only;
ORA$BASE             133
DUAL                 142
DUAL                 143
MAP_OBJECT           348
SYSTEM_PRIVILEGE_MAP 445

SQL> -- create redact policy where service_name is default service_name
SQL> declare
  2  service_name varchar2(10);
  3  begin
  4   select SYS_CONTEXT('USERENV','SERVICE_NAME') into service_name from dual;
  5   DBMS_REDACT.ADD_POLICY (policy_name   => 'Redact_SS',object_schema => 'VIJAY',
  6  object_name   => 'CREDITCARD_INFO', column_name   => 'SS',
  7  expression    => 'SYS_CONTEXT(''USERENV'',''SERVICE_NAME'') ='''||service_name||'''',
  8  function_type => DBMS_REDACT.FULL);
  9  end;
10  /

PL/SQL procedure successfully completed.

SQL>
SQL> -- observe that even the table owner is not able view the column content
SQL> select * from vijay.creditcard_info fetch first 5 rows only;
ORA$BASE              0
DUAL                  0
DUAL                  0
MAP_OBJECT            0
SYSTEM_PRIVILEGE_MAP  0

SQL> -- connect as second user and that user cannot see column as well
SQL> conn vijay1/tiger@pdb1
Connected.
SQL>
SQL> col customer_name format a20
SQL> col ss format 9999
SQL> select * from vijay.creditcard_info fetch first 5 rows only;
ORA$BASE              0
DUAL                  0
DUAL                  0
MAP_OBJECT            0
SYSTEM_PRIVILEGE_MAP  0

SQL>
SQL> -- NOW, connect to sys and create a new service to the same pdb
SQL> conn sys/oracle@pdb1 as sysdba
Connected.
SQL> exec DBMS_SERVICE.stop_service('pdb1_app');

PL/SQL procedure successfully completed.

SQL> BEGIN
  2    DBMS_SERVICE.DELETE_SERVICE(
  3       service_name => 'pdb1_app');
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> BEGIN
  2       DBMS_SERVICE.CREATE_SERVICE(
  3       service_name => 'pdb1_app',
  4       network_name => 'pdb1_app');
  5    DBMS_SERVICE.start_service('pdb1_app');
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> -- Now test that your application can access through this new service and can see the rows..
SQL>
SQL> conn vijay/oracle@localhost:1521/pdb1_app
Connected.
SQL>
SQL> col customer_name format a20
SQL> col ss format 9999
SQL> select * from vijay.creditcard_info fetch first 5 rows only;
ORA$BASE               133
DUAL                   142
DUAL                   143
MAP_OBJECT             348
SYSTEM_PRIVILEGE_MAP   445


If you have a middle tier and an application to test this, you can put multiple AND conditions in the policy expression to make it even more secure. For example, you could require both hostname = middletier_hostname and service = pdb1_app:
e.g.
expression    =>  'SYS_CONTEXT (''USERENV'', ''HOST'') = ''localhost.localdomain'' and SYS_CONTEXT(''USERENV'',''SERVICE_NAME'') =''pdb1_app''',

I want to point out that this is an easy way to prevent users from accessing data outside of the intended application. It still does not stop privileged users who have the "EXEMPT REDACTION POLICY" system privilege, including DBA roles. It is a first step that can prevent snooping by under-privileged users. For more comprehensive security, Database Vault, Virtual Private Database, and Fine-Grained Access Control are additional options.

Friday, September 13, 2013

12c SGA memory distribution by PDBs

With the 12c multi-tenant architecture, you can have several pluggable databases (PDBs) in one container database (CDB).
The consolidated architecture is more efficient in sharing the server's CPU and memory among PDBs.

One question that arises is: "Can you tell what percentage of the total buffer cache is used by each PDB?"

The script below can tell you just that. You will have to run it at the CDB level in your database.

 col PCT_BARCHART format a30
 set linesize 200
 col name format a10
 col con_id format 99
 col PCT format 999
    SELECT NVL(pdb.name,'CDB') name,b.con_id,
  ROUND(b.subtotal *
  (SELECT value FROM V$PARAMETER WHERE name = 'db_block_size'
  )                        /(1024 *1024)) bufferCache_MB,
  ROUND(b.subtotal         /b.total * 100) PCT,
  rpad('*',ROUND(b.subtotal/b.total * 100),'*') PCT_BARCHART
FROM
  ( SELECT DISTINCT a.*
  FROM
    (SELECT con_id,
      COUNT(*) over (partition BY con_id ) subtotal ,
      COUNT(*) over () total
    FROM sys.gv$bh
    ) a
  )b, v$pdbs pdb
  where b.con_id=pdb.con_id(+)
ORDER BY con_id;


output.



NAME       CON_ID BUFFERCACHE_MB  PCT PCT_BARCHART
---------- ------ -------------- ---- ------------------------------
CDB             1            131   28 ****************************
PDB$SEED        2             14    3 ***
PDB1            3             18    4 ****
PDB2            4             89   19 *******************
PDB_2           5             41    9 *********
PDB_1           6             41    9 *********
PDB_4           7             39    8 ********
PDB_3           8             43    9 *********
PDB_5           9             44    9 *********
9 rows selected.






Wednesday, September 11, 2013

12c clone PDBs in parallel

In one of the 12c features presentations, one of the questions asked was whether clones of PDBs can run in parallel,
i.e., are there any locks held that will serialize the creation of cloned PDBs?

To clone a PDB in 12c, the procedure is to put the source PDB in read-only mode, or to use the SEED database, which is always in read-only mode, so I was fairly certain that no blocking locks would be held. Still, to rule out any doubt, I ran a simple test: I created 5 PDBs in 5 different sessions at the same time. I noticed that all the PDBs were created concurrently, proving that they can be created in parallel.

The top wait event while the PDBs were being created was "Pluggable Database file copy" in the "User I/O" wait class.

Below are the test scripts and the output.

[oracle@localhost pdbtest]$ cat a.sh
echo " starting PDBS at  $(date)"
for i in {1..5}
do
   echo "Welcome $i times"
  rm -r /home/oracle/pdbtest/$i
   mkdir /home/oracle/pdbtest/$i
   /home/oracle/pdbtest/b.sh $i &
done
wait
echo " Ending  PDBS at  $(date)"
[oracle@localhost pdbtest]$ 

[oracle@localhost pdbtest]$ cat b.sh

sqlplus -s sys/oracle as sysdba << __EOF__
set timing on
set echo on
-- SELECT $1 from dual;
CREATE PLUGGABLE DATABASE pdb_$1
  ADMIN USER  pdb_$1 IDENTIFIED BY oracle
  ROLES = (dba)
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/home/oracle/pdbtest/$1/');
   drop pluggable database pdb_$1 including datafiles;
exit;
__EOF__
[oracle@localhost pdbtest]$ 


The output below.
[oracle@localhost pdbtest]$ ./a.sh
starting PDBS at  Thu Sep 12 12:42:06 UTC 2013
Welcome 1 times
Welcome 2 times
Welcome 3 times
Welcome 4 times
Welcome 5 times

Pluggable database created.

Elapsed: 00:00:29.82

Pluggable database created.

Elapsed: 00:00:30.04

Pluggable database dropped.

Elapsed: 00:00:00.45

Pluggable database created.

Elapsed: 00:00:30.29

Pluggable database dropped.

Elapsed: 00:00:00.37

Pluggable database dropped.

Elapsed: 00:00:00.30

Pluggable database created.

Elapsed: 00:00:30.59

Pluggable database created.

Elapsed: 00:00:30.67

Pluggable database dropped.

Elapsed: 00:00:00.18

Pluggable database dropped.

Elapsed: 00:00:00.16
Ending  PDBS at  Thu Sep 12 12:42:38 UTC 2013

12c Consolidated AWR report with PDBs

 For a DBA, the Automatic Workload Repository (AWR) is THE tool that helps in diagnosis and tuning. However, with the 12c multi-tenant architecture, where one container database (CDB) can have multiple pluggable databases (PDBs), what happens to AWR?

The manual explains 12c AWR as follows:
"If a dictionary table stores information that pertains to the CDB as a whole, instead of for each PDB, then both the metadata and the data displayed in a data dictionary view are stored in the root. For example, Automatic Workload Repository (AWR) data is stored in the root and displayed in some data dictionary views, such as the DBA_HIST_ACTIVE_SESS_HISTORY view. An internal mechanism called an object link enables a PDB to access both the metadata and the data for these types of views in the root."

 It implies that all AWR snapshots are taken at the CDB level, and each PDB has appropriate views filtered to its own CON_ID.
That makes total sense in a consolidated environment. If you want to see I/O or CPU utilization on a server, you get the full picture at the CDB level. Until now, in 11g, you needed to log in to every database to check what was happening at a particular time, say during heavy I/O. In 12c, the CDB AWR gives you a global view of all database (PDB) activity on the server, while the individual PDBs still have views to monitor the SQL pertaining to them.

 A simple test is to run exec dbms_workload_repository.create_snapshot(); in any PDB and in the CDB, then check the max snap_id for each (select con_id,max(snap_id) from dba_hist_sqlstat group by con_id;).
The snap_id is the same for all PDBs and the CDB. This implies that no matter where you run create_snapshot(), the repository is always at the CDB level, and every PDB gets an updated view of its own environment.

Below is a small test...



SQL> @a
SQL> -- conn as container DB
SQL> conn sys/oracle as sysdba
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
         5 DEV2                           READ WRITE NO
SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

SQL> select con_id,max(snap_id) from dba_hist_sqlstat group by con_id order by 1;

    CON_ID MAX(SNAP_ID)
---------- ------------
     1       1204
     3       1204
     5       1204

SQL> ---
SQL> -- conn to pdb called dev2
SQL> --
SQL> conn system/manager@dev2
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         5 DEV2                           READ WRITE NO
SQL> -- check the max snap_id
SQL> select con_id,max(snap_id) from dba_hist_sqlstat group by con_id;

    CON_ID MAX(SNAP_ID)
---------- ------------
     5       1204

SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

SQL> select con_id,max(snap_id) from dba_hist_sqlstat group by con_id order by 1;

    CON_ID MAX(SNAP_ID)
---------- ------------
     5       1205

SQL> --
SQL> -- connect back to cdb and recheck  max snap_id
SQL> conn sys/oracle as sysdba
Connected.
SQL> select con_id,max(snap_id) from dba_hist_sqlstat group by con_id order by 1;

    CON_ID MAX(SNAP_ID)
---------- ------------
     1       1205
     3       1205
     5       1205

------------------------------------

The next question you might ask is:
"Can I run AWR reports from PDBs as well as from the CDB for a consolidated report?"

Answer: Yes. awrrpt.sql runs at both the CDB level and the PDB level.
When a snapshot is taken, either in a PDB or in the CDB, the data is first populated in the CDB repository. Each PDB has all the tables required for running AWR reports; they are populated through object-link views from the CDB, with information pertaining only to that PDB.

So awrrpt.sql runs at both PDB and CDB levels without errors. However, at the PDB level it only has information pertaining to that PDB.

Update: If you want to see all the objects with an object link to the CDB, you can run the following query.

select distinct object_name from dba_objects where sharing ='OBJECT LINK';