Hemant K Chitale

I am an Oracle Database Specialist in Singapore.


Using FLASHBACK DATABASE for [destructive] D.R. Testing

Thu, 2020-03-26 11:45
Testing your Disaster Recovery strategy with an Oracle Standby Database can be done at different "levels" for the database :
1. Graceful Switchover to the D.R. site and reversing roles between the two databases, but only *querying* data at the D.R. site
2. Shutdown of the Production site and Failover to the D.R. site and only *querying* data at the D.R. site
3. Shutdown of the Production site and Failover to the D.R. site with *destructive* testing at the D.R. site, followed by restore (or flashback) of the D.R. site database to throw away all changes
4. Either Switchover or Failover with role reversal and *destructive* testing at the D.R. site, validation that data changes flow back to the Production site and, finally, restore (or flashback) of the database at both sites.

Restoring a large database at one or both sites can take time.
You may have taken a Snapshot of the database(s) and just restore the snapshot.
Or you may FLASHBACK the database(s).

{for details on how I created this Standby database configuration in 19c, see my previous posts here and here}

I will try to use FLASHBACK DATABASE here.

I start with the Primary running at the Production site :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:22:26 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Thu Mar 26 2020 23:22:02 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> drop table my_transactions purge;

Table dropped.

SQL> create table my_transactions (txn_id number, txn_data varchar2(50));

Table created.

SQL> insert into my_transactions values (1,'First at ProductionDC:Primary');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>


I then verify the state of both databases (the "oracle19c" prompt is at the Production site, the "STDBYDB" prompt is at the D.R. site) :

oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:23:48 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796230

SQL>



STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:25:02 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY REQUIRED PHYSICAL STANDBY NO
0 4796205


SQL>


So, currently, the Standby is slightly behind (SCN#4796205) the Primary (SCN#4796230). Note that FLASHBACK is *not* enabled in the databases.

I first create my RESTORE POINT on the Standby and then on the Primary.

{at the current Standby at the D.R. site}
SQL> alter database recover managed standby database cancel;

Database altered.

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /opt/oracle/FRA/STDBYDB
db_recovery_file_dest_size big integer 10G
SQL> create restore point dr_before_switch guarantee flashback database;

Restore point created.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
DR_BEFORE_SWITCH

2 4796590 YES


SQL>
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>




{at the current Primary at the Production site}
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796230

SQL> alter system switch logfile;

System altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796968

SQL> create restore point production_before_switch guarantee flashback database;

Restore point created.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
PRODUCTION_BEFORE_SWITCH

2 4797182 YES


SQL>


At each site, I have created a Restore Point (with Guarantee Flashback Database). I have ensured that the Restore Point for the current Standby Database at the D.R. site is at a *lower* SCN (4796590) than that for the current Primary (4797182) at the Production site. To further ensure this, I did a log switch and verified the CURRENT_SCN at the Primary before creating the Restore Point.

(Note that both sites have a DB_RECOVERY_FILE_DEST configured for the GUARANTEEd Restore Point).

(a small note : I have to disable Recovery at the Standby database before I can create a Restore Point and then re-enable Recovery after that.  A Restore Point cannot be created when a database is in Recovery mode).
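
A quick way to confirm that Managed Recovery has actually stopped before attempting the CREATE RESTORE POINT is to check for the MRP process on the Standby. This is a minimal sketch, not part of the original session :

-- on the Standby : no rows returned means Managed Recovery is not running
select process, status, sequence#
from v$managed_standby
where process like 'MRP%'
/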


I now put in another transaction at the Primary (Production site database) and then Switchover to the D.R. site.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into my_transactions values (2,'Second, after R.P. at ProductionDC:Primary');

1 row created.

SQL> commit;

Commit complete.

SQL> connect / as sysdba
Connected.
SQL> alter database switchover to stdbydb;

Database altered.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:41:57 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$databasse
3
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY 4899284

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>


So, now the database at the Production site is a Standby database.

I now connect to the database at the D.R. site that is now a Primary

STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:45:02 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
4899284 0


SQL> shutdown ;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
Database opened.
SQL>
SQL> alter pluggable database orclpdb1 open;

Pluggable database altered.

SQL> connect hemant/hemant@STDBYPDB1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
2 Second, after R.P. at ProductionDC:Primary

SQL>
SQL> insert into my_transactions values (3,'Destructive change at DRDC');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
STDBYDB>


{Note that "STDBYDPDB1" is my tnsnames entry for the PDB which still has the name "orclpdb1" at the D.R. site.}

I have created a "destructive" change with the third row, which should not be in production. I will now switch back to the Production data centre and verify that the row has been replicated back.

{at the D.R. site}
STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:50:29 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database switchover to orclcdb;

Database altered.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
STDBYDB>



{at the Production site}
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:52:21 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
Database opened.
SQL> alter pluggable database orclpdb1 open;
alter pluggable database orclpdb1 open
*
ERROR at line 1:
ORA-65019: pluggable database ORCLPDB1 already open


SQL>
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
2 Second, after R.P. at ProductionDC:Primary
3 Destructive change at DRDC

SQL>


So, I have been able to
1. SWITCHOVER from the Production site to the D.R. site
2. Create a new row when the database is Primary at the D.R. site
3. SWITCHOVER back to the Production site
4. Verify that the destructive row is now at the Production site.

I now need to reset both databases to the state they were in before I began the test.

{at the Production site}
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:56:16 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
5000964 0


SQL>
SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
PRODUCTION_BEFORE_SWITCH

2 4797182 YES


SQL>
SQL> FLASHBACK DATABASE TO RESTORE POINT PRODUCTION_BEFORE_SWITCH;

Flashback complete.

SQL> alter database open resetlogs ;

Database altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
5000964 4798237


SQL>
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary

SQL>


So, now, the database at the Production site has reverted to the Restore Point and all changes after the Restore Point have been discarded.

This includes TXN_ID=2, which I had added to demonstrate propagation of a change from the Production site to the D.R. site. In your testing, you must ensure that you do not make any changes after the Restore Point is created. Typically, you'd create your Production Restore Point with the applications disconnected and the database shut down and re-mounted just before the switchover. Remember, this is for D.R. testing, when you do have control over application and database shutdown and startup.
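
A minimal sketch of that recommended sequence at the Primary (the commands are illustrative; adapt them to your own change window) :

-- disconnect the applications first, then :
shutdown immediate
startup mount
create restore point production_before_switch guarantee flashback database;
alter database open;
-- now proceed with the switchover to the D.R. site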


What about the database at the D.R. site ? Can I flash it back and resume its role as a Standby ?
Remember that the Restore Point I created on the D.R. site was at a *lower* SCN than that for the Production site.

STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:08:25 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY
0 5000964


SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
DR_BEFORE_SWITCH

2 4796590 YES

PRODUCTION_BEFORE_SWITCH_PRIMARY

2 4797182 NO


SQL> FLASHBACK DATABASE TO RESTORE POINT DR_BEFORE_SWITCH;

Flashback complete.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY
0 4796590


SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>


Now the database at the Production site has resumed as a Primary database at SCN#4798237, and the database at the D.R. site has resumed as a Standby database at SCN#4796590 (lower than the Primary).

If you noticed the second entry in v$restore_point at the D.R. site -- Restore Point "PRODUCTION_BEFORE_SWITCH_PRIMARY" -- this is a 19c enhancement where a Restore Point created on the Primary automatically propagates to the Standby, with the suffix "_PRIMARY" (indicating that it came from a database in the PRIMARY role) appended to the Restore Point name.

Can I really really be sure that I have reverted both databases to their intended roles ?

I  can verify this again :

{at the Production site}
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into my_transactions values (1001,'After DR Testing, back to normal life');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
1001 After DR Testing, back to normal life

SQL>



{at the D.R. site}
SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> alter pluggable database orclpdb1 open;

Pluggable database altered.

SQL> connect hemant/hemant@stdbypdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
1001 After DR Testing, back to normal life

SQL>
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY REQUIRED PHYSICAL STANDBY RESTORE POINT ONLY
0 4802358


SQL>


To verify the behaviour, I added a new row (TXN_ID=1001) in the Primary database at the Production site and then did an OPEN READ ONLY of the Standby database at the D.R. site to check the table.
Note :  So as to not require an Active Data Guard licence, I stopped Recovery on the Standby before I did an OPEN READ ONLY.
Of course, after the verification, I resumed the Standby database in Recovery mode.

This whole exercise also did NOT need the databases to be "permanently" in FLASHBACK ON mode.  I used the Guaranteed Restore Point feature with the Recovery File Dest to generate the minimal flashback logs.  At the end of the exercise, I can DROP the Restore Points.
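
If you want to see how much space the Guaranteed Restore Points and their flashback logs are consuming in the Recovery File Dest, these queries can help (a sketch, not from the original session) :

select name, scn, storage_size
from v$restore_point
where guarantee_flashback_database = 'YES'
/

select file_type, percent_space_used, number_of_files
from v$recovery_area_usage
where file_type = 'FLASHBACK LOG'
/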

{at the Production site}
oracle19c>sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:37:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> drop restore point PRODUCTION_BEFORE_SWITCH;

Restore point dropped.

SQL> alter database open;

Database altered.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

no rows selected

SQL>


{at the D.R. site}
STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:40:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
DR_BEFORE_SWITCH
PRODUCTION_BEFORE_SWITCH_PRIMARY

SQL>
SQL> drop restore point PRODUCTION_BEFORE_SWITCH_PRIMARY;

Restore point dropped.

SQL> drop restore point DR_BEFORE_SWITCH;

Restore point dropped.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>
SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

no rows selected

SQL>


The only "catch" is that I had to bring up the Production site (Primary) database in MOUNT mode before I could drop the Restore Point.  So, you need to factor this into you D.R. testing.


Categories: DBA Blogs

Redo Shipping for Standby Database in 19c

Sun, 2020-03-15 10:39
Following my previous post, here is some setup information :

Relevant database instance parameter(s) for the Primary database :

*.local_listener='LISTENER_ORCLCDB'
*.log_archive_dest_1='LOCATION=/opt/oracle/archivelog/ORCLCDB'
*.log_archive_dest_2='SERVICE=STDBYDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STDBYDB'
*.remote_login_passwordfile='EXCLUSIVE'


Relevant database instance parameter(s) for the Standby database :

*.audit_file_dest='/opt/oracle/admin/STDBYDB/adump'
*.control_files='/opt/oracle/oradata/STDBYDB/control01.ctl','/opt/oracle/oradata/STDBYDB/control02.ctl'
*.db_file_name_convert='/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB'
*.db_name='ORCLCDB'
*.db_unique_name='STDBYDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=STDBYDBXDB)'
*.fal_server='ORCLCDB'
*.local_listener='LISTENER_STDBYDB'
*.log_archive_dest_1='LOCATION=/opt/oracle/archivelog/STDBYDB'
*.log_archive_dest_2='SERVICE=ORCLCDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLCDB'
*.log_file_name_convert='/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB'
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'


The listener.ora and tnsnames.ora entries on the Primary server :

{listener}
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = x.x.x.x)(PORT = 1521))
)
)

{tnsnames}
LISTENER_ORCLCDB =
(ADDRESS = (PROTOCOL = TCP)(HOST = x.x.x.x)(PORT = 1521))

STDBYDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = x.x.x.x)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STDBYDB)
)
)


The listener.ora and tnsnames.ora entries on the Standby server :

{static listener entry}
LISTENER_STDBYDB =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1522))
)
)

SID_LIST_LISTENER_STDBYDB =
(SID_LIST=
(SID_DESC =
(ORACLE_HOME = /opt/oracle/product/19c/dbhome_1)
(SID_NAME = STDBYDB)
)
)

{tnsnames}
LISTENER_STDBYDB =
(ADDRESS = (PROTOCOL = TCP)(HOST = x.x.x.x)(PORT = 1522))

ORCLCDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = x.x.x.x)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCLCDB)
)
)


Database listener and instance startup commands on the Standby :

STDBYDB_server>lsnrctl start listener_stdbydb

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-MAR-2020 23:05:43

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Starting /opt/oracle/product/19c/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 19.0.0.0.0 - Production
System parameter file is /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/oracle-19c-vagrant/listener_stdbydb/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1522)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=0.0.0.0)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias listener_stdbydb
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 15-MAR-2020 23:05:43
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora
Listener Log File /opt/oracle/diag/tnslsnr/oracle-19c-vagrant/listener_stdbydb/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=x.x.x.x)(PORT=1522)))
Services Summary...
Service "STDBYDB" has 1 instance(s).
Instance "STDBYDB", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
STDBYDB_server>
STDBYDB_server>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Mar 15 23:06:07 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>



Once the Standby database instance is started, I can see entries in the *Standby* database instance alert log file which show the backlog of archivelogs (sequences 43 to 46) that were generated by the Primary database instance but had not yet been applied to the Standby (the Standby was shut down while the Primary was still active) :

Completed: ALTER DATABASE   MOUNT
2020-03-15T23:06:22.664060+08:00
rfs (PID:6164): Primary database is in MAXIMUM PERFORMANCE mode
2020-03-15T23:06:22.756031+08:00
rfs (PID:6164): Selected LNO:4 for T-1.S-47 dbid 2778483057 branch 1007421686
2020-03-15T23:06:23.159102+08:00
rfs (PID:6168): Opened log for T-1.S-45 dbid 2778483057 branch 1007421686
2020-03-15T23:06:23.176308+08:00
rfs (PID:6166): Opened log for T-1.S-43 dbid 2778483057 branch 1007421686
2020-03-15T23:06:23.201644+08:00
rfs (PID:6170): Opened log for T-1.S-44 dbid 2778483057 branch 1007421686
2020-03-15T23:06:23.266812+08:00
rfs (PID:6168): Archived Log entry 3 added for B-1007421686.T-1.S-45 ID 0xa59c8470 LAD:2
2020-03-15T23:06:23.342737+08:00
rfs (PID:6166): Archived Log entry 4 added for B-1007421686.T-1.S-43 ID 0xa59c8470 LAD:2
2020-03-15T23:06:23.353286+08:00
rfs (PID:6170): Archived Log entry 5 added for B-1007421686.T-1.S-44 ID 0xa59c8470 LAD:2
2020-03-15T23:06:23.402195+08:00
rfs (PID:6168): Opened log for T-1.S-46 dbid 2778483057 branch 1007421686
2020-03-15T23:06:23.451732+08:00
rfs (PID:6168): Archived Log entry 6 added for B-1007421686.T-1.S-46 ID 0xa59c8470 LAD:2
2020-03-15T23:06:30.118056+08:00
alter database recover managed standby database disconnect from session
2020-03-15T23:06:30.124297+08:00
Attempt to start background Managed Standby Recovery process (STDBYDB)
Starting background process MRP0
2020-03-15T23:06:30.138764+08:00
MRP0 started with pid=49, OS id=6178
2020-03-15T23:06:30.139465+08:00
Background Managed Standby Recovery process started (STDBYDB)
2020-03-15T23:06:35.172532+08:00
Started logmerger process
2020-03-15T23:06:35.184395+08:00
PR00 (PID:6184): Managed Standby Recovery starting Real Time Apply
max_pdb is 3
2020-03-15T23:06:35.518115+08:00
Parallel Media Recovery started with 2 slaves
2020-03-15T23:06:35.563095+08:00
stopping change tracking
2020-03-15T23:06:35.733514+08:00
PR00 (PID:6184): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_43_1007421686.dbf
2020-03-15T23:06:36.129942+08:00
PR00 (PID:6184): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_44_1007421686.dbf
2020-03-15T23:06:36.142908+08:00
Completed: alter database recover managed standby database disconnect from session
2020-03-15T23:06:39.365000+08:00
PR00 (PID:6184): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_45_1007421686.dbf
2020-03-15T23:06:40.241700+08:00
PR00 (PID:6184): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_46_1007421686.dbf
2020-03-15T23:06:40.981414+08:00


Subsequently, as redo generation continues on the Primary, the Standby shows that it waits for incoming redo, applies it, and even performs datafile resizes :

PR00 (PID:6184): Media Recovery Waiting for T-1.S-47 (in transit)
2020-03-15T23:06:40.997356+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 47 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2020-03-15T23:12:52.195417+08:00
Resize operation completed for file# 1, old size 931840K, new size 942080K
2020-03-15T23:13:08.231444+08:00
rfs (PID:6572): Primary database is in MAXIMUM PERFORMANCE mode
rfs (PID:6572): Re-archiving LNO:4 T-1.S-47
2020-03-15T23:13:08.489447+08:00
PR00 (PID:6184): Media Recovery Waiting for T-1.S-48
2020-03-15T23:13:08.495944+08:00
rfs (PID:6572): No SRLs available for T-1
2020-03-15T23:13:08.515405+08:00
rfs (PID:6572): Opened log for T-1.S-48 dbid 2778483057 branch 1007421686
2020-03-15T23:13:08.516367+08:00
ARC2 (PID:6141): Archived Log entry 7 added for T-1.S-47 ID 0xa59c8470 LAD:1
2020-03-15T23:19:13.700490+08:00
rfs (PID:6572): Archived Log entry 8 added for B-1007421686.T-1.S-48 ID 0xa59c8470 LAD:2
2020-03-15T23:19:13.769405+08:00
rfs (PID:6572): Selected LNO:4 for T-1.S-49 dbid 2778483057 branch 1007421686
2020-03-15T23:19:14.445032+08:00
PR00 (PID:6184): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_48_1007421686.dbf
PR00 (PID:6184): Media Recovery Waiting for T-1.S-49 (in transit)
2020-03-15T23:19:14.947878+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 49 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log


Log Group#4  is actually the Standby Redo Log :

{at the Primary}
SQL> select l.group#, f.member
2 from v$standby_log l, v$logfile f
3 where l.group#=f.group#
4 /

GROUP#
----------
MEMBER
--------------------------------------------------------------------------------
4
/opt/oracle/oradata/ORCLCDB/stdbredo01.log


SQL>
{at the Standby}
SQL> select l.group#, f.member
2 from v$standby_log l, v$logfile f
3 where l.group#=f.group#
4 /

GROUP#
----------
MEMBER
--------------------------------------------------------------------------------
4
/opt/oracle/oradata/STDBYDB/stdbredo01.log


SQL>


I can monitor the Standby with this query :

23:32:25 SQL> l
1 select thread#, sequence#, group#, client_process, block#, blocks, delay_mins
2 from v$managed_standby
3 where thread#=1
4 and sequence# is not null
5 and sequence# != 0
6* order by 1,2
23:32:25 SQL> /

THREAD# SEQUENCE# GROUP# CLIENT_P BLOCK# BLOCKS DELAY_MINS
---------- ---------- ------ -------- ---------- ---------- ----------
1 47 4 ARCH 26624 945 0
1 49 4 ARCH 139264 656 0
1 50 N/A N/A 0 0 0
1 50 2 LGWR 86 1 0

23:32:26 SQL>
23:32:55 SQL> /

THREAD# SEQUENCE# GROUP# CLIENT_P BLOCK# BLOCKS DELAY_MINS
---------- ---------- ------ -------- ---------- ---------- ----------
1 47 4 ARCH 26624 945 0
1 49 4 ARCH 139264 656 0
1 50 N/A N/A 0 0 0
1 50 2 LGWR 65490 3 0

23:32:56 SQL>
23:33:19 SQL> /

THREAD# SEQUENCE# GROUP# CLIENT_P BLOCK# BLOCKS DELAY_MINS
---------- ---------- ------ -------- ---------- ---------- ----------
1 47 4 ARCH 26624 945 0
1 49 4 ARCH 139264 656 0
1 50 N/A N/A 0 0 0
1 50 2 LGWR 133538 1 0

23:33:19 SQL>
23:34:00 SQL> /

THREAD# SEQUENCE# GROUP# CLIENT_P BLOCK# BLOCKS DELAY_MINS
---------- ---------- ------ -------- ---------- ---------- ----------
1 47 4 ARCH 26624 945 0
1 49 4 ARCH 139264 656 0
1 51 N/A N/A 9 409600 0
1 51 3 LGWR 9 1 0

23:34:01 SQL>
23:38:03 SQL> /

THREAD# SEQUENCE# GROUP# CLIENT_P BLOCK# BLOCKS DELAY_MINS
---------- ---------- ------ -------- ---------- ---------- ----------
1 47 4 ARCH 26624 945 0
1 49 4 ARCH 139264 656 0
1 51 N/A N/A 66201 409600 0
1 51 3 LGWR 66201 1 0

23:38:04 SQL>


At my first execution of this query (at 23:32:25), Sequence#50 is the CURRENT Redo Log file in the Primary database. V$MANAGED_STANDBY on the Standby shows two entries for Sequence#50, but the active one is the one where CLIENT_PROCESS is LGWR, i.e. the Log Writer on the Primary that is shipping Redo to the Standby.
As transactions occur on the Primary, you can see that the current BLOCK# has also changed for Sequence#50.
When the Primary forces an Archive and Log switch to #51, V$MANAGED_STANDBY now reflects #51 as the redo sequence that is being applied.  Subsequently, the current BLOCK# changes as transactions occur on the Primary.

Thus, this monitoring does show that the Standby is receiving and applying Redo without waiting for actual Archival of the Redo Log file from the Primary.
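
As a complementary check (not part of the original session), V$DATAGUARD_STATS on the Standby reports the transport and apply lag, and V$ARCHIVE_DEST on the Primary shows any error on the remote destination. A sketch :

{at the Standby}
select name, value, time_computed
from v$dataguard_stats
where name in ('transport lag','apply lag')
/

{at the Primary}
select dest_id, status, error
from v$archive_dest
where dest_id = 2
/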


Categories: DBA Blogs

Quickly creating a Standby Database in 19c

Sun, 2020-02-23 09:45
A quick overview of creating a Standby from an active database, copying over the network.
(words in italics above are added after this post was published)

1.  Create the parameter file initSTDBYDB.ora with additional parameters
  change or add DB_UNIQUE_NAME to be STDBYDB
  change the location of control files
  add fal_server to be the lookup name for the Primary (e.g. ORCLCDB)
  add log_archive_dest_2 to specify the Primary Service and DB_UNIQUE_NAME (note : "log_archive_dest" and "log_archive_dest_2" cannot co-exist, so don't use "log_archive_dest"; a default DB_RECOVERY_FILE_DEST location is preferable)
  add db_file_name_convert and log_file_name_convert to map file names to new directories (if they are to be different or, for example, if creating the Standby on the same server !!)  --- ensure that you have the new directories (or ASM DiskGroups) available on the Standby with the right permissions (including directories for PDBs and the PDBSEED) !
  change any other hardcoded directory names (e.g. for adump)

2.  Create a listener.ora and/or a new listener with a static SID_NAME entry for the Standby DB

3.  Add an entry for the Standby  in the Primary tnsnames.ora and for the Primary in the Standby tnsnames.ora

4.  Add at least one Standby Redo Log file to the Primary Database (see the sketch just after step 8 below)

5.  Ensure that you have the password for the SYS account (or will you be using SYSDG ?) on the Primary and copy the Password file to the Standby

6.  Start the Standby listener

7.  STARTUP NOMOUNT the Standby Instance (remember to have the ORACLE_SID set !!)

8.  Start rman on the Primary with :
rman target sys/manager auxiliary sys/manager@STDBYDB
and then issue the command
duplicate target database for standby from active database dorecover;
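
For step 4, this is a minimal sketch of adding a Standby Redo Log group on the Primary. The file location matches my configuration, but the size shown is only an illustrative assumption; it should be the same size as your Online Redo Logs :

alter database add standby logfile
('/opt/oracle/oradata/ORCLCDB/stdbredo01.log') size 200M;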


and thus the execution of the RMAN DUPLICATE command (step 8) will be as :

oracle19c>rman target sys/manager auxiliary sys/manager@STDBYDB

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Feb 23 23:38:59 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)
connected to auxiliary database: ORCLCDB (not mounted)

RMAN> duplicate target database for standby from active database dorecover;

Starting Duplicate Db at 23-FEB-20
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=21 device type=DISK
current log archived

contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format '/opt/oracle/product/19c/dbhome_1/dbs/orapwSTDBYDB' ;
}
executing Memory Script

Starting backup at 23-FEB-20
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
Finished backup at 23-FEB-20

contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '/opt/oracle/oradata/STDBYDB/control01.ctl';
restore clone primary controlfile to '/opt/oracle/oradata/STDBYDB/control02.ctl' from
'/opt/oracle/oradata/STDBYDB/control01.ctl';
}
executing Memory Script

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/opt/oracle/product/19c/dbhome_1/dbs/snapcf_ORCLCDB.f tag=TAG20200223T233924
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 23-FEB-20

Starting restore at 23-FEB-20
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 23-FEB-20

contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for tempfile 1 to
"/opt/oracle/oradata/STDBYDB/temp01.dbf";
set newname for tempfile 2 to
"/opt/oracle/oradata/STDBYDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf";
set newname for tempfile 3 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/temp01.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/opt/oracle/oradata/STDBYDB/system01.dbf";
set newname for datafile 3 to
"/opt/oracle/oradata/STDBYDB/sysaux01.dbf";
set newname for datafile 4 to
"/opt/oracle/oradata/STDBYDB/undotbs01.dbf";
set newname for datafile 5 to
"/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf";
set newname for datafile 6 to
"/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf";
set newname for datafile 7 to
"/opt/oracle/oradata/STDBYDB/users01.dbf";
set newname for datafile 8 to
"/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf";
set newname for datafile 9 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf";
set newname for datafile 10 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf";
set newname for datafile 11 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf";
set newname for datafile 12 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/opt/oracle/oradata/STDBYDB/system01.dbf" datafile
3 auxiliary format
"/opt/oracle/oradata/STDBYDB/sysaux01.dbf" datafile
4 auxiliary format
"/opt/oracle/oradata/STDBYDB/undotbs01.dbf" datafile
5 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf" datafile
6 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf" datafile
7 auxiliary format
"/opt/oracle/oradata/STDBYDB/users01.dbf" datafile
8 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf" datafile
9 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf" datafile
10 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf" datafile
11 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf" datafile
12 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf" ;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /opt/oracle/oradata/STDBYDB/temp01.dbf in control file
renamed tempfile 2 to /opt/oracle/oradata/STDBYDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf in control file
renamed tempfile 3 to /opt/oracle/oradata/STDBYDB/ORCLPDB1/temp01.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/opt/oracle/oradata/ORCLCDB/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/opt/oracle/oradata/ORCLCDB/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/opt/oracle/oradata/ORCLCDB/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00008 name=/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/opt/oracle/oradata/ORCLCDB/users01.dbf
output file name=/opt/oracle/oradata/STDBYDB/users01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 23-FEB-20

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
backup as copy reuse
archivelog like "/opt/oracle/archivelog/ORCLCDB/1_41_1007421686.dbf" auxiliary format
"/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf" archivelog like
"/opt/oracle/archivelog/ORCLCDB/1_42_1007421686.dbf" auxiliary format
"/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf" ;
catalog clone archivelog "/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf";
catalog clone archivelog "/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf";
switch clone datafile all;
}
executing Memory Script

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=41 RECID=9 STAMP=1033170130
output file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=42 RECID=10 STAMP=1033170130
output file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 23-FEB-20

cataloged archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=1 STAMP=1033170133

cataloged archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=2 STAMP=1033170133

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=6 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=8 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=9 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/users01.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=10 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=11 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=12 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=13 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=14 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf

contents of Memory Script:
{
set until scn 4658614;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 23-FEB-20
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 41 is already on disk as file /opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf
archived log for thread 1 with sequence 42 is already on disk as file /opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf thread=1 sequence=41
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf thread=1 sequence=42
media recovery complete, elapsed time: 00:00:01
Finished recover at 23-FEB-20

contents of Memory Script:
{
delete clone force archivelog all;
}
executing Memory Script

released channel: ORA_DISK_1
released channel: ORA_AUX_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
deleted archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=1 STAMP=1033170133
deleted archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=2 STAMP=1033170133
Deleted 2 objects

Finished Duplicate Db at 23-FEB-20

RMAN>


Note : For simplicity, I didn't use the SPFILE specification in the DUPLICATE command to create and update an SPFILE at the Standby. I am using a simple initSTDBYDB.ora pfile.
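
If you do want RMAN to build the Standby's SPFILE as part of the DUPLICATE, the command accepts an SPFILE clause with SET overrides. This is a sketch only (the SET parameters are illustrative and must match your own directory structure; it also requires the Primary to be running on an SPFILE) :

duplicate target database for standby from active database
spfile
set db_unique_name='STDBYDB'
set control_files='/opt/oracle/oradata/STDBYDB/control01.ctl','/opt/oracle/oradata/STDBYDB/control02.ctl'
set db_file_name_convert='/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB'
set log_file_name_convert='/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB'
set fal_server='ORCLCDB'
dorecover;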


In the next blog post, I will be covering how to begin (and then monitor) shipping of redo from the Primary to the Standby.


Categories: DBA Blogs

Basic Replication -- 13 : Some Interesting SYS tables

Sun, 2020-02-09 08:45
I found an interesting SQL in the AWR report from my previous blog post.

What do you think this SQL statement does ?

DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

Here are some interesting objects (I don't know which Oracle release they started appearing in) :

SQL> l
1 select object_name, object_type
2 from dba_objects
3 where owner = 'SYS'
4 and object_name like 'MVREF$%'
5* order by 2,1
SQL> /

OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
MVREF$_STATS_SEQ SEQUENCE
MVREF$_CHANGE_STATS TABLE
MVREF$_RUN_STATS TABLE
MVREF$_STATS TABLE
MVREF$_STATS_PARAMS TABLE
MVREF$_STATS_SYS_DEFAULTS TABLE
MVREF$_STMT_STATS TABLE

7 rows selected.

SQL>


Right now, the SYS.MVREF$_STMT_STATS table appears to be empty.

SQL> desc SYS.MVREF$_STMT_STATS
Name Null? Type
----------------------------------------- -------- ----------------------------
MV_OBJ# NOT NULL NUMBER
REFRESH_ID NOT NULL NUMBER
STEP NOT NULL NUMBER
SQLID NOT NULL VARCHAR2(14)
STMT NOT NULL CLOB
EXECUTION_TIME NOT NULL NUMBER
EXECUTION_PLAN SYS.XMLTYPE STORAGE BINARY

SQL>


It would be interesting to know how Oracle is using this and the other MVREF$% tables.
SYS.MVREF$_CHANGE_STATS obviously captures DML operations.

SYS.MVREF$_RUN_STATS captures the last refresh operation (does it capture only the last operation ?). And what does SYS.MVREF$_STATS capture ? Here is what I see :

SQL> l
1 select *
2 from SYS.MVREF$_RUN_STATS
3* where MVIEWS='"HEMANT"."MV_1"'
SQL> /

RUN_OWNER_USER# REFRESH_ID NUM_MVS_TOTAL NUM_MVS_CURRENT MVIEWS BASE_TABLES METHOD ROLLBACK P R PURGE_OPTION
--------------- ---------- ------------- --------------- ------------------ ------------ ------ -------- - - ------------
PARALLELISM HEAP_SIZE A N O NUMBER_OF_FAILURES START_TIME END_TIME ELAPSED_TIME LOG_SETUP_TIME
----------- ---------- - - - ------------------ -------------------------- -------------------------- ------------ --------------
LOG_PURGE_TIME C TXNFLAG ON_COMMIT_FLAG
-------------- - ---------- --------------
106 245 1 1 "HEMANT"."MV_1" Y N 1
0 0 Y N N 0 09-FEB-20 09.55.33.000000 09-FEB-20 09.55.49.000000 16 1
PM PM
9 Y 0 0


SQL>
SQL> l
1 select mviews, count(*) from sys.mvref$_run_Stats group by mviews
2* order by 1
SQL> /

MVIEWS COUNT(*)
------------------------------------------ ----------
"HEMANT"."MV_1" 1
"HEMANT"."MV_2" 8
"HEMANT"."MV_DEPT", "HEMANT"."MV_EMP" 1
"HEMANT"."MV_FAST_NOT_POSSIBLE" 1
"HEMANT"."MV_OF_SOURCE" 1
"HEMANT"."NEW_MV" 2
"HEMANT"."NEW_MV_2_1" 1
"HEMANT"."NEW_MV_2_2" 2
"HR"."HR_MV_ON_COMMIT" 1
"HR"."MY_LARGE_REPLICA" 1

10 rows selected.

SQL>
SQL> desc sys.mvref$_run_stats
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
RUN_OWNER_USER# NOT NULL NUMBER
REFRESH_ID NOT NULL NUMBER
NUM_MVS_TOTAL NOT NULL NUMBER
NUM_MVS_CURRENT NOT NULL NUMBER
MVIEWS VARCHAR2(4000)
BASE_TABLES VARCHAR2(4000)
METHOD VARCHAR2(4000)
ROLLBACK_SEG VARCHAR2(4000)
PUSH_DEFERRED_RPC CHAR(1)
REFRESH_AFTER_ERRORS CHAR(1)
PURGE_OPTION NUMBER
PARALLELISM NUMBER
HEAP_SIZE NUMBER
ATOMIC_REFRESH CHAR(1)
NESTED CHAR(1)
OUT_OF_PLACE CHAR(1)
NUMBER_OF_FAILURES NUMBER
START_TIME TIMESTAMP(6)
END_TIME TIMESTAMP(6)
ELAPSED_TIME NUMBER
LOG_SETUP_TIME NUMBER
LOG_PURGE_TIME NUMBER
COMPLETE_STATS_AVAILABLE CHAR(1)
TXNFLAG NUMBER
ON_COMMIT_FLAG NUMBER

SQL> desc sys.mvref$_stats
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
MV_OBJ# NOT NULL NUMBER
REFRESH_ID NOT NULL NUMBER
ATOMIC_REFRESH NOT NULL CHAR(1)
REFRESH_METHOD VARCHAR2(30)
REFRESH_OPTIMIZATIONS VARCHAR2(4000)
ADDITIONAL_EXECUTIONS VARCHAR2(4000)
START_TIME TIMESTAMP(6)
END_TIME TIMESTAMP(6)
ELAPSED_TIME NUMBER
LOG_SETUP_TIME NUMBER
LOG_PURGE_TIME NUMBER
INITIAL_NUM_ROWS NUMBER
FINAL_NUM_ROWS NUMBER
NUM_STEPS NUMBER
REFMET NUMBER
REFFLG NUMBER

SQL>
SQL> select mv_obj#, count(*)
2 from sys.mvref$_stats
3 group by mv_obj#
4 /

MV_OBJ# COUNT(*)
---------- ----------
73223 1
73170 1
73065 1
73244 1
73079 8
73094 1
73197 2
73113 2
73188 1
73167 1
73110 1

11 rows selected.

SQL>
SQL> desc sys.mvref$_stats_params
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
MV_OWNER NOT NULL VARCHAR2(128)
MV_NAME NOT NULL VARCHAR2(128)
COLLECTION_LEVEL NOT NULL NUMBER
RETENTION_PERIOD NOT NULL NUMBER

SQL> select count(*)
2 from sys.mvref$_stats_params;

COUNT(*)
----------
0

SQL> desc sys.mvref$_stats_sys_defaults
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
COLLECTION_LEVEL NOT NULL NUMBER
RETENTION_PERIOD NOT NULL NUMBER

SQL> select * from sys.mvref$_stats_sys_defaults
2 /

COLLECTION_LEVEL RETENTION_PERIOD
---------------- ----------------
1 31

SQL>



Oracle has been introducing some more "internal" tables to trace MView Refresh operations.
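
These SYS.MVREF$% tables appear to sit underneath the documented DBA_MVREF_* views (DBA_MVREF_STATS, DBA_MVREF_RUN_STATS, DBA_MVREF_CHANGE_STATS, DBA_MVREF_STMT_STATS), and the collection behaviour can be controlled through DBMS_MVIEW_STATS. A small sketch (the collection level and retention values here are only illustrative) :

exec DBMS_MVIEW_STATS.SET_MVREF_STATS_PARAMS('HEMANT.MV_1','ADVANCED',45);

select mv_name, refresh_id, refresh_method, elapsed_time
from dba_mvref_stats
where mv_owner = 'HEMANT'
order by refresh_id
/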


Categories: DBA Blogs

Basic Replication -- 12 : MV Refresh Captured in AWR

Sun, 2020-02-09 08:40
Building on the example of an Index having been created on a Materialized View  in my previous blog post in this series, I've captured some information from the AWR report in 19c when this code is executed :

SQL> delete source_table_1;

72454 rows deleted.

SQL> insert into source_table_1 select object_id, owner, object_name from source_table_2;

72366 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_mview.refresh('MV_1');

PL/SQL procedure successfully completed.


(Note that "MV_OF_SOURCE" is not dependent on SOURCE_TABLE_1 and as really had no rows to refresh, did not cause any load).

Some information in the AWR Report (note that this is 19.3) :

SQL ordered by Elapsed Time             DB/Inst: ORCLCDB/ORCLCDB  Snaps: 54-55
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 108.1% of Total DB Time (s): 30
-> Captured PL/SQL account for 85.2% of Total DB Time (s): 30

Elapsed Elapsed Time
Time (s) Executions per Exec (s) %Total %CPU %IO SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
16.1 1 16.09 53.5 12.8 21.6 2uusn1kyhm9h8
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_1'); END;

8.7 1 8.66 28.8 5.3 13.6 8chh7ksnytb52
PDB: ORCLPDB1
delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

4.5 1 4.55 15.1 17.3 75.6 57ctmbtabx1rw
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_OF_SOURCE'); END;

4.0 1 3.96 13.2 37.2 26.1 dsyxhpb9annru
Module: SQL*Plus
PDB: ORCLPDB1
delete source_table_1

3.7 144,820 0.00 12.3 36.7 8.3 9ucb4uxnvzxc8
Module: SQL*Plus
PDB: ORCLPDB1
INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

3.5 1 3.52 11.7 19.7 45.9 dxnyhyq7sqf8j
PDB: ORCLPDB1
DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

3.3 1 3.25 10.8 45.2 .6 9n1gw9vpj9248
Module: SQL*Plus
PDB: ORCLPDB1
insert into source_table_1 select object_id, owner, object_name from source_tabl
e_2

2.3 2 1.14 7.6 18.4 77.4 94z4z19ygx34a
Module: SQL*Plus
PDB: ORCLPDB1
begin sys.dbms_irefstats.run_sa(:1, :2, :3, :4, :5, :6); end;

2.1 1 2.11 7.0 19.1 21.6 a2sctn32qtwnf
PDB: ORCLPDB1
/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE

1.7 1 1.67 5.6 50.3 43.5 btqubgr940awu
Module: sqlplus@oracle-19c-vagrant (TNS V1-V3)
PDB: CDB$ROOT
BEGIN dbms_workload_repository.create_snapshot(); END;

1.3 1 1.33 4.4 27.3 .0 ggaxdw7tpmqjb
PDB: ORCLPDB1
update "HEMANT"."MLOG$_SOURCE_TABLE_1" set snaptime$$ = :1 where snaptime$$ > t
o_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

0.9 89 0.01 3.1 1.7 98.6 3un99a0zwp4vd
PDB: ORCLPDB1
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,type#,flags,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and
p_obj#=obj#(+) order by order#

0.5 183 0.00 1.6 6.0 98.3 2sxqgx5hx76qr
PDB: ORCLPDB1
select /*+ rule */ bucket, endpoint, col#, epvalue, epvalue_raw, ep_repeat_count
, endpoint_enc from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by b
ucket

0.5 2 0.23 1.5 15.0 70.0 6tbg6ydrx9jmm
Module: SQL*Plus
PDB: ORCLPDB1
begin dbms_irefstats.purge_stats_mv_rp(in_time => :1, in_objnum => :2, in_r
etention_period => :3); end;

0.4 9 0.04 1.3 15.4 69.2 g1s379sraujaq
Module: SQL*Plus
PDB: ORCLPDB1
DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

0.4 2 0.20 1.3 16.4 76.8 8szmwam7fysa3
Module: SQL*Plus
PDB: ORCLPDB1
insert into wri$_adv_objspace_trend_data select timepoint, space_usage, space_a
lloc, quality from table(dbms_space.object_growth_trend(:1, :2, :3, :4, NULL, N
ULL, NULL, 'FALSE', :5, 'FALSE'))

0.4 59 0.01 1.3 9.5 97.3 03guhbfpak0w7
PDB: CDB$ROOT
select /*+ index(idl_ub1$ i_idl_ub11) */ piece#,length,piece from idl_ub1$ where
obj#=:1 and part=:2 and version=:3 order by piece#

0.3 2 0.15 1.0 11.0 .0 a8xypykqc348c
PDB: ORCLPDB1
BEGIN dbms_stats_internal.advisor_setup_obj_filter(:tid, :rid, 'EXECUTE', FAL
SE); END;

0.3 2 0.15 1.0 8.7 .0 avf5k3k0x0cxn
PDB: ORCLPDB1
insert into stats_advisor_filter_obj$ (rule_id, obj#, flag
s, type) select :rule_id, obj#, :flag_include, :type_expanded
from stats_advisor_filter_obj$ where type = :type_priv
and (bitand(flags, :flag_orcl_owned) = 0 or :get_orcl_objects = 'T')


It is quite interesting just how many distinct operations occur.

Unlike a Trace File, the AWR does not report SQL operations as a chronologically-ordered sequence.  In this case, they are ordered by Elapsed Time per operation.

Also, remember that PL/SQL calls will include the time for "child" SQL calls, so you will encounter double-counting if you add up the figures (e.g. the "dbms_mview.refresh('MV_1');" call included a number of SQL calls -- technically, you can identify them only if you *trace* the session making this PL/SQL call).  However, since there was no other activity in this database, almost everything that happened appears in this AWR extract.

The actual calls "delete source_table_1;" and "insert into source_table_1 select object_id, owner, object_name from source_table_2;" were issued *before* the "exec dbms_mview.refresh('MV_1');" and are not "child" calls.  The child calls that do appear in the AWR are not necessarily in the same chronological order as their execution.
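
(If you do want the child calls attributed to their parent in chronological order, a session-level SQL trace around the refresh is one way -- a minimal sketch, tracing your own session and then locating the raw trace file :)

SQL> -- enable SQL trace before issuing the refresh
SQL> exec dbms_session.session_trace_enable(waits=>TRUE, binds=>FALSE);
SQL> exec dbms_mview.refresh('MV_1');
SQL> exec dbms_session.session_trace_disable;

SQL> -- the raw trace file for this session, to be formatted with tkprof
SQL> select value from v$diag_info where name = 'Default Trace File';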

The interesting "child" calls from the "dbms_mview.refresh" call that I would like to point out are :

delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE


In my next post, I'll share some other findings after I found something interesting in the AWR report.


Categories: DBA Blogs

Running the (Segment) Space Advisor - on a Partitioned Table

Sat, 2020-01-18 08:30
Here is a quick demo on running the Segment Space Advisor manually

I need to start with the ADVISOR privilege

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:10 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: system
Enter password:
Last Successful login time: Sat Jan 18 2020 22:00:32 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> grant advisor to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


I can then connect with my account to run the Advisor

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:35 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: hemant
Enter password:
Last Successful login time: Sat Jan 18 2020 21:50:05 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> DECLARE
l_object_id NUMBER;
l_task_name VARCHAR2(32767) := 'Advice on My SALES_DATA Table';

BEGIN
DBMS_ADVISOR.create_task (
advisor_name => 'Segment Advisor',
task_name => l_task_name
);

DBMS_ADVISOR.create_object (
task_name => l_task_name,
object_type => 'TABLE',
attr1 => 'HEMANT',
attr2 => 'SALES_DATA',
attr3 => NULL,
attr4 => NULL,
attr5 => NULL,
object_id => l_object_id
);

DBMS_ADVISOR.set_task_parameter (
task_name => l_task_name,
parameter => 'RECOMMEND_ALL',
value => 'TRUE');

DBMS_ADVISOR.execute_task(task_name => l_task_name);
end;
/

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
PL/SQL procedure successfully completed.

SQL>


I can then review the advice :

SQL> set serveroutput on
begin
FOR cur_rec IN (SELECT f.impact,
o.type,
o.attr1,
o.attr2,
o.attr3,
o.attr4,
f.message,
f.more_info
FROM dba_advisor_findings f, dba_advisor_objects o
WHERE f.object_id = o.object_id
AND f.task_name = o.task_name
AND f.task_name = 'Advice on My SALES_DATA Table'
ORDER BY f.impact DESC)
LOOP
DBMS_OUTPUT.put_line('..');
DBMS_OUTPUT.put_line('Type : ' || cur_rec.type);
DBMS_OUTPUT.put_line('Schema : ' || cur_rec.attr1);
DBMS_OUTPUT.put_line('Table Name : ' || cur_rec.attr2);
DBMS_OUTPUT.put_line('Partition Name : ' || cur_rec.attr3);
DBMS_OUTPUT.put_line('Tablespace Name : ' || cur_rec.attr4);
DBMS_OUTPUT.put_line('Message : ' || cur_rec.message);
DBMS_OUTPUT.put_line('More info : ' || cur_rec.more_info);
END LOOP;
end;
/

SQL> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2015
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2016
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2017
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2018
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2019
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_MAXVALUE
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:

PL/SQL procedure successfully completed.

SQL>


Thus, the Advisor actually reports its findings for each Partition of the table.
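
(If you decide to act on the Reclaimable Space reported for a Partition, an online shrink is one option -- a minimal sketch, assuming the SALES_DATA table and P_2016 partition from this demo and that enabling row movement is acceptable :)

SQL> -- row movement must be enabled before a shrink can relocate rows
SQL> alter table sales_data enable row movement;

SQL> -- first compact the rows without moving the High Water Mark
SQL> alter table sales_data modify partition p_2016 shrink space compact;

SQL> -- then move the High Water Mark and release the space
SQL> alter table sales_data modify partition p_2016 shrink space;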


Note : Script based on script by Tim Hall  (@oraclebase)  at https://oracle-base.com/dba/script?category=10g&file=segment_advisor.sql


Categories: DBA Blogs

Basic Replication -- 11 : Indexes on a Materialized View

Tue, 2019-11-12 08:46
A Materialized View is actually also a physical Table (by the same name) that is created and maintained to store the rows that the MV query is supposed to present.

Since it is also a Table, you can build custom Indexes on it.

Here, my Source Table has an Index on OBJECT_ID :

SQL> create table source_table_1
2 as select object_id, owner, object_name
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_1
2 add constraint source_table_1_pk
3 primary key (object_id)
4 /

Table altered.

SQL> create materialized view log on source_table_1;

Materialized view log created.

SQL>


I then build Materialized View with  an additional Index on it :

SQL> create materialized view mv_1
2 refresh fast on demand
3 as select object_id as obj_id, owner as obj_owner, object_name as obj_name
4 from source_table_1
5 /

Materialized view created.

SQL> create index mv_1_ndx_on_owner
2 on mv_1 (obj_owner)
3 /

Index created.

SQL>


Let's see if this Index is usable.

SQL> exec  dbms_stats.gather_table_stats('','MV_1');

PL/SQL procedure successfully completed.

SQL> explain plan for
2 select obj_owner, count(*)
3 from mv_1
4 where obj_owner like 'H%'
5 group by obj_owner
6 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2523122927

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 15 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT| | 2 | 10 | 15 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | MV_1_NDX_ON_OWNER | 5943 | 29715 | 15 (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - access("OBJ_OWNER" LIKE 'H%')
filter("OBJ_OWNER" LIKE 'H%')



Note how this Materialized View has a column called "OBJ_OWNER"  (while the Source Table column is called "OWNER") and the Index ("MV_1_NDX_ON_OWNER") on this column is used.


You would have also noted that you can run DBMS_STATS.GATHER_TABLE_STATS on a Materialized View and its Indexes.

However, it is NOT a good idea to define your own Unique Indexes (including Primary Key) on a Materialized View.  During the course of a Refresh, the MV may not be consistent and the Unique constraint may be violated.   See Oracle Support Document # 67424.1



Categories: DBA Blogs

Basic Replication -- 10 : ON PREBUILT TABLE

Mon, 2019-10-28 09:05
In my previous blog post, I've shown a Materialized View that is built as an empty MV and subsequently populated by a Refresh call.

You can also define a Materialized View over an *existing*  (pre-populated) Table.

Let's say you have a Source Table and have built a Replica of it in another Schema or Database.  Building the Replica may have taken an hour or even a few hours.  You now know that the Source Table will have some changes every day and want the Replica to be updated as well.  Instead of executing, say, a TRUNCATE and INSERT into the Replica every day, you define a Fast Refresh Materialized View over it and let Oracle identify all the changes (which, on a daily basis, could be a small percentage of the total size of the Source/Replica) and update the Replica using a Refresh call.


Here's a quick demo.

SQL> select count(*) from my_large_source;

COUNT(*)
----------
72447

SQL> grant select on my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> alter session enable parallel dml;

Session altered.

SQL> create table my_large_replica
2 as select * from hemant.my_large_source
3 where 1=2;

Table created.

SQL> insert /*+ PARALLEL (8) */
2 into my_large_replica
3 select * from hemant.my_large_source;

72447 rows created.

SQL>


So, now, HR has a Replica of the Source Table in the HEMANT schema.  Without any subsequent updates to the Source Table, I create the Materialized View definition, with the "ON PREBUILT TABLE" clause.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> create materialized view log on my_large_source;

Materialized view log created.

SQL> grant select, delete on mlog$_my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL>
SQL> create materialized view my_large_replica
2 on prebuilt table
3 refresh fast
4 as select * from hemant.my_large_source;

Materialized view created.

SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72447

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>


I am now ready to add data and Refresh the MV.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> desc my_large_source
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COL NOT NULL NUMBER
PRODUCT_NAME VARCHAR2(128)
FACTORY VARCHAR2(128)

SQL> insert into my_large_source
2 values (74000,'Revolutionary Pin','Outer Space');

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_my_large_source;

COUNT(*)
----------
1

SQL>
SQL> connect hr/HR@orclpdb1
Connected.
SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72448

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>
SQL> execute dbms_mview.refresh('MY_LARGE_REPLICA','F');

PL/SQL procedure successfully completed.

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72448

SQL>
SQL> select id_col, product_name
2 from my_large_replica
3 where factory = 'Outer Space'
4 /

ID_COL
----------
PRODUCT_NAME
--------------------------------------------------------------------------------
74000
Revolutionary Pin


SQL>
SQL> select count(*) from hemant.mlog$_my_large_source;

COUNT(*)
----------
0

SQL>


Instead of rebuilding / repopulating the Replica Table with all 72,448 rows, I used the MV definition and the MV Log on the Source Table to copy over that 1 new row.

The above demonstration is against 19c.

Here are two older posts, one in March 2009 and the other in January 2012 on an earlier release of Oracle.


Categories: DBA Blogs

Basic Replication -- 9 : BUILD DEFERRED

Sun, 2019-10-27 10:41
A Materialized View can be created with all the target rows pre-inserted (and subsequently refreshed for changes).  This is the default behaviour.

However, it is possible to define a Materialized View without actually populating it.

You might want to take such a course of action for scenarios like :

1.  Building a number of Materialized Views along with a code migration but not wanting to spend time that would be required to actually populate the MVs  and deferring the population to a subsequent maintenance window after which the code and data will be referenced by the application/users

2.  Building a number of MVs in a Tablespace that is initially small but will be enlarged in the maintenance window to handle the millions of rows that will be inserted

3.  Building an MV definition without actually having all the "clean" Source Table(s) rows currently available, deferring the cleansing of data to a later date and then populating the MV after the cleansing

The BUILD DEFERRED clause comes in handy here.


Let's say that we have a NEW_SOURCE_TABLE (with many rows and/or with rows that are yet to be cleansed) and want to build an "empty" MV on it  (OR that this MV is one of a number of MVs that are being built together simply for migration of dependent code, without the data).

SQL> desc new_source_table
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL NUMBER
DATA_ELEMENT_1 VARCHAR2(15)
DATA_ELEMENT_2 VARCHAR2(15)
DATE_COL DATE

SQL>
SQL> create materialized view log on new_source_table;
create materialized view log on new_source_table
*
ERROR at line 1:
ORA-12014: table 'NEW_SOURCE_TABLE' does not contain a primary key constraint


SQL> create materialized view log on new_source_table with rowid;

Materialized view log created.

SQL>
SQL> create materialized view new_mv
2 build deferred
3 refresh with rowid
4 as select id as id_number,
5 data_element_1 as data_key,
6 data_element_2 as data_val,
7 date_col as data_date
8 from new_source_table
9 /

Materialized view created.

SQL>


Notice that my Source Table currently does not have a Primary Key.  The MV Log can be created with the "WITH ROWID" clause in the absence of the Primary Key.
The Materialized View is also built with the ROWID as the Refresh cannot use a Primary Key.
Of course, you may well have a Source Table with a Primary Key.  In that case, you can continue with the default of using the Primary Key instead of the ROWID.
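
(For illustration only -- not what I ran above -- if NEW_SOURCE_TABLE did have a Primary Key, the equivalent pair of statements might look like this :)

SQL> -- with a Primary Key present, the MV Log can be (and defaults to being) Primary Key based
SQL> create materialized view log on new_source_table with primary key;

SQL> create materialized view new_mv
  2  build deferred
  3  refresh with primary key
  4  as select id as id_number,
  5     data_element_1 as data_key,
  6     data_element_2 as data_val,
  7     date_col as data_date
  8  from new_source_table
  9  /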

Once the Source Table is properly populated / cleansed and/or the tablespace containing the MV is large enough, the MV is first refreshed with a COMPLETE Refresh and subsequently with FAST Refreshes.

SQL> select count(*) from new_source_table;

COUNT(*)
----------
106

SQL> execute dbms_mview.refresh('NEW_MV','C',atomic_refresh=>FALSE);

PL/SQL procedure successfully completed.

SQL>


Subsequently, when one or more rows are inserted/updated in the Source Table, the next Refresh is a Fast Refresh.

SQL> execute dbms_mview.refresh('NEW_MV','F');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'NEW_MV'
4 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF
------------------ --------- -------- --------
NEW_MV DEMAND FORCE FAST

SQL>


Thus, we started off with an empty MV and later used REFRESHs (COMPLETE and FAST) to populate it.


Categories: DBA Blogs

Basic Replication -- 8 : REFRESH_MODE ON COMMIT

Sat, 2019-10-19 09:26
So far, in previous posts in this series, I have demonstrated Materialized Views that set to REFRESH ON DEMAND.

You can also define a Materialized View that is set to REFRESH ON COMMIT -- i.e. every time DML against the Source Table is committed, the MV is also immediately updated.  Such an MV must be in the same database  (you cannot define an ON COMMIT Refresh across two databases  -- to do so, you have to build your own replication code, possibly using Database Triggers or external methods of 2-phase commit).

Here is a quick demonstration, starting with a Source Table in the HEMANT schema and then building a FAST REFRESH MV in the HR schema.

SQL> show user
USER is "HEMANT"
SQL> create table hemant_source_tbl (id_col number not null primary key, data_col varchar2(30));

Table created.

SQL> grant select on hemant_source_tbl to hr;

Grant succeeded.

SQL> create materialized view log on hemant_source_tbl;

Materialized view log created.

SQL> grant select on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant create materialized view to hr;

Grant succeeded.

SQL> grant on commit refresh on hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant on commit refresh on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>


Note : I had to grant the CREATE MATERIALIZED VIEW privilege to HR for this test case. Also, as the MV is to Refresh ON COMMIT, two additional object-level grants on the Source Table and the Materialized View Log are required as the Refresh is across schemas.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> create materialized view hr_mv_on_commit
2 refresh fast on commit
3 as select id_col as primary_key_col, data_col as value_column
4 from hemant.hemant_source_tbl;

Materialized view created.

SQL>


Now that the Materialized View is created successfully, I will test DML against the table and check that an explicit REFRESH call (e.g. DBMS_MVIEW.REFRESH or DBMS_REFRESH.REFRESH) is not required.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into hemant_source_tbl values (1,'First');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL> connect hr/HR@orclpdb1
Connected.
SQL> select * from hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL>


The Materialized View in the HR schema was refreshed immediately, without an explicit REFRESH call.

Remember : An MV that is to REFRESH ON COMMIT must be in the same database as the Source Table.




Categories: DBA Blogs

Basic Replication -- 7 : Refresh Groups

Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR  DEPARTMENTS and EMPLOYEES table with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading from and deleting from the MV Logs.

Next, I setup the MVs and the Refresh Group

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after the previous one.  (The Refresh Interval must be set to something larger than the time it takes to execute the actual Refresh.)
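
(If the schedule needs adjusting later, the group definition can be modified in place with DBMS_REFRESH.CHANGE -- a minimal sketch with assumed new timings :)

SQL> -- move the next refresh to 6 hours from now and run twice a day thereafter
SQL> execute dbms_refresh.change(-
> name=>'HR_MVS',-
> next_date=>sysdate+6/24,-
> interval=>'sysdate+0.5');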

However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

DEPT_ID DEPT_NAME
---------- ------------------------------
280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

DEPT_ID EMP_ID FIRST_NAME LAST_NAME HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
280 208 Hemant Chitale 12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. unable to allocate extent to grow the MV), both the INSERTs would be rolled back.  (Note : It is not a necessary requirement that both source tables have new / updated rows, the Refresh Group works even if only one of the two tables has new / updated rows).

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.

You can build multiple Refresh Groups, each consisting of *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.
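
(A Refresh Group can also be extended after creation with DBMS_REFRESH.ADD -- a minimal sketch, where MV_JOBS is a hypothetical additional Materialized View :)

SQL> -- MV_JOBS is a hypothetical MV used only for illustration
SQL> execute dbms_refresh.add(-
> name=>'HR_MVS',-
> list=>'MV_JOBS',-
> lax=>TRUE);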

(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs

Basic Replication -- 6 : COMPLETE and ATOMIC_REFRESH

Sun, 2019-09-29 09:36
Till 9i, if you did a COMPLETE Refresh of a Single Materialized View, Oracle would do a TRUNCATE followed by an INSERT.
If you did a COMPLETE Refresh of a *group* of Materialized Views, Oracle would execute DELETE and INSERT so that all the MVs would be consistent to the same point in time.  Thus, if one of the MVs failed to refresh (e.g. the SELECT on the Source Table failed or the INSERT failed), it would be able to do a ROLLBACK of all the MVs to revert them to the status (i.e. all rows that were present) as of the time before the Refresh began.  This would also allow all MVs to be available for queries with the rows as of before the Refresh began, even while the Refresh was running (because the Refresh of the multiple MVs was a single transaction).

In 10g, the behaviour for a *group* of Materialized Views remained the same.  However, for a single MV, the default was now to do a DELETE and INSERT as well.  This would allow the MV to be queryable as well while the Refresh was running.
This change came as a surprise to many customers (including me at a site where I was managing multiple single MVs) !
This change meant that the single MV took longer to run (because DELETEing all the rows takes a long time !) and required much more Undo and Redo space !!

Here's a demonstration in 19c (as in the previous posts in this series) :

First, I start with a new, larger, Source Table  and then build two MVs on it :

SQL> create table source_table_2
2 as select *
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_2
2 add constraint source_table_2_pk
3 primary key (object_id)
4 /

Table altered.

SQL> select count(*)
2 from source_table_2
3 /

COUNT(*)
----------
72366

SQL>
SQL> create materialized view new_mv_2_1
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL> create materialized view new_mv_2_2
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type, fast_refreshable
2 from user_mviews
3 where mview_name like 'NEW_MV%'
4 order by 1
5 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF FAST_REFRESHABLE
---------------- --------- -------- -------- ------------------
NEW_MV_2_1 DEMAND FORCE COMPLETE DIRLOAD_DML
NEW_MV_2_2 DEMAND FORCE COMPLETE DIRLOAD_DML

SQL>


Note that it *IS* possible to have two Materialized Views with exactly the same QUERY co-existing.  They may have different REFRESH_METHODs (here both are the same) and/or may have different frequencies of Refresh calls when the REFRESH_MODE is 'DEMAND'

Note also that I did not specify any REFRESH clause, so both defaulted to REFRESH_METHOD FORCE and REFRESH_MODE DEMAND.

(Question 1 : Why might I have two MVs with the same QUERY and the same REFRESH_METHOD but different frequency or different times when the Refresh is called ?)

(Question 2 : What is DIRLOAD_DML ?)


Now, let me issue two different COMPLETE Refresh calls and trace them.

SQL> execute dbms_mview.refresh('NEW_MV_2_1','C');
SQL> execute dbms_mview.refresh('NEW_MV_2_2','C',atomic_refresh=>FALSE); -- from a different session
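
(The raw trace files were formatted with tkprof; a minimal sketch of the command, where sys=no suppresses recursive SYS statements and sort=exeela orders statements by elapsed execution time -- the trace file path is a placeholder for whatever V$DIAG_INFO reports as the 'Default Trace File' for each session :)

oracle19c>tkprof /path/to/ORCLCDB_ora_12345.trc refresh_new_mv_2_1.txt sys=no sort=exeela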


Now, I look at the trace files.

For the NEW_MV_2_1  (where ATOMIC_REFRESH defaulted to TRUE), I see :

/* MV_REFRESH (DEL) */ delete from "HEMANT"."NEW_MV_2_1"

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"



And for the NEW_MV_2_2 (where ATOMIC_REFRESH was set to FALSE), I see :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT

/* MV_REFRESH (DEL) */ truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO "HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


So, the default ATOMIC_REFRESH=TRUE caused a DELETE followed by a conventional INSERT, while ATOMIC_REFRESH=FALSE caused a TRUNCATE followed by an INSERT APPEND (a Direct Path Insert).  The second method is much faster.



More information from a tkprof for the NEW_MV_2_1 (ATOMIC_REFRESH=TRUE) is :

INSERT INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME",
"OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 66 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.01 0.01 0 66 0 0




delete from "HEMANT"."NEW_MV_2_1"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.47 1.77 151 173 224377 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.47 1.77 151 173 224377 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE NEW_MV_2_1 (cr=178 pr=151 pw=0 time=1783942 us starts=1)
72366 72366 72366 INDEX FAST FULL SCAN SYS_C_SNAP$_82SOURCE_TABLE_2_PK (cr=157 pr=150 pw=0 time=54982 us starts=1 cost=42 size=361830 card=72366)(object id 73111)




INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID",
"OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID",
"SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME",
"SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 4 0
Execute 1 0.71 0.71 0 2166 152128 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.71 0.71 0 2166 152132 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL NEW_MV_2_1 (cr=2257 pr=0 pw=0 time=723103 us starts=1)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=30476 us starts=1 cost=392 size=3980130 card=72366)




Note that the first INSERT was only Parsed but *not* Executed.


While that for NEW_MV_2_2 (ATOMIC_REFRESH=FALSE) shows :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0




truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 1 0
Execute 1 0.06 0.56 13 15 511 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.57 13 15 512 0



INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO
"HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE")
SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.09 0 43 0 0
Execute 1 0.22 0.56 3 1487 1121 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.24 0.65 3 1530 1121 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD AS SELECT NEW_MV_2_2 (cr=3688 pr=7 pw=586 time=953367 us starts=1)
72366 72366 72366 OPTIMIZER STATISTICS GATHERING (cr=3337 pr=0 pw=0 time=142500 us starts=1 cost=392 size=3980130 card=72366)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=40841 us starts=1 cost=392 size=3980130 card=72366)




ALTER INDEX "HEMANT"."SYS_C_SNAP$_83SOURCE_TABLE_2_PK" REBUILD NOPARALLEL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.08 0 1 1 0
Execute 1 0.11 0.48 586 626 680 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.11 0.56 586 627 681 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 INDEX BUILD UNIQUE SYS_C_SNAP$_83SOURCE_TABLE_2_PK (cr=632 pr=586 pw=150 time=392351 us starts=1)(object id 0)
72366 72366 72366 SORT CREATE INDEX (cr=590 pr=586 pw=0 time=148023 us starts=1)
72366 72366 72366 MAT_VIEW ACCESS FULL NEW_MV_2_2 (cr=590 pr=586 pw=0 time=86149 us starts=1 cost=166 size=361830 card=72366)



So, the ATOMIC_REFRESH=FALSE caused
a. TRUNCATE
b. INSERT APPEND (i.e. Direct Path Insert, minimizing Undo and reducing Redo)
c. REBUILD INDEX

I am not comparing Execution Time for the two Refreshes.  I would rather that you focus on the fact that the DELETE (in ATOMIC_REFRESH=TRUE) can be very expensive (think Undo and Redo) when it has to delete, say, millions of rows.  Also, note that the conventional INSERT is a regular operation that also causes Undo and Redo to be generated.

ATOMIC_REFRESH=FALSE makes a significant difference to the Undo and Redo generation and will be faster for large Materialized Views.

What is the downside of ATOMIC_REFRESH=FALSE ?  Firstly, the MV will not present any rows to a query that executes against it while the Refresh is in progress.  Secondly, if the Refresh fails, the MV is left in a Truncated state (without rows) until another Refresh is executed.
The ATOMIC_REFRESH=TRUE avoids  these two pitfalls, at the expense of resources (Undo and Redo) and time to refresh.

For more information, see Oracle Support Document #553464.1


Categories: DBA Blogs

Basic Replication -- 5 : REFRESH_METHOD : FAST or FORCE ?

Wed, 2019-09-25 10:14
In the previous blog post, I had a remark "We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post."

This is in the context of whether it is a FORCE or FAST that shows up as the REFRESH_METHOD.  A FORCE attempts a FAST and, if it can't do so (e.g. the Materialized View Log is not accessible), attempts a COMPLETE Refresh from all the rows of the Source Table.

Other than a MV Log being a requirement, there are constraints on which types of Materialized Views can do a FAST Refresh.

SQL> create materialized view mv_fast_not_possible
2 refresh fast on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /
as select id, data_element_2, sysdate
*
ERROR at line 3:
ORA-12015: cannot create a fast refresh materialized view from a complex query


SQL> !oerr ora 12015
12015, 00000, "cannot create a fast refresh materialized view from a complex query"
// *Cause: Neither ROWIDs and nor primary key constraints are supported for
// complex queries.
// *Action: Reissue the command with the REFRESH FORCE or REFRESH COMPLETE
// option or create a simple materialized view.

SQL>


Thus, a "complex" query -- here one that adds a SYSDATE column -- cannot use a FAST Refresh.
(For all the restrictions, see Paragraph "5.3.8.4 General Restrictions on Fast Refresh" in the 19c documentation. )

SQL> create materialized view mv_fast_not_possible
2 refresh force on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /

Materialized view created.

SQL> select refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'MV_FAST_NOT_POSSIBLE'
4 /

REFRESH_M REFRESH_ LAST_REF
--------- -------- --------
DEMAND FORCE COMPLETE

SQL>
SQL> insert into source_table
2 values (2000,'TwoThousand','NewTwoTh',sysdate);

1 row created.

SQL> select * from source_table order by date_col ;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19
5 Fifth Five 16-SEP-19
6 Sixth TwoHundred 19-SEP-19
7 Seventh ThreeHundred 19-SEP-19
2000 TwoThousand NewTwoTh 25-SEP-19

7 rows selected.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_FAST_NOT_POSSIBLE');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode,refresh_method,last_refresh_type, last_refresh_date
2 from user_mviews
3 order by last_refresh_date
4 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF LAST_REFR
--------------------- --------- -------- -------- ---------
MV_OF_SOURCE DEMAND FAST FAST 25-SEP-19
MV_2 DEMAND FORCE FAST 25-SEP-19
MV_FAST_NOT_POSSIBLE DEMAND FORCE COMPLETE 25-SEP-19

SQL>


MV_FAST_NOT_POSSIBLE will always undergo a COMPLETE Refresh using REFRESH_METHOD='FORCE'.

MV_2 has REFRESH_METHOD='FORCE' because it was created with "refresh on demand" with the "fast" keyword missing.  Nevertheless, it is a "simple" Materialized View so does a FAST Refresh.

MV_OF_SOURCE was created with "refresh fast on demand", so it is already configured as REFRESH_METHOD='FAST'
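
(As an aside, you can ask the database itself why a particular MV cannot use a FAST Refresh, with DBMS_MVIEW.EXPLAIN_MVIEW -- a minimal sketch, assuming you have first created MV_CAPABILITIES_TABLE in your schema by running the utlxmv.sql script from $ORACLE_HOME/rdbms/admin :)

SQL> -- record the refresh capabilities of this MV into MV_CAPABILITIES_TABLE
SQL> exec dbms_mview.explain_mview('MV_FAST_NOT_POSSIBLE');

SQL> select capability_name, possible, msgtxt
  2  from mv_capabilities_table
  3  where mvname = 'MV_FAST_NOT_POSSIBLE'
  4  and capability_name like 'REFRESH_FAST%'
  5  order by seq
  6  /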



Categories: DBA Blogs

Basic Replication -- 4 : Data Dictionary Queries

Tue, 2019-09-17 08:58
Now that we have two Materialized Views against a Source table, how can we identify the relationship via the data dictionary ?

This is the query to the data dictionary in the database where the Source Table exists :

SQL> l
1 select v.owner MV_Owner, v.name MV_Name, v.snapshot_site, v.refresh_method,
2 l.log_table MV_Log_Name, l.master MV_Source,
3 to_char(l.current_snapshots,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
4 from dba_registered_snapshots v, dba_snapshot_logs l
5 where v.snapshot_id = l.snapshot_id
6* and l.log_owner = 'HEMANT'
SQL> /

MV_OWNER MV_NAME SNAPSHOT_SITE REFRESH_MET MV_LOG_NAME MV_SOURCE LAST_REFRESH_DATE
-------- ---------------- ------------------ ----------- ------------------ --------------------- ------------------
HEMANT MV_OF_SOURCE ORCLPDB1 PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE 16-SEP-19 22:41:04
HEMANT MV_2 ORCLPDB1 PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE 16-SEP-19 22:44:37

SQL>


I have run the query on the DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_LOGS because the join on SNAPSHOT_ID is not available between DBA_REGISTERED_MVIEWS and DBA_MVIEW_LOGS.  Similarly, the CURRENT_SNAPSHOTS column is also not available in DBA_MVIEW_LOGS.  These two columns are important when you have *multiple* MViews against the same Source Table.

Note the "Snapshot_Site" is required because the Materialized View can be in a different database.  In this example, the MViews are in the same database as the Source Table. 

The target database containing the MViews will not have the Source Table "registered" into a data dictionary view.  The Source Table will be apparent from the QUERY column of DBA_MVIEWS (also, if the Source Table is in a different database, look at the MASTER_LINK column to identify the Database Link that connects to the source database).


UPDATE :  In case you are wondering what query you'd write against the database containing the Materialized View(s), you can simply query DBA_MVIEWS.

SQL> l
1 select mview_name, query, master_link, refresh_mode, refresh_method,
2 last_refresh_type, to_char(last_refresh_date,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
3 from dba_mviews
4 where owner = 'HEMANT'
5* order by 1 desc
SQL> /

MVIEW_NAME
------------
QUERY
--------------------------------------------------------------------------------
MASTER_LINK REFRESH_M REFRESH_ LAST_REF LAST_REFRESH_DATE
------------ --------- -------- -------- ---------------------------
MV_OF_SOURCE
SELECT "SOURCE_TABLE"."ID" "ID","SOURCE_TABLE"."DATA_ELEMENT_1" "DATA_ELEMENT_1"
,"SOURCE_TABLE"."DATA_ELEMENT_2" "DATA_ELEMENT_2","SOURCE_TABLE"."DATE_COL" "DAT
E_COL" FROM "SOURCE_TABLE" "SOURCE_TABLE"
DEMAND FAST FAST 16-SEP-19 22:41:04

MV_2
select id, data_element_2
from source_table
DEMAND FORCE FAST 16-SEP-19 22:44:37


SQL>


Here, the MASTER_LINK would specify the name of the Database Link used to connect to the Master (i.e. Source) table, if it was a different database.

REFRESH_MODE is ON DEMAND so that the MVs can be refreshed by either scheduled jobs or manually initiated calls -- as I've done in previous blog posts.  (The alternative can be ON COMMIT, if the Source Table and MV are in the same database).

LAST_REFRESH_TYPE is FAST, meaning that the refresh was able to use the MV Log on the Source Table to identify changes and merge them into the MV.  See the entries from the trace file that I've shown in the previous blog post.

Note the difference in the two REFRESH_METHOD values for the two MVs.
MV_OF_SOURCE was created as "refresh fast on demand" while "MV_2" was created as "refresh on demand".

We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post.

Question : Why does the QUERY look so different between MV_OF_SOURCE and MV_2 ?



Categories: DBA Blogs

Basic Replication -- 3 : Multiple Materialized Views

Mon, 2019-09-16 09:53
You can define multiple Materialized Views against the same Source Table with differences in :
a) the SELECT clause column list
b) Predicates in the WHERE clause
c) Joins to one or more other Source Table(s) in the FROM clause
d) Aggregates in the SELECT clause

Thus, for my Source Table, I can add another Materialized View :

SQL> create materialized view mv_2
2 refresh on demand
3 as select id, data_element_2
4 from source_table;

Materialized view created.

SQL>
SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
0

SQL> insert into source_table
2 values (5, 'Fifth','Five',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>
SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> select * from mv_of_source;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
5 Fifth Five 16-SEP-19
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>


Now that there are two MVs referencing the Source Table, the MV Log is not completely purged when only one of the two MVs is refreshed.  Oracle still maintains entries in the MV Log for the second MV to be able to execute a Refresh.

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
5 16-SEP-19 I N
FE
5.6299E+14


SQL> execute dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from mv_2;

ID DATA_ELEMENT_2
---------- ---------------
101 One
103 Three
104 Updated
5 Five

SQL>


The MV Log is "purged" only when the second (actually the last) MV executes a Refresh.  Of course, if more rows were inserted / updated in the Source Table between the Refresh of MV_OF_SOURCE and MV_2, there would be corresponding entries in the MV Log.

So, Oracle does use some mechanism to track MVs that execute Refreshes and does continue to "preserve" rows in the MV Log for MVs that haven't been refreshed yet.

As I've noted in two earlier posts, in 2007 and 2012, the MV Log (called "Snapshot Log" in the 2007 post) can keep growing for a long time if you have one or more Materialized Views that just aren't executing their Refresh  calls.
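
(If an MV Log has grown because one registered MV never refreshes, it can be trimmed with DBMS_MVIEW.PURGE_LOG -- a minimal sketch; be aware that the lagging MV will then need a COMPLETE refresh the next time it is refreshed :)

SQL> -- remove the log rows that are only being kept for the single least-recently-refreshed MV
SQL> exec dbms_mview.purge_log(master=>'SOURCE_TABLE', num=>1, flag=>'DELETE');

SQL> -- confirm the MV Log has shrunk
SQL> select count(*) from mlog$_source_table;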


Categories: DBA Blogs

Basic Replication -- 2b : Elements for creating a Materialized View

Mon, 2019-09-09 09:02
Continuing the previous post, what happens when there is an UPDATE to the source table ?

SQL> select * from source_table;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
1 First One 18-AUG-19
3 Third Three 18-AUG-19
4 Fourth Four 18-AUG-19

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from rupd$_source_table;

no rows selected

SQL>
SQL> update source_table
2 set data_element_2 = 'Updated', date_col=sysdate
3 where id=4;

1 row updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> commit;

Commit complete.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
4 01-JAN-00 U U
18
8.4443E+14


SQL>

So, it is clear that UPDATES, too, go to the MLOG$ table.

What about multi-row operations ?

SQL> update source_table set id=id+100;

3 rows updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
4 01-JAN-00 U U
18
8.4443E+14

1 01-JAN-00 D O
00
1.4075E+15

101 01-JAN-00 I N
FF
1.4075E+15

3 01-JAN-00 D O
00
1.4075E+15

103 01-JAN-00 I N
FF
1.4075E+15

4 01-JAN-00 D O
00
1.4075E+15

104 01-JAN-00 I N
FF
1.4075E+15


7 rows selected.

SQL>



Wow ! Three rows updated in the Source Table translated to 6 rows in the MLOG$ table ! Each updated row was represented by a DMLTYPE$$='D' and OLD_NEW$$='O' entry followed by a DMLTYPE$$='I' and OLD_NEW$$='N' entry.  So that should mean "delete the old row from the materialized view and insert the new row into the materialized view" ??

(For the time being, we'll ignore SNAPTIME$$ being '01-JAN-00').

So an UPDATE to the Source Table of a Materialized View can be expensive during the UPDATE itself (as it creates two entries in the MLOG$ table) and for subsequent refreshes as well !
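
(A quick way to gauge how much work a pending refresh will have to do is to summarise the MV Log by operation type -- a simple query over the columns shown above :)

SQL> -- an updated row appears as a 'D'/'O' (old image) entry plus an 'I'/'N' (new image) entry
SQL> select dmltype$$, old_new$$, count(*)
  2  from mlog$_source_table
  3  group by dmltype$$, old_new$$
  4  order by 1, 2
  5  /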

What happens when I refresh the Materialized View ?

SQL> execute dbms_session.session_trace_enable;

PL/SQL procedure successfully completed.

SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> execute dbms_session.session_trace_disable;

PL/SQL procedure successfully completed.

SQL>


The session trace file shows these operations (I've excluded a large number of recursive SQLs and SQLs that were sampling the data for optimisation of execution plans):

update "HEMANT"."MLOG$_SOURCE_TABLE" 
set snaptime$$ = :1
where snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

/* QSMQ VALIDATION */ ALTER SUMMARY "HEMANT"."MV_OF_SOURCE" COMPILE

select 1 from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1
and ((dmltype$$ IN ('I', 'D')) or (dmltype$$ = 'U' and old_new$$ in ('U', 'O')
and sys.dbms_snapshot_utl.vector_compare(:2, change_vector$$) = 1))
and rownum = 1

SELECT /*+ NO_MERGE(DL$) ROWID(MAS$) ORDERED USE_NL(MAS$) NO_INDEX(MAS$) PQ_DISTRIBUTE(MAS$,RANDOM,NONE) */
COUNT(*) cnt
FROM ALL_SUMDELTA DL$, "HEMANT"."SOURCE_TABLE" MAS$
WHERE DL$.TABLEOBJ# = :1 AND DL$.TIMESTAMP > :2 AND DL$.TIMESTAMP <= :3
AND MAS$.ROWID BETWEEN DL$.LOWROWID AND DL$.HIGHROWID

select dmltype$$, count(*) cnt from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1 and snaptime$$ <= :2
group by dmltype$$ order by dmltype$$

delete from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ <= :1


and this being the refresh (merge update) of the target MV
DELETE FROM "HEMANT"."MV_OF_SOURCE" SNAP$ 
WHERE "ID" IN
(SELECT * FROM (SELECT MLOG$."ID"
FROM "HEMANT"."MLOG$_SOURCE_TABLE" MLOG$
WHERE "SNAPTIME$$" > :1 AND ("DMLTYPE$$" != 'I'))
AS OF SNAPSHOT(:B_SCN) )

/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_OF_SOURCE" "SNA$" USING
(SELECT * FROM (SELECT CURRENT$."ID",CURRENT$."DATA_ELEMENT_1",CURRENT$."DATA_ELEMENT_2",CURRENT$."DATE_COL"
FROM (SELECT "SOURCE_TABLE"."ID" "ID","SOURCE_TABLE"."DATA_ELEMENT_1" "DATA_ELEMENT_1","SOURCE_TABLE"."DATA_ELEMENT_2" "DATA_ELEMENT_2","SOURCE_TABLE"."DATE_COL" "DATE_COL"
FROM "SOURCE_TABLE" "SOURCE_TABLE") CURRENT$,
(SELECT DISTINCT MLOG$."ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE" MLOG$ WHERE "SNAPTIME$$" > :1
AND ("DMLTYPE$$" != 'D')) LOG$ WHERE CURRENT$."ID" = LOG$."ID") AS OF SNAPSHOT(:B_SCN) )"AV$" ON ("SNA$"."ID" = "AV$"."ID")
WHEN MATCHED THEN UPDATE SET "SNA$"."DATA_ELEMENT_1" = "AV$"."DATA_ELEMENT_1","SNA$"."DATA_ELEMENT_2" = "AV$"."DATA_ELEMENT_2","SNA$"."DATE_COL" = "AV$"."DATE_COL"
WHEN NOT MATCHED THEN INSERT (SNA$."ID",SNA$."DATA_ELEMENT_1",SNA$."DATA_ELEMENT_2",SNA$."DATE_COL")
VALUES (AV$."ID",AV$."DATA_ELEMENT_1",AV$."DATA_ELEMENT_2",AV$."DATE_COL")


So, we see a large number of intensive operations against the MLOG$ Materialized View Log object.

And on the MV, there is a DELETE followed by a MERGE (UPDATE/INSERT).


Two takeaways :
1.  Updating the Source Table of a Materialized View can have noticeable overheads
2.  Refreshing a Materialized View takes some effort on the part of the database

(Did you notice the strange year 2100 date in the update of the MLOG$ table ?)
.
.
.
.
.
.
Categories: DBA Blogs

Basic Replication -- 2a : Elements for creating a Materialized View

Sun, 2019-08-18 04:02
The CREATE MATERIALIZED VIEW statement is documented here.  It can look quite complex so I am presenting only the important elements here.  In this post, I begin with only the basic elements.

(EDIT: These SQL operations, queries and results were in a 19c Database)

First, I recreate the SOURCE_TABLE properly, with a Primary Key :

SQL> drop table source_table;

Table dropped.

SQL> create table source_table
2 (id number not null,
3 data_element_1 varchar2(15),
4 data_element_2 varchar2(15),
5 date_col date)
6 /

Table created.

SQL> create unique index source_table_pk
2 on source_table(id);

Index created.

SQL> alter table source_table
2 add constraint source_table_pk
3 primary key (id)
4 /

Table altered.

SQL>


Then I create a Materialized View Log on SOURCE_TABLE.  This will capture all DML against this table and will be read by the target Materialized View to identify "changed" rows at every refresh.

SQL> create materialized view log on source_table;

Materialized view log created.

SQL>


I then identify the objects that were created.

SQL> select object_id, object_name, object_type
2 from user_objects
3 where created > sysdate-1
4 order by object_id
5 /

OBJECT_ID OBJECT_NAME OBJECT_TYPE
---------- ------------------------------ -----------------------
73055 SOURCE_TABLE TABLE
73056 SOURCE_TABLE_PK INDEX
73057 MLOG$_SOURCE_TABLE TABLE
73058 RUPD$_SOURCE_TABLE TABLE
73059 I_MLOG$_SOURCE_TABLE INDEX

SQL>
SQL> desc mlog$_source_table;
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
ID NUMBER
SNAPTIME$$ DATE
DMLTYPE$$ VARCHAR2(1)
OLD_NEW$$ VARCHAR2(1)
CHANGE_VECTOR$$ RAW(255)
XID$$ NUMBER

SQL> desc rupd$_source_table;
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
ID NUMBER
DMLTYPE$$ VARCHAR2(1)
SNAPID NUMBER(38)
CHANGE_VECTOR$$ RAW(255)

SQL>


Interesting that the "CREATE MATERIALIZED VIEW LOG" statement created 3 database objects.

What happens after I perform DML on the SOURCE_TABLE ?

SQL> insert into source_table
2 values (1,'First','One',sysdate);

1 row created.

SQL> insert into source_table
2 values (2,'Second','Two',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> delete source_table
2 where id=2
3 /

1 row deleted.

SQL>
SQL> commit;

Commit complete.

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
------------------------------------------------------------------------------------------------------------------------------------
XID$$
----------
1 01-JAN-00 I N
FE
2.8158E+14

2 01-JAN-00 I N
FE
2.8158E+14

2 01-JAN-00 D O
00
2.5334E+15


SQL>
SQL> select * from rupd$_source_table;

no rows selected

SQL>


So the MLOG$_SOURCE_TABLE is the log that captures 2 INSERT statements and 1 DELETE statement.  (OR is it 2 INSERT *rows* and 1 DELETE *row* ??)
We don't know what the RUPD$_SOURCE_TABLE captures yet.

Let me create a Materialized View and then query MLOG$_SOURCE_TABLE (which is the "MV Log")

SQL> create materialized view
2 mv_of_source
3 refresh fast on demand
4 as select * from source_table
5 /

Materialized view created.

SQL> select * from mv_of_source
2 /

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
1 First One 18-AUG-19

SQL>
SQL> select * from mlog$_source_table;

no rows selected

SQL>


So, the CREATE MATERIALIZED VIEW statement has also done a cleanup of the MV Log entries with a SNAPTIME$$ older than when it was created.

Let me insert two new rows and then refresh the Materialized View and check the MV Log again.

SQL> insert into source_table
2 values (3,'Third','Three',sysdate);

1 row created.

SQL> insert into source_table
2 values (4,'Fourth','Four',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
------------------------------------------------------------------------------------------------------------------------------------
XID$$
----------
3 01-JAN-00 I N
FE
1.6889E+15

4 01-JAN-00 I N
FE
1.6889E+15


SQL>
SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from mv_of_source;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
1 First One 18-AUG-19
3 Third Three 18-AUG-19
4 Fourth Four 18-AUG-19

SQL>


So, the 2 single-row INSERTs did create two entries in the MV Log and the REFRESH of the Materialized View did a cleanup of those two entries.

I haven't yet explored :
a.  UPDATEs
b. Multi-Row Operations
Categories: DBA Blogs

Basic Replication -- 1 : Introduction

Thu, 2019-08-15 23:24
Basic Replication, starting with Read Only Snapshots, has been available in Oracle since V7.  This was doable with the "CREATE SNAPSHOT" command.

In 8i, the term was changed from "Snapshot" to "Materialized View"  and the "CREATE MATERIALIZED VIEW" command was introduced, while "CREATE SNAPSHOT" was still supported.

Just as CREATE SNAPSHOT is still available in 19c,  DBMS_SNAPSHOT.REFRESH is also available.
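
(A minimal, purely illustrative sketch of the old syntax -- here T is a hypothetical table with a Primary Key :)

SQL> -- "snapshot log" is the old name for a Materialized View Log
SQL> create snapshot log on t;

SQL> -- CREATE SNAPSHOT still parses and creates a Materialized View underneath
SQL> create snapshot snap_of_t
  2  refresh fast on demand
  3  as select * from t
  4  /

SQL> -- DBMS_SNAPSHOT still resolves to the same refresh code as DBMS_MVIEW
SQL> execute dbms_snapshot.refresh('SNAP_OF_T','F');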



Not that I recommend that you use CREATE SNAPSHOT and DBMS_SNAPSHOT anymore.  DBAs and Developers should have been using CREATE MATERIALIZED VIEW and DBMS_REFRESH since 8i.

In the next few blog posts (this will be a very short series) I will explore Basic Replication.  Let me know if you want to see it in 11.2 and 12c as well.



Categories: DBA Blogs
