Feed aggregator

Announcing SLOB 2.5.4

Kevin Closson - Fri, 2021-05-14 17:49

SLOB 2.5.4 has important fixes to support very large session count testing. Testing with a large number of sessions is generally done in conjunction with enabling SLOB think time. Please see the documentation for more information about testing with SLOB think time. SLOB 2.5.4 has been tested with 12,288 sessions on 8-node RAC clusters (1,536 sessions per RAC instance) with varying think time.

When testing with a very large session count, please consider taking advantage of the SLOB 2.5.3 feature of improved SLOB session placement, as explained here.

Install the Agent on Linux without installing all of OEM

Tom Kyte - Fri, 2021-05-14 10:46
I have a Solaris OEM installation. We use DBs across different platforms, 90% on Solaris and 10% on Linux. How can I install the Linux agent without installing all of OEM on Linux, and how do I load this Linux agent into the Solaris OEM?
Categories: DBA Blogs

SQL Developer extract to Excel Date Format

Tom Kyte - Fri, 2021-05-14 10:46
SQL Developer extract to Excel - the date format is changing from 02-NOV-2020 12:00:00 AM to 11/2/20. Hi, I'm using SQL Developer 20.4 and Excel 365 16.0.13127.21490. When I run a SQL with dates, the dates are delivered in the format 02-NOV-2020 12:00:00 AM. When I export to Excel, the Excel file shows the dates as 11/2/20. How do I go about keeping the original date format in the exported Excel file?
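A common workaround (a sketch only, with a hypothetical table and column name) is to convert the date to a string inside the query itself, so the export file contains plain text that Excel will not reinterpret as a date:

```sql
-- Hypothetical table/column names: TO_CHAR turns the date into text,
-- so the exported value arrives in Excel exactly as formatted here.
SELECT TO_CHAR(hire_date, 'DD-MON-YYYY HH:MI:SS AM') AS hire_date_text
FROM   employees;
```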
Categories: DBA Blogs

Buffer parameter in the legacy IMP utility and unique constraints

Tom Kyte - Fri, 2021-05-14 10:46
Hi, When using the legacy IMP tool, if you have specified ignore=y commit=y and buffer=9999999, what is the behavior of IMP when a unique constraint is violated by a row in the buffer? Assuming the buffer holds more than one row, will IMP reject the entire buffer, or reject only the specific offending row(s) and insert the other rows that are in the buffer? I am unable to find documentation for the behavior in this specific case. Thanks in advance for any info.
Categories: DBA Blogs

Multiple Recipients

Tom Kyte - Fri, 2021-05-14 10:46
In the example of multiple recipients in the https://oracle-base.com/articles/misc/email-from-oracle-plsql#multiple-recipients link, mail goes to all the recipients. But all the names in the "To" list are visible to all the recipients. Is there any parameter setting such that only the intended recipient's name appears, so that each recipient is not aware of the other names in the "To" list?
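One approach worth noting (a sketch, not the linked article's code): with UTL_SMTP the envelope recipients and the visible "To:" header are independent, so calling RCPT once per address while writing a generic "To:" header effectively blind-copies everyone. The addresses and connection variable below are hypothetical.

```sql
-- Sketch: each address receives the mail via its own RCPT call,
-- but no individual address appears in the message headers.
l_reply := utl_smtp.rcpt(l_mail_conn, 'alice@example.com');
l_reply := utl_smtp.rcpt(l_mail_conn, 'bob@example.com');

-- The visible header names no individual recipient.
utl_smtp.write_data(l_mail_conn, 'To: undisclosed-recipients:;' || utl_tcp.crlf);
```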
Categories: DBA Blogs

RMAN duplicate to RAC ASM

Tom Kyte - Fri, 2021-05-14 10:46
Hello team, we are planning to clone a two-node RAC database on ASM to a test database on one server only. The test server has grid and Oracle database software installed and ASM diskgroups configured. After the RMAN duplicate from the active database with cluster_database=false, we will have to convert the test database into RAC. What is the correct way to do this? Can we do that with a pfile modification and then srvctl add database? Thanks and Regards, Le
Categories: DBA Blogs

Exporting Clob fields to Excel - getting an empty file at export

Tom Kyte - Fri, 2021-05-14 10:46
Hello, I'm using SQL Dev 20.2 and, since I upgraded from 17.2, I'm having issues exporting to Excel any data that has a Clob field in it. The data gets extracted and shows in SQL Developer okay, but when I export it to Excel it creates an empty 4KB file. It works in 17.2, as tested by a co-worker who has not yet upgraded. Is there a setting needed or a bug fix? I see that V 20.4 is available for download now internally. Update: I upgraded to version 20.4 and it resolved the issue with the export. All is well. Please close out this question.
Categories: DBA Blogs

UTL_SMTP and SSL

Tom Kyte - Fri, 2021-05-14 10:46
I have the script below that was working normally; however, I believe the Gmail certificate has expired. With OpenSSL (s_client -connect smtp.gmail.com/1687 -starttls smtp) I generated another certificate for my wallet and updated the ACLs, but it still raises an error on the line l_reply := sys.utl_smtp.starttls(L_MAIL_CONN); with ORA-28759: failure to open file. Yet if I run the command below, it works:

select UTL_HTTP.request(url => 'http://www.google.com', wallet_path => 'file:/oracle/ora11g/owm/wallets/', wallet_password => 'xxxxxxxxx') from dual;

<code>
CREATE OR REPLACE PROCEDURE PR_JUN_MAIL_NF_DEVOL AS
  VAR_CONTROLE      NUMBER;
  VAR_MENSAGEM      VARCHAR2(32767);
  L_MAILHOST        VARCHAR2(64) := 'smtp.gmail.com';
  P_USERNAME_       VARCHAR2(50) := 'adminjun@jundia.net';
  P_PASSWORD_       VARCHAR2(50) := 'xxxxxxxx';
  L_DE              VARCHAR2(64) := 'Intranet';
  L_FROM            VARCHAR2(64) := 'adminjun@jundia.net';
  k_wallet_path     CONSTANT VARCHAR2(100) := 'file:/oracle/ora11g/owm/wallets';
  k_wallet_password CONSTANT VARCHAR2(100) := 'xxxxxxxxxxxxx';
  k_domain          CONSTANT VARCHAR2(100) := 'localhost';
  l_port            INTEGER := 587;
  l_reply           utl_smtp.reply;
  l_replies         utl_smtp.replies;
  TO_NAME           VARCHAR2(200) := 'ti@gmail.net';
  SUBJECT           VARCHAR2(200) := 'NOTAS FISCAIS DE DEVOLUCAO EMITIDAS';
  L_MAIL_CONN       UTL_SMTP.CONNECTION;
BEGIN
  VAR_CONTROLE := 1;
  BEGIN
    SELECT distinct 0 INTO VAR_CONTROLE
    from tambem_JUN_NFE nfe
    inner join bgm_notafiscal bnf on nfe.codintnf_bgmnf = bnf.codintnf
    inner join esfclass e on e.codclassfisc = bnf.codclassfisc
    INNER join BGM_CLIENTE CLI on bnf.Codcli = CLI.CODCLI
    INNER join CTR_FILIAL F on nfe.empresa = f.codigoempresa and nfe.filial = f.codigofl
    WHERE NFE.DATAEMISSAO > to_date('01/02/2017', 'dd/mm/yyyy')
    and upper(e.descclassfisc) like '%DEVOLU%'
    AND NFe.Codintnf_Bgmnf || upper(decode(nfe.status,'A','AUTORIZADA', 'C','CANCELADA', 'I','INUTILIZADA', 'N','NORMAL', 'P','PROCESSANDO', 'R','REJEITADA'))
        not in (select jc.codintnf || upper(jc.statunf) from CONTROLE_NF_DEV JC);
  EXCEPTION WHEN OTHERS THEN
    VAR_CONTROLE := 1;
  end;
  if VAR_CONTROLE = 0 then
    l_reply := UTL_SMTP.OPEN_CONNECTION(
                 host                          => L_MAILHOST,
                 port                          => l_port,
                 c                             => L_MAIL_CONN,
                 wallet_path                   => k_wallet_path,
                 wallet_password               => k_wallet_password,
                 secure_connection_before_smtp => FALSE);
    Dbms_Output.Put_Line('apos conectar');
    l_replies := sys.utl_smtp.ehlo(L_MAIL_CONN, k_domain);
    l_reply   := sys.utl_smtp.starttls(L_MAIL_CONN);
    l_replies := sys.utl_smtp.ehlo(L_MAIL_CONN, k_domain);
    l_reply   := sys.utl_smtp.auth(L_MAIL_CONN, P_USERNAME_, P_PASSWORD_, utl_smtp.all_schemes);
    l_reply   := sys.utl_smtp.mail(L_MAIL_CONN, L_FROM);
    l_reply   := sys.utl_smtp.RCPT(L_MAIL_CONN, TO_NAME);
    l_reply   := UTL_SMTP.OPEN_DATA(L_MAIL_CONN);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'FROM:' || L_DE || '<' || L_FROM || '>' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'TO:' || TO_NAME || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'SUBJECT:' || SUBJECT || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: text/plain; boundary="gc0p4Jq0M2Yt08jU534c0p"' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'MIME-Version: 1.0' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, '--gc0p4Jq0M2Yt08jU534c0p');
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: text/plain');
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, ' '||UT...
Categories: DBA Blogs

Gather Your Session Info For Killing

Michael Dinh - Thu, 2021-05-13 23:40

So there I was, running emremove.sql as part of a pre-upgrade task; however, it was taking longer than expected.

The session was stuck at the output shown below and, despite desperate attempts, CTRL-C did not work.

14:35:05 472  /
old  70:     IF (upper('&LOGGING') = 'VERBOSE')
new  70:     IF (upper('VERBOSE') = 'VERBOSE')

^C

^C^C

^C

I checked for blocking sessions and there were blocking locks from SYS, which was really strange.

I made a gutsy call and killed the SYS session from the OS prompt, based on the timestamp.

~ $ ps -ef|grep sysdba
oracle    57147 231962  0 May12 pts/4    00:00:00 sqlplus   as sysdba
oracle   155919 139352  0 14:34 pts/1    00:00:00 sqlplus   as sysdba
oracle   244619 216760  0 15:25 pts/5    00:00:00 grep --color=auto sysdba

~ $ kill -9 155919

As it turns out, another DBA was logged in as sysdba, causing havoc.

I was lucky to have killed the correct SYS session. Will you be as lucky as I was?

Based on my near disaster, it would be better to make a practice of gathering your session info so that you can kill the correct session.

Here is the current session info.

SQL> @my
SQL> select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b
  2  where a.addr = b.paddr
  3  and b.audsid = userenv('sessionid')
  4  and b.sid=userenv('sid')
  5  ;

       SID    SERIAL# PROCESSID                CLIENTPID
---------- ---------- ------------------------ ------------------------
         5        101 16428                    16427

SQL>

[oracle@ol7-112-dg1 ~]$ ps -ef|grep 16427
oracle   16427  8573  0 03:44 pts/0    00:00:00 sqlplus   as sysdba
oracle   16428 16427  0 03:44 ?        00:00:00 oraclehawk (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   16461 11677  0 03:45 pts/1    00:00:00 grep --color=auto 16427
[oracle@ol7-112-dg1 ~]$

Kill the OS process using the sqlplus PROCESSID. The session does not know it has been killed until DML is performed.

[oracle@ol7-112-dg1 ~]$ kill -9 16428

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 16428
Session ID: 5 Serial number: 101


SQL>

Another test

--- Session Info
SQL> @my
SQL> select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b
  2  where a.addr = b.paddr
  3  and b.audsid = userenv('sessionid')
  4  and b.sid=userenv('sid')
  5  ;

       SID    SERIAL# PROCESSID                CLIENTPID
---------- ---------- ------------------------ ------------------------
         5        103 16533                    16532

SQL>

--- From another session, check waits for above session
SQL> r
  1  select NVL(s.username,'(oracle)') AS username, s.sid, s.serial#,
  2  sw.event, sw.seconds_in_wait, sw.state
  3  from v$session_wait sw, v$session s
  4  where s.sid = sw.sid and s.sid=&sid
  5*
Enter value for sid: 5
old   4: where s.sid = sw.sid and s.sid=&sid
new   4: where s.sid = sw.sid and s.sid=5

USERNAME               SID    SERIAL# EVENT                          SECONDS_IN_WAIT STATE
--------------- ---------- ---------- ------------------------------ --------------- -------------------
SYS                      5        115 SQL*Net message from client                169 WAITING

SQL>

Kill the OS process using the sqlplus CLIENTPID for immediate feedback:

[oracle@ol7-112-dg1 ~]$ ps -ef|grep 16532
oracle   16532  8573  0 03:46 pts/0    00:00:00 sqlplus   as sysdba
oracle   16533 16532  0 03:46 ?        00:00:00 oraclehawk (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   16557 11677  0 03:47 pts/1    00:00:00 grep --color=auto 16532
[oracle@ol7-112-dg1 ~]$


[oracle@ol7-112-dg1 ~]$ kill -9 16532


SQL> Killed
[oracle@ol7-112-dg1 ~]$

Hopefully you will never have to kill your own session.

When you need to kill your session, it's better to have the correct information than to guess.
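Along the same lines, a small variation on the session-info query above (a sketch) generates the matching kill command up front, so whoever has to clean up can simply copy and paste it:

```sql
-- Build the ALTER SYSTEM KILL SESSION command for the current session.
select 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;' as kill_cmd
from   v$session
where  sid = sys_context('userenv', 'sid');
```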

Restore Enterprise Edition RMAN Backup to Standard Edition

Tom Kyte - Wed, 2021-05-12 03:46
Hello experts, Is it possible to restore an Enterprise Edition RMAN backup which contains partitioned tables to a Standard Edition instance? Thanks in advance. Regards, Nabil.
Categories: DBA Blogs

Deleting many rows from a big table

Tom Kyte - Tue, 2021-05-11 09:26
Tom: We have a 6 million row table and we need to clean it. This process will delete 1.5 million rows. My first approach was to create a SP with these lines:

SET TRANSACTION USE ROLLBACK SEGMENT Rbig;
DELETE FROM CTDNOV WHERE CTDEVT IN (4,15); (1.5m rows)
COMMIT;

Then I submitted a job to run the SP at night. This process took more than 4 hours to finish and generated a huge amount of archives.

2nd approach (I didn't test it; I think this is slower than the 1st approach. According to your book it is faster to do one commit at the end than lots of m/n-row commits), from Metalink. Create the PL/SQL procedure:

CREATE OR REPLACE PROCEDURE delete_tab (tablename IN VARCHAR2, empno IN NUMBER, nrows IN NUMBER) IS
  sSQL1 VARCHAR2(2000);
  sSQL2 VARCHAR2(2000);
  nCount NUMBER;
BEGIN
  nCount := 0;
  sSQL1 := 'delete from ' || tablename || ' where ROWNUM < ' || nrows || ' and empno=' || empno;
  sSQL2 := 'select count(ROWID) from ' || tablename || ' where empno= ' || empno;
  LOOP
    EXECUTE IMMEDIATE sSQL1;
    EXECUTE IMMEDIATE sSQL2 INTO nCount;
    DBMS_OUTPUT.PUT_LINE('Existing records: ' || to_char(ncount));
    commit;
    EXIT WHEN nCount = 0;
  END LOOP;
END delete_tab;
/

Then execute the procedure created above:

SQL> execute delete_tab('big_emp',7369,5000)
Existing records: 60537
Existing records: 55538
Existing records: 50539
Existing records: 45540
Existing records: 40541
Existing records: 35542
Existing records: 30543
Existing records: 25544
Existing records: 20545
Existing records: 15546
Existing records: 10547
Existing records: 5548
Existing records: 549
Existing records: 0
PL/SQL procedure successfully completed.

3rd approach (it seems to be the better choice): When a DELETE is issued, Oracle stores the whole deleted row in the rollback segments, so you can undo the changes later if you want to. So there is an image of the rows in rollback which is currently not present in the table, hence the phrase 'before image'. Now all the rollback blocks are written to the redo log files too. So you have the data blocks with the table (without the deleted rows, of course) and the rollback blocks with the old image, both producing redo, which accounts for the additional archive logs. I can't comment on the ratio of deleted records to redo produced, though. If you do an insert, only the rowid is stored in the rollback segments, which generates less redo. Depending on the number of rows in your tables, it may be better for you to insert the records you don't want to delete into a temp table, truncate the main table and move the records back to the main table. This would probably generate less redo than the delete, but that depends on the number of rows.

Below, I describe the table I want to clean.

From dba_segments:

SEGMENT_NAME                                EXTENTS      BYTES
---------------------------------------- ---------- ----------
CTDNOV                                           26 2276614144

From dba_extents:

SEGMENT_NAME                              EXTENT_ID     Kbytes
---------------------------------------- ---------- ----------
CTDNOV                                            0        520
                                                  1        520
                                                  2        800
                                                  3       1200
                                                  4       1800
                                                  5       2680
                                                  6       4000
                                                  7 ...
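The keep-and-swap approach described above can be sketched like this for the table in the question (a sketch only: NOLOGGING and the APPEND hint reduce redo but require a backup afterwards, and the NOT IN predicate assumes CTDEVT is never NULL):

```sql
-- Keep the 4.5M rows we want, then swap them back into the emptied table.
create table ctdnov_keep nologging as
    select * from ctdnov where ctdevt not in (4, 15);

truncate table ctdnov;

insert /*+ append */ into ctdnov select * from ctdnov_keep;
commit;

drop table ctdnov_keep;
```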
Categories: DBA Blogs

How to delete data from a table

Tom Kyte - Mon, 2021-05-10 15:06
With this normal delete query, delete from table where condition='a';, will it work on a table where data have been present since the year 2014? I want to delete all data starting from 2014 until now.
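Assuming the table has a date column recording when each row was created (a hypothetical table and column name below), the usual form is an explicit range predicate on that date column:

```sql
-- Hypothetical names: removes everything from 1 Jan 2014 onwards.
delete from my_table
where  created_date >= date '2014-01-01';

commit;
```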
Categories: DBA Blogs

Column default value as another column from same table

Tom Kyte - Mon, 2021-05-10 15:06
Hello, We have a requirement to add a new column to a table which needs a default value like column1 || column2. For some reason the application code cannot be changed to deal with this new column, hence the default value. I thought of two approaches that can solve this: using a trigger to update the new column if either column1 or column2 is updated (the new column can be populated initially and the trigger then enabled to handle any future changes), or using a virtual column. Now it seems that direct insertion of data, or an update, might be required for the new column, and that rules out the virtual column. And on triggers, the web is full of articles saying they are problematic, and I am having a tough time arguing that for a low-volume table (in number of records as well as number of transactions) a trigger may not be the worst idea, though I understand the maintenance headaches and side effects. Is there any other approach? Also, why does Oracle not support a column default value referencing another column? Thank you, Priyank
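For comparison, here is what the two options described above look like in skeleton form (one or the other, not both; hypothetical table and column names; note that a virtual column rejects direct inserts and updates, which is exactly the limitation mentioned):

```sql
-- Option 1: virtual column - always consistent, but cannot be written directly.
alter table t add (
    col3 varchar2(200) generated always as (col1 || col2) virtual
);

-- Option 2: real column kept in sync by a trigger - allows direct writes.
alter table t add (col3 varchar2(200));

create or replace trigger t_col3_trg
before insert or update of col1, col2 on t
for each row
begin
    :new.col3 := :new.col1 || :new.col2;
end;
/
```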
Categories: DBA Blogs

Single row cursor for short text string from dual produces CHAR(32767)

Tom Kyte - Mon, 2021-05-10 15:06
Hi, I have tried 19.9 - 19.11. I have noticed some suspicious behaviour regarding dual. I will create an example. At the moment, you can see here that only a single row comes from mytab. Then I dump the datatype to output.

<code>
SQL> set serveroutput on size unlimited;
declare
  a clob;
  l_msg_content_begin CLOB := EMPTY_CLOB();
  CURSOR cur IS
    with mytab as (
      select 'SOMERANDOMTABLE' as main_table from dual
      --union select 'ALSOSOMERANDOMTABLE' as main_table from dual
    )
    select main_table, lower_main_table
    from (
      select main_table, lower(main_table) as lower_main_table from mytab
    )
    order by 1 desc;
  rec cur%rowtype;
BEGIN
  FOR rec IN cur LOOP
    dbms_output.put_line(rec.main_table);
    select dump(rec.lower_main_table) into a from dual;
    dbms_output.put_line(a);
    -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    -- If you have only one row from dual, then you get an error if you uncomment this: "l_msg_content_begin := ..."
    -- With 2 or more rows from dual, all good
    --l_msg_content_begin := 'blabla '||rec.lower_main_table||' blablabla '||rec.lower_main_table||'bla'||UTL_TCP.CRLF;
  END LOOP;
  --dbms_output.put_line(substr(l_msg_content_begin, 1, 2000) || 'AA');
END;
/
</code>

And here you can see the datatype is CHAR (Typ=96); check the length (the whole string is padded with spaces, "ascii 32 == space"):

<code>
SOMERANDOMTABLE
Typ=96 Len=32767: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,....................
</code>

It seems like the <b>lower()</b> function somehow produces this strange behaviour. Is this normal? Also, when I dump rec.main_table instead (so not the lower() function output), <code>select dump(rec.main_table) into a from dual;</code>, then I get type CHAR and the actual length, so that is expected. In contrast, when I also uncomment the second line, <code>--union select 'ALSOSOMERANDOMTABLE' as main_table from dual</code>, then it is as expected:

<code>
SOMERANDOMTABLE
Typ=1 Len=15: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
ALSOSOMERANDOMTABLE
Typ=1 Len=19: 97,108,115,111,115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
</code>

The type is varchar and the length is the actual length. Regards, Raul
Categories: DBA Blogs

The most peculiar Oracle situation in my career: Oracle changes how it records a block read, from direct read to not recording it in an I/O wait event at all

Tom Kyte - Mon, 2021-05-10 15:06
Greetings, I have this extremely perplexing situation where Oracle changes how it records a block read. Last week it wasn't counting block reads at all in any I/O wait event; this week it started to add them to the "direct read" wait event. This is occurring in our production environment; however, I was able to reproduce the situation in our test environment with test data. I used the all_source view to create two test tables, until I reached 1.2 million rows for table 1 and 4 million for table 2:

Table1 (1.2 Mil records): create table table1 as select * from dba_source where rownum;
Table2 (4 Mil records): create table table2 as select * from dba_source;
create index t1_pk on table1(owner);
create index t2_pk on table2(owner, line);
exec dbms_stats.gather_schema_stats('JOHN');

Then I ran this select statement 120 times:

<code>select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner;</code>

In some cases Oracle 19c records the I/O in "direct path read" wait events and in other cases it doesn't seem to report it in any I/O wait event. That is soooo odd.
TEST CASE 1: IOStats summary doesn't record I/O nor does it in a wait event: <code>Top 10 Foreground Events by Total Wait Time ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Wait Avg % DB Wait Event Waits Time (sec) Wait time Class ------------------------------ ----------- ---------- --------- ------ -------- DB CPU 20.2 99.6 PGA memory operation 2,524 .1 20.27us .3 Other Disk file operations I/O 520 0 59.49us .2 User I/O db file sequential read 211 0 12.33us .0 User I/O Parameter File I/O 8 0 257.00us .0 User I/O enq: RO - fast object reuse 2 0 784.50us .0 Applicat control file sequential read 209 0 5.32us .0 System I log file sync 1 0 .95ms .0 Commit SQL*Net message to client 546 0 1.53us .0 Network SQL*Net more data to client 22 0 33.77us .0 Network SQL ordered by Gets DB/Inst: ORACLE/stbyoracle Snaps: 2727-2728 -> Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code. -> %Total - Buffer Gets as a percentage of Total Buffer Gets -> %CPU - CPU Time as a percentage of Elapsed Time -> %IO - User I/O Time as a percentage of Elapsed Time -> Total Buffer Gets: 3,399,948 -> Captured SQL account for 98.1% of Total Buffer Gets Elapsed Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id ----------- ----------- ------------ ------ ---------- ----- ----- ------------- 3,241,728 120 27,014.4 95.3 14.4 99.5 0 82mps751cqh84 Module: SQL*Plus select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner IOStat by Function summary DB/Inst: ORACLE/stbyoracle Snaps: 2727-2728 -> 'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples of 1000 -> ordered by (Data Read + Write) desc Reads: Reqs Data Writes: Reqs Data Waits: Avg Function Name Data per sec per sec Data per sec per sec Count Time --------------- ------- ------- ------- ------- ------- ------- ------- -------- LGWR 3M 1.5 .022M 10M 
3.6 .075M 678 368.73us Others 7M 2...
Categories: DBA Blogs

Method to measure performance gain of clustered table vs non-clustered tables

Tom Kyte - Mon, 2021-05-10 15:06
I have 2 pairs of parent and child tables; 1 pair is stored in a clustered object and the other is non-clustered. The primary key of the master table (which is the foreign key in the child table) is clustered. An index on the cluster is also created. The structure of the 2 parent tables is identical and the structure of the 2 child tables is also identical. Records in the 2 pairs are also identical. I want to measure the performance gain of clustered tables vs non-clustered tables for a SELECT statement. I am using SET TIMING ON and printing the elapsed time after the SELECT is executed on the tables. The SELECT statement is also identical. I was expecting the elapsed time of the clustered tables to be consistently less than that of the non-clustered tables, but it is not. Can you please explain this? Also, is there another way to measure the performance of non-clustered vs clustered, using autotrace or explain plan?
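Elapsed time alone is a noisy measure, since caching and parse overhead skew the first run. A more stable comparison (a sketch, with hypothetical table names) is the logical and physical I/O reported by autotrace, taken from a second execution of each statement:

```sql
-- Compare "consistent gets" and "physical reads" between the two designs;
-- run each query twice and read the statistics from the second run,
-- so both start from a warm cache.
set autotrace traceonly statistics

select count(*) from child_clustered    where parent_id = 42;
select count(*) from child_nonclustered where parent_id = 42;

set autotrace off
```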
Categories: DBA Blogs

Celery Distributed Task Queue Display with Flower UI

Andrejus Baranovski - Mon, 2021-05-10 06:09
I explain how you can visualize Celery distributed task queue with Flower UI. This is useful when monitoring asynchronous tasks.

 

Tracking the Standby Lag from the Primary

Hemant K Chitale - Sun, 2021-05-09 10:38

 Here is a quick way of tracking the Standby Lag from the Primary.

This relies on the information in V$ARCHIVE_DEST on the Primary.

Note that this query will not work if the lag is so great that the SCN_TO_TIMESTAMP mapping fails (because the underlying table holds only a limited number of records) OR if the Standby instance is shutdown and the Primary cannot communicate with it.


Note : The lag based on "SCN_TO_TIMESTAMP" is always an approximation.  

SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 00:41:09.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:22.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:58.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:16.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:37.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000000 00:00:00.000000000

SQL>


Here, the lag was 4 days and it took some time for the Standby to catch up with the Primary.
(this is my Lab environment, not a real production environment at my work place, so don't ask how I managed to create a lag of 4 days or how long it took for the Standby to catch up with the Primary)

Note : If the Standby database is down and/or the lag is very high, you will get these errors:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1

for the "applied_scn" from v$archive_dest.  (If the Standby is down, the value for "applied_scn" in v$archive_dest on the Primary is "0").
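A defensive variant of the query (a sketch only; very old SCNs can still fall outside the SCN_TO_TIMESTAMP mapping) avoids the ORA-08181 case by skipping the conversion when the reported applied_scn is 0:

```sql
-- Only attempt the SCN-to-timestamp mapping when the Standby has reported progress.
select case
         when a.applied_scn > 0
         then to_char(scn_to_timestamp(d.current_scn) - scn_to_timestamp(a.applied_scn))
         else 'STANDBY UNREACHABLE (applied_scn = 0)'
       end as time_diff
from   v$database d,
       (select applied_scn from v$archive_dest where target = 'STANDBY') a;
```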


If you have access to the Standby you can run this query :

select name, value from v$dataguard_stats where name like '%lag'


The demo above is only a quick way of checking, by querying the Primary without accessing the Standby.
Categories: DBA Blogs

Driving site patch

Jonathan Lewis - Sun, 2021-05-09 06:23

A recent question on the Oracle-L list server asked “Is there a way I can use an SQL_PATCH to add a driving_site() hint to a query?” to which the reply was: “Just do it, it should work.” Unfortunately this produced the response: “I’m on 11.2.0.4, I’ve tried it, it doesn’t seem to be working.” I do approve of the “doesn’t seem to be” – it’s much more encouraging than a flat assertion that “it doesn’t work”, and helps encourage further dialogue.

I’ve come across this problem before (though I couldn’t find any notes I’d written about it – so possibly they’re only on a client site, or maybe it was a different hint displaying the same symptom) and it’s possible that the solution is very easy. Here’s a little data set to test with – created on 11.2.0.4 and then tested on 19.3.0.0:

rem
rem     Script:         patch_driving_site.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             11.2.0.4
rem

define m_target=orclpdb@loopback

execute sys.dbms_sqldiag.drop_sql_patch('driving_site');

create table t1
as
select
        *
from
        all_objects
where
        rownum <= 10000
;

alter table t1 add constraint t1_pk primary key (object_id);

create table t2
as
select
        *
from
        all_objects
where
        rownum <= 10000
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns owner size 254'
        );
end;
/

I’ve created two tables which I’m going to join, but I’ve created a loopback database link that I’ll use to make one of them appear to be at a remote database. The data comes from view all_objects, and includes just 7 rows out of 10,000 for owner=’OUTLN’, which is why I’ve created a histogram on the owner column – but only for one of the tables.

You’ll notice I’ve issued a call to the dbms_sqldiag package to drop an SQL patch called “driving_site” just to make sure I don’t confuse the issue (i.e. myself) if I re-run the test in a couple of days time (I’ve also got a similar call at the end of the script). To run this test you’ll have to grant execute privileges on this package to your test schema, and if you’re on 11g you’ll also have to grant execute privileges on the package dbms_sqldiag_internal. so that you can create the SQL patch.

Here’s my query, with the default execution plan I got on the run against 19.3.0.0:

select
        t1.object_name,
        t1.object_type,
        t2.object_name,
        t2.object_type
from
        t1,
        t2@&m_target    t2
where
        t2.object_id = t1.object_id
and     t2.owner     = 'OUTLN'
/

select * from table(dbms_xplan.display_cursor(format=>'outline alias'));


SQL_ID  4dssxbjvzzrc0, child number 0
-------------------------------------
select  t1.object_name,  t1.object_type,  t2.object_name,
t2.object_type from  t1,  t2@orclpdb@loopback t2 where  t2.object_id =
t1.object_id and t2.owner     = 'OUTLN'

Plan hash value: 3828311863

------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |    54 (100)|          |        |      |
|   1 |  NESTED LOOPS                |       |  1111 |   165K|    54   (8)| 00:00:01 |        |      |
|   2 |   NESTED LOOPS               |       |       |       |            |          |        |      |
|   3 |    REMOTE                    | T2    |  1111 |   130K|    26   (4)| 00:00:01 | ORCLP~ | R->S |
|*  4 |    INDEX UNIQUE SCAN         | T1_PK |       |       |            |          |        |      |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    33 |    27   (8)| 00:00:01 |        |      |
------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   3 - SEL$1 / T2@SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$1 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      INDEX(@"SEL$1" "T1"@"SEL$1" ("T1"."OBJECT_ID"))
      NLJ_BATCHING(@"SEL$1" "T1"@"SEL$1")
      USE_NL(@"SEL$1" "T1"@"SEL$1")
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T2"@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."OBJECT_ID"="T1"."OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "OWNER","OBJECT_NAME","OBJECT_ID","OBJECT_TYPE" FROM "T2" "T2" WHERE
       "OWNER"='OUTLN' (accessing 'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

Note
-----
   - this is an adaptive plan

The optimizer has decided to do a nested loop join, fetching an estimated 1,111 rows (there are 10,000 rows in total and 9 distinct values for owner – and the optimizer doesn’t examine remote histograms!) The costing may seem a little surprising – an incremental cost of 27 for 1,111 probes of the index and table does seem a little low, but I’m not going to comment on that in this note.

We are actually going to get only 7 rows in the tablescan, so the path is a good one; but the Note tells us it is an adaptive plan and if at run-time the number of rows had been too large (as it would probably be for some other value of owner) Oracle would have switched to a hash join as the query is running.

Let us pretend, however, that we know that we could get better performance if the remote database optimised and executed the query. If we add the hint /*+ driving_site(t2) */ to the query we get the following from the call to dbms_xplan.display_cursor():

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
SQL_ID  86n3j9s9q9k47, child number 0

select  /*+ driving_site(t2) */  t1.object_name,  t1.object_type,
t2.object_name,  t2.object_type from  t1,  t2@orclpdb@loopback t2 where
 t2.object_id = t1.object_id and t2.owner     = 'OUTLN'

NOTE: cannot fetch plan for SQL_ID: 86n3j9s9q9k47, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

The call can’t find the plan because it’s the remote database that generated it from a piece of text that the local database sent to it, and the plan “belongs to” that text, not to the original query. So that almost tells us that the driving_site() hint has done its job properly – but we can double-check by searching the remote database’s library cache for the SQL that it actually ran.

set linesize 132
column sql_text wrap word format a75

select  sql_id, sql_text 
from    V$sql 
where   sql_text like '%OUTLN%'
;


SQL_ID        SQL_TEXT
------------- ---------------------------------------------------------------------------
5hmjcxgt0jc8t SELECT
              "A2"."OBJECT_NAME","A2"."OBJECT_TYPE","A1"."OBJECT_NAME","A1"."OBJECT_TYPE"
              FROM "T1"@! "A2","T2" "A1" WHERE "A1"."OBJECT_ID"="A2"."OBJECT_ID" AND
              "A1"."OWNER"='OUTLN'


select * from table(dbms_xplan.display_cursor('5hmjcxgt0jc8t'));


SQL_ID  5hmjcxgt0jc8t, child number 0
-------------------------------------
SELECT "A2"."OBJECT_NAME","A2"."OBJECT_TYPE","A1"."OBJECT_NAME","A1"."OB
JECT_TYPE" FROM "T1"@! "A2","T2" "A1" WHERE
"A1"."OBJECT_ID"="A2"."OBJECT_ID" AND "A1"."OWNER"='OUTLN'

Plan hash value: 3485226535

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    33 (100)|          |        |      |
|   1 |  NESTED LOOPS      |      |     7 |   924 |    33   (4)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T2   |     7 |   280 |    26   (4)| 00:00:01 |        |      |
|   3 |   REMOTE           | T1   |     1 |    92 |     1   (0)| 00:00:01 |      ! | R->S |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("A1"."OWNER"='OUTLN')

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "OBJECT_NAME","OBJECT_ID","OBJECT_TYPE" FROM "T1" "A2" WHERE
       :1="OBJECT_ID" (accessing '!' )


As we can see from the plan – which executed from the remote instance – the (now-local) histogram comes into play with a correct cardinality estimate for t2 and we do a nested loop requesting one row at a time from the (now remote) t1 by object_id. So the driving_site() hint works when written inline, which means we can work on creating a patch to apply it from “outside”.

begin
        dbms_output.put_line('19.3 patch name: ' || sys.dbms_sqldiag.create_sql_patch(
--      sys.dbms_sqldiag_internal.i_create_patch(
                sql_text        =>  
q'{
select
        t1.object_name,
        t1.object_type,
        t2.object_name,
        t2.object_type
from
        t1,
        t2@&m_target    t2
where
        t2.object_id = t1.object_id
and     t2.owner     = 'OUTLN'
}',
                hint_text       => 'driving_site(t2)',
--              hint_text       => 'driving_site(t2@sel$1)',
                name            => 'driving_site'
        ))       -- extra bracket for dbms_output.put_line
        ;
end;
/

There are a couple of funny format details in this code fragment. First, dbms_sqldiag in 19c uses a function to create an SQL patch while 11g uses a procedure in dbms_sqldiag_internal, so there’s a messy optional bit near the top of the code and the reminder to count closing brackets near the bottom.
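For completeness, here is a sketch of the 11g variant – i_create_patch() is a procedure rather than a function, so there is nothing to capture and print (the parameter names here match the call above, but check the signature at your exact patch level):

begin
        sys.dbms_sqldiag_internal.i_create_patch(
                sql_text        => q'{ ...exact text of the statement... }',
                hint_text       => 'driving_site(t2)',
                name            => 'driving_site'
        );
end;
/

Remember that the sql_text has to match the statement you want to patch.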

Second – when I used an SQL patch to add the hint that had worked inline, it didn’t do what it was supposed to do (even though the Note for the execution plan reported: “SQL patch "driving_site" used for this statement”). That’s why you see two versions (one commented out) of the hint_text parameter. To get the SQL patch working I had to use the fully qualified alias of the remote table which, as you can see in the Query Block / Object Alias information from the first plan I reported, is t2@sel$1.

With the corrected SQL patch in place the text sent to the remote database was exactly the same as it was when I used the inline (but slightly lucky) /*+ driving_site(t2) */ hint. Quite possibly I should have gone one step further and made the hint_text include the query block as well, viz: driving_site(@sel$1 t2@sel$1). I suspect, though, that that might not always be necessary (or even correct) – at some point I’ll have to check what happens if the reported query block has appeared as a consequence of a transformation and no longer matches the original query block from the object alias.
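So if you started with the sloppy version of the hint, the fix is to drop the patch and recreate it with the fully qualified alias – a sketch, using the patch name from the example above:

begin
        sys.dbms_sqldiag.drop_sql_patch('driving_site');
end;
/

then re-run the create_sql_patch() call with hint_text => 'driving_site(t2@sel$1)'.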

Summary

I’ve often made the point that you have to be very thorough with hints and this note demonstrates a variation of that theme. There are cases where you can get away with being a little sloppy when specifying an object alias in an “inline” hint but have to be more precise when imposing it from outside the original source SQL.

Adding the “originating” query block to construct the “fully qualified” object alias is the first refinement that you should think of; adding in a “target” query block name may also be necessary for some hints. Both these pieces of information can be reported in the various calls to dbms_xplan by including the “alias” option in the format parameter as you build your test cases towards a complete solution.
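As a reminder of the mechanics, the Query Block Name / Object Alias section appears when you include 'alias' in the format string – a sketch, where the null parameters default to the last statement executed by the session:

select * from table(dbms_xplan.display_cursor(null, null, 'alias'));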

Zip a .csv file present at DB directory using PL/SQL

Tom Kyte - Fri, 2021-05-07 13:46
We have a requirement where we are generating .csv files from the database and placing them in a DB directory. We want to zip these .csv files so that their size is reduced. Could you please suggest a way to achieve this using PL/SQL?
Categories: DBA Blogs
