Feed aggregator

[AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator

Online Apps DBA - Wed, 2020-01-22 07:07

AZ-103 | Roles and Responsibilities as a Microsoft Azure Administrator Want to know the roles and responsibilities of a Microsoft Azure Administrator, or the knowledge and role-specific skills needed for bigger & better job opportunities? Check out our blog at https://k21academy.com/az10312 which covers: ▪ What Is Microsoft Azure Certification? ▪ Why Is Azure Certification […]

The post [AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Add a deployment to Oracle GoldenGate 19c Microservices

DBASolved - Tue, 2020-01-21 13:33

Once you have an up-and-running Oracle GoldenGate Microservices environment, there may come a time when you need to add another deployment to the environment.  Adding deployments is easily done using Oracle GoldenGate Configuration Assistant (OGGCA).  In the below video, I show you how to add a deployment to an existing ServiceManager.  Additionally, I […]

The post Add a deployment to Oracle GoldenGate 19c Microservices appeared first on DBASolved.

Categories: DBA Blogs

PostgreSQL 13: parallel vacuum for indexes

Yann Neuhaus - Mon, 2020-01-20 01:06

Because of its implementation of MVCC, PostgreSQL needs a way to clean up old/dead row versions, and this is the responsibility of vacuum. Up to PostgreSQL 12 this is done table by table and index by index. There are a lot of parameters to fine-tune autovacuum, but none of them allowed vacuum to run in parallel against a single relation. The only option you had to let autovacuum do more work in parallel was to increase autovacuum_max_workers so that more relations could be worked on at the same time; working against multiple indexes of the same table was not possible. Once PostgreSQL 13 is released, this will change.
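
For reference, this is the parameter in question, shown here with its default value of 3:

postgres=# show autovacuum_max_workers;
 autovacuum_max_workers
------------------------
 3
(1 row)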

When you have a look at the help of vacuum you will notice a new option:

postgres=# \h vacuum
Command:     VACUUM
Description: garbage-collect and optionally analyze a database
Syntax:
VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ]
VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ]

where option can be one of:

    FULL [ boolean ]
    FREEZE [ boolean ]
    VERBOSE [ boolean ]
    ANALYZE [ boolean ]
    DISABLE_PAGE_SKIPPING [ boolean ]
    SKIP_LOCKED [ boolean ]
    INDEX_CLEANUP [ boolean ]
    TRUNCATE [ boolean ]
    PARALLEL integer

and table_and_columns is:

    table_name [ ( column_name [, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-vacuum.html

By providing a positive integer to the “PARALLEL” option you tell vacuum how many background workers should be used to vacuum indexes for a given table (0 disables parallel processing). Let’s do a small test setup to demonstrate this:

postgres=# create table t1 as select i as a, i::text as b, now() as c from generate_series(1,3000000) i;
SELECT 3000000
postgres=# create index i1 on t1(a);
CREATE INDEX
postgres=# 
postgres=# create index i2 on t1(b);
CREATE INDEX
postgres=# create index i3 on t1(c);
CREATE INDEX

One table and four indexes (the fourth index, i4, was created in the same way as the others). If we go for parallel 4 we should see up to four background workers doing the work against the indexes:

postgres=# update t1 set a=5,b='ccc',c=now() where mod(a,5)=0;
UPDATE 600000
postgres=# vacuum (parallel 4) t1;
VACUUM

As the table and the indexes are quite small the operation completes quickly, but at least two parallel workers show up in the process list for the vacuum operation:

postgres 16688 15925 13 07:30 ?        00:01:07 postgres: postgres postgres [local] VACUUM
postgres 19184 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   
postgres 19185 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   

Nice. Please note that indexes are only considered for parallel vacuum when they meet the min_parallel_index_scan_size criteria. Also note that VACUUM FULL does not support parallel processing.
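
A quick way to check the threshold and compare it against the index sizes (512kB is the default; the sizes below are derived from the page counts vacuum reports further down and are only illustrative):

postgres=# show min_parallel_index_scan_size;
 min_parallel_index_scan_size
------------------------------
 512kB
(1 row)

postgres=# select relname, pg_size_pretty(pg_relation_size(oid)) from pg_class where relname in ('i1','i2','i3','i4');
 relname | pg_size_pretty
---------+----------------
 i1      | 112 MB
 i2      | 112 MB
 i3      | 112 MB
 i4      | 183 MB
(4 rows)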

By the way: the current maximum value is 1024, and negative values are rejected:

postgres=# vacuum (parallel -4) t1;
ERROR:  parallel vacuum degree must be between 0 and 1024
LINE 1: vacuum (parallel -4) t1;

The parallel workers also show up in the verbose output:

postgres=# vacuum (parallel 4, verbose true) t1;
INFO:  vacuuming "public.t1"
INFO:  launched 2 parallel vacuum workers for index vacuuming (planned: 2)
INFO:  scanned index "i2" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.24 s, system: 0.06 s, elapsed: 0.89 s
INFO:  scanned index "i1" to remove 600000 row versions
DETAIL:  CPU: user: 0.17 s, system: 0.10 s, elapsed: 1.83 s
INFO:  scanned index "i3" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.16 s, system: 0.14 s, elapsed: 1.69 s
INFO:  scanned index "i4" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.25 s, system: 0.09 s, elapsed: 1.17 s
INFO:  "t1": removed 600000 row versions in 20452 pages
DETAIL:  CPU: user: 0.17 s, system: 0.16 s, elapsed: 1.43 s
INFO:  index "i1" now contains 3000000 row versions in 14308 pages
DETAIL:  600000 index row versions were removed.
1852 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i2" now contains 3000000 row versions in 14305 pages
DETAIL:  600000 index row versions were removed.
1851 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i3" now contains 3000000 row versions in 14326 pages
DETAIL:  600000 index row versions were removed.
3941 index pages have been deleted, 1603 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i4" now contains 3000000 row versions in 23391 pages
DETAIL:  600000 index row versions were removed.
5527 index pages have been deleted, 2246 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "t1": found 600000 removable, 3000000 nonremovable row versions in 21835 out of 22072 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 132 unused item identifiers.
Skipped 0 pages due to buffer pins, 237 frozen pages.
0 pages are entirely empty.
CPU: user: 0.75 s, system: 0.36 s, elapsed: 5.07 s.
INFO:  vacuuming "pg_toast.pg_toast_16392"
INFO:  index "pg_toast_16392_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_toast_16392": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

The post PostgreSQL 13: parallel vacuum for indexes appeared first on Blog dbi services.

Running the (Segment) Space Advisor - on a Partitioned Table

Hemant K Chitale - Sat, 2020-01-18 08:30
Here is a quick demo on running the Segment Space Advisor manually.

I need to start with the ADVISOR privilege:

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:10 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: system
Enter password:
Last Successful login time: Sat Jan 18 2020 22:00:32 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> grant advisor to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


I can then connect with my account to run the Advisor:

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:35 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: hemant
Enter password:
Last Successful login time: Sat Jan 18 2020 21:50:05 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> DECLARE
  l_object_id NUMBER;
  l_task_name VARCHAR2(32767) := 'Advice on My SALES_DATA Table';

BEGIN
  DBMS_ADVISOR.create_task (
    advisor_name => 'Segment Advisor',
    task_name    => l_task_name
  );

  DBMS_ADVISOR.create_object (
    task_name   => l_task_name,
    object_type => 'TABLE',
    attr1       => 'HEMANT',
    attr2       => 'SALES_DATA',
    attr3       => NULL,
    attr4       => NULL,
    attr5       => NULL,
    object_id   => l_object_id
  );

  DBMS_ADVISOR.set_task_parameter (
    task_name => l_task_name,
    parameter => 'RECOMMEND_ALL',
    value     => 'TRUE');

  DBMS_ADVISOR.execute_task(task_name => l_task_name);
end;
/

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
PL/SQL procedure successfully completed.

SQL>
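
Before reading the findings, it is worth checking that the task completed, for example by querying the standard dictionary view DBA_ADVISOR_LOG (the output shown is what you would expect for a successful run):

SQL> select task_name, status from dba_advisor_log
  2  where task_name = 'Advice on My SALES_DATA Table';

TASK_NAME                        STATUS
-------------------------------- -----------
Advice on My SALES_DATA Table    COMPLETED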


I can then review the advice:

SQL> set serveroutput on
begin
  FOR cur_rec IN (SELECT f.impact,
                         o.type,
                         o.attr1,
                         o.attr2,
                         o.attr3,
                         o.attr4,
                         f.message,
                         f.more_info
                  FROM   dba_advisor_findings f, dba_advisor_objects o
                  WHERE  f.object_id = o.object_id
                  AND    f.task_name = o.task_name
                  AND    f.task_name = 'Advice on My SALES_DATA Table'
                  ORDER BY f.impact DESC)
  LOOP
    DBMS_OUTPUT.put_line('..');
    DBMS_OUTPUT.put_line('Type : ' || cur_rec.type);
    DBMS_OUTPUT.put_line('Schema : ' || cur_rec.attr1);
    DBMS_OUTPUT.put_line('Table Name : ' || cur_rec.attr2);
    DBMS_OUTPUT.put_line('Partition Name : ' || cur_rec.attr3);
    DBMS_OUTPUT.put_line('Tablespace Name : ' || cur_rec.attr4);
    DBMS_OUTPUT.put_line('Message : ' || cur_rec.message);
    DBMS_OUTPUT.put_line('More info : ' || cur_rec.more_info);
  END LOOP;
end;
/

SQL> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2015
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2016
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2017
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2018
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2019
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_MAXVALUE
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:

PL/SQL procedure successfully completed.

SQL>


Thus, it reports findings for each partition in the table.
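
A typical follow-up, if you decide to reclaim the space, is an online shrink of the affected partitions. This is only a sketch, not part of the demo above; SHRINK SPACE requires an ASSM tablespace and row movement to be enabled:

SQL> alter table hemant.sales_data enable row movement;

Table altered.

SQL> alter table hemant.sales_data modify partition P_2016 shrink space;

Table altered.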


Note: Script based on a script by Tim Hall (@oraclebase) at https://oracle-base.com/dba/script?category=10g&file=segment_advisor.sql


Categories: DBA Blogs

Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM

Yann Neuhaus - Sat, 2020-01-18 00:00

In the last post I described how you can create your own PostgreSQL image in Nutanix Era; in Nutanix wording this is a “Software profile”. This profile can now be used to deploy PostgreSQL VMs with just a few clicks or by using the API. In this post we’ll look at how this can be done and whether it is really as easy as Nutanix promises. We’ll do it using the graphical console first and then using the API. Let’s go.

For deploying new databases we obviously need to go to the database section of Era:

For now we are going to deploy a single instance:

Our new software profile is available and this is what we use. Additionally a public ssh key should be provided for accessing the node:

Provide the details for the new instance:

Time machine allows you to go back in time and creates automatic storage snapshots according to the schedule you define here:

SLAs define retention policies for the snapshots. I will not cover that here.

Kick it off and wait for the task to finish:

A few minutes later it is done:

Our new database is ready:

From a deployment point of view this is super simple: A few clicks and a new PostgreSQL server is ready to use in minutes. The API call to do the same on the command line would be this:

curl -k -X POST \
	https://10.38.11.9/era/v0.8/databases/provision \
	-H 'Content-Type: application/json' \
	-H 'Authorization: Basic YWRtaW46bngyVGVjaDAwNyE=' \
	-d \
	'{"databaseType":"postgres_database","databaseName":"db2","databaseDescription":"db2","clusterId":"ccdab636-2a11-477b-a358-2b8db0382421","softwareProfileId":"aa099964-e17c-4264-b047-00881dc5e2f8","softwareProfileVersionId":"7ae7a44c-d7c5-46bc-bf0f-f9fe7665f537","computeProfileId":"fb1d20bb-38e4-4964-852f-6209990402c6","networkProfileId":"8ed5214c-4e2a-4ce5-a899-a07103bc60cc","dbParameterProfileId":"3679ab5b-7688-41c4-bcf3-9fb4491544ea","newDbServerTimeZone":"Europe/Zurich","timeMachineInfo":{"name":"db2_TM","description":"","slaId":"4d9dcd6d-b6f8-47f0-8015-9e691c1d3cf4","schedule":{"snapshotTimeOfDay":{"hours":1,"minutes":0,"seconds":0},"continuousSchedule":{"enabled":true,"logBackupInterval":30,"snapshotsPerDay":1},"weeklySchedule":{"enabled":true,"dayOfWeek":"FRIDAY"},"monthlySchedule":{"enabled":true,"dayOfMonth":"17"},"quartelySchedule":{"enabled":true,"startMonth":"JANUARY","dayOfMonth":"17"},"yearlySchedule":{"enabled":false,"dayOfMonth":31,"month":"DECEMBER"}},"tags":[],"autoTuneLogDrive":true},"provisionInfo":[{"name":"application_type","value":"postgres_database"},{"name":"nodes","value":"1"},{"name":"listener_port","value":"5432"},{"name":"proxy_read_port","value":"5001"},{"name":"proxy_write_port","value":"5000"},{"name":"database_size","value":10},{"name":"working_dir","value":"/tmp"},{"name":"auto_tune_staging_drive","value":true},{"name":"enable_synchronous_mode","value":false},{"name":"dbserver_name","value":"postgres-2"},{"name":"dbserver_description","value":"postgres-2"},{"name":"ssh_public_key","value":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDgJb4co+aaRuyAJv8F6fsz2YgO0ZHldX6qS22c813iDeUGWP/1bGhF598uODlgK5nx29sW2WTlcmorZuCgHC2BJfzhL1xsL5Ca94uAEg48MbA+1+Woe49mF6bPDJWLXqC0YUpCDBZI9VHnrij4QhX2r3G87W0WNnNww1POwIJN3xpVq/DnDWye+9ZL3s+eGR8d5f71iKnBo0BkcvY5gliOtM/DuLT7Cs7KkjPgY+S6l5Cztvcj6hGO96zwlOVN3kjB18RH/4q6B7YkhSxgTDMXCdoOf9F6BIhAjZga4lodHGbkMEBW+BJOnYnFTl+jVB/aY1dALECnh4V+rr0Pd8EL daw@ltdaw"},{"name":"db_password","value":"postgres"}]}'

As said, provisioning is easy. Now lets see how the instance got created. Using the IP Address that was assigned, the postgres user and the public ssh key we provided we can connect to the VM:

ssh postgres@10.38.11.40
The authenticity of host '10.38.11.40 (10.38.11.40)' can't be established.
ECDSA key fingerprint is SHA256:rN4QzdL2y55Lv8oALnW6pBQzBKOInP+X4obiJrqvK8o.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.38.11.40' (ECDSA) to the list of known hosts.
Last login: Fri Jan 17 18:48:26 2020 from 10.38.11.9

The first thing I would try is to connect to PostgreSQL:

-bash-4.2$ psql postgres
psql (11.6 dbi services build)
Type "help" for help.

postgres=# select version();
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
(1 row)

postgres=#

Ok, that works. Before I created the software profile in the last post I had installed our DMK, which normally gets loaded automatically because the required bits are added to “.bash_profile” (you can see that in the last post because PS1 is not the default). But here I did not get our environment because Era has overwritten “.bash_profile”:

-bash-4.2$ cat .bash_profile
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/u01/app/postgres/product/11/db_6/bin:/u01/app/postgres/product/11/db_6/bin
export PGDATA=/pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
export PGDATABASE=postgres
export PGUSER=postgres
export PGPORT=5432
export ANSIBLE_LIBRARY=/opt/era_base/era_engine/drivers/nutanix_era/era_drivers/AnsibleModules/bin

It would be great if the original “.bash_profile” were preserved so custom adjustments are not lost.

Looking at the mount points:

-bash-4.2$ df -h
Filesystem                                                                                                                          Size  Used Avail Use% Mounted on
devtmpfs                                                                                                                            7.8G     0  7.8G   0% /dev
tmpfs                                                                                                                               7.8G  8.0K  7.8G   1% /dev/shm
tmpfs                                                                                                                               7.8G  9.7M  7.8G   1% /run
tmpfs                                                                                                                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_centos7postgres12-root                                                                                            26G  2.4G   24G   9% /
/dev/sda1                                                                                                                          1014M  149M  866M  15% /boot
/dev/sdb                                                                                                                             27G   74M   26G   1% /u01/app/postgres/product/11/db_6
tmpfs                                                                                                                               1.6G     0  1.6G   0% /run/user/1000
/dev/mapper/VG_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54                          50G  108M   47G   1% /pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/VG_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54           9.8G   44M  9.2G   1% /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_software_1700f784395111eab53f506b8d241a39      1.5G  534M  821M  40% /opt/era_base/era_engine
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_logs_1700f784395111eab53f506b8d241a39          3.9G   18M  3.6G   1% /opt/era_base/logs
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_config_1700f784395111eab53f506b8d241a39         47M  1.1M   42M   3% /opt/era_base/cfg
/dev/mapper/ntnx_era_agent_vg_5c3a7120261b4354a3b38ee168726298-ntnx_era_agent_lv_db_stagging_logs_5c3a7120261b4354a3b38ee168726298   99G   93M   94G   1% /opt/era_base/db_logs

The PostgreSQL binaries got their own mountpoint. PGDATA is on its own mountpoint and it seems we have a tablespace in “/pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54”:

-bash-4.2$ psql -c "\db" postgres
                            List of tablespaces
     Name      |  Owner   |                    Location
---------------+----------+-------------------------------------------------
 pg_default    | postgres |
 pg_global     | postgres |
 tblspc_dbidb1 | postgres | /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
(3 rows)

-bash-4.2$ psql -c "\l+" postgres
                                                                     List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   |  Tablespace   |                Description
-----------+----------+----------+-------------+-------------+-----------------------+---------+---------------+--------------------------------------------
 dbidb1    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | tblspc_dbidb1 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | pg_default    | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |               |
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |               |
(4 rows)

This confirms that our database “dbidb1” was placed into a separate tablespace. There is another mountpoint which seems to be there for the archived WAL files, and we can confirm that as well:

-bash-4.2$ psql
psql (11.6 dbi services build)
Type "help" for help.

postgres=# show archive_command ;
                      archive_command
-----------------------------------------------------------
  sh /opt/era_base/cfg/postgres/archive_command.sh %p  %f
(1 row)

postgres=# \! cat /opt/era_base/cfg/postgres/archive_command.sh
test ! -f /opt/era_base/db_logs/dbidb1/$2 &&  cp -p $1 /opt/era_base/db_logs/dbidb1//$2
postgres=#
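
You can also confirm from the statistics collector that archiving actually works; pg_stat_archiver is a standard view since PostgreSQL 9.4 (the values below are illustrative):

postgres=# select archived_count, failed_count, last_archived_wal from pg_stat_archiver;
 archived_count | failed_count |    last_archived_wal
----------------+--------------+--------------------------
             42 |            0 | 000000010000000000000029
(1 row)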

So archiving is enabled and this is what I expected. The costing parameters seem to be the default:

postgres=# select name,setting from pg_settings where name like '%cost%';
             name             | setting
------------------------------+---------
 autovacuum_vacuum_cost_delay | 20
 autovacuum_vacuum_cost_limit | -1
 cpu_index_tuple_cost         | 0.005
 cpu_operator_cost            | 0.0025
 cpu_tuple_cost               | 0.01
 jit_above_cost               | 100000
 jit_inline_above_cost        | 500000
 jit_optimize_above_cost      | 500000
 parallel_setup_cost          | 1000
 parallel_tuple_cost          | 0.1
 random_page_cost             | 4
 seq_page_cost                | 1
 vacuum_cost_delay            | 0
 vacuum_cost_limit            | 200
 vacuum_cost_page_dirty       | 20
 vacuum_cost_page_hit         | 1
 vacuum_cost_page_miss        | 10
(17 rows)

Memory parameters seem to be the default as well:

postgres=# show shared_buffers ;
 shared_buffers
----------------
 128MB
(1 row)

postgres=# show work_mem;
 work_mem
----------
 4MB
(1 row)

postgres=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 64MB
(1 row)

postgres=# show effective_cache_size ;
 effective_cache_size
----------------------
 4GB
(1 row)
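
Should you need to change these defaults later, the standard PostgreSQL tooling works as usual. A minimal sketch (the values are examples only; shared_buffers needs a restart to take effect, work_mem only a reload):

postgres=# alter system set shared_buffers='2GB';
ALTER SYSTEM
postgres=# alter system set work_mem='32MB';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)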

There are “Parameter profiles” you can use and create but currently there are not that many parameters that can be adjusted:

That’s it for now. Deploying new PostgreSQL instances is really easy. I am currently not sure why there needs to be a tablespace, but maybe that becomes clear in a future post when we’ll look at backup and restore. From a PostgreSQL configuration perspective you will certainly need to make some adjustments once the databases become bigger and the load increases.

The post Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM appeared first on Blog dbi services.

Data Guard Fast-Start Failover Test – Shutdown Standby Host

Michael Dinh - Fri, 2020-01-17 15:41

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Review primary host and start observer:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:42:54 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 13 seconds ago)

DGMGRL> enable fast_start failover
Enabled.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 12 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> show database cdb1

Database - cdb1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 2.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> start observer
[P001 01/17 20:46:01.38] Authentication failed.
DGM-16979: Unable to log on to the primary or standby database as SYSDBA
Failed.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> start observer
Observer started
Note that the observer must be able to authenticate against both databases over the network, so OS authentication (dgmgrl /) is not sufficient; hence the DGM-16979 error above and the successful retry after connecting as sys@cdb1.

Restart standby host, listener, and database:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant ssh
Last login: Fri Jan 17 20:11:35 2020 from 10.0.2.2
[vagrant@ol7-121-dg2 ~]$ sudo su - oracle
Last login: Fri Jan 17 20:11:44 UTC 2020 on pts/0
[oracle@ol7-121-dg2 ~]$ . oraenv <<< cdb1
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 17-JAN-2020 20:53:20

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                17-JAN-2020 20:53:22
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
  Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$ cd /sf_working/sql
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:53:38 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             520097408 bytes
Database Buffers         1073741824 bytes
Redo Buffers               13848576 bytes
Database mounted.
SYS@cdb1> @stby.sql

Session altered.

*** v$database ***

DB          OPEN                   DATABASE           REMOTE     SWITCHOVER      DATAGUARD  PRIMARY_DB
UNIQUE_NAME MODE                   ROLE               ARCHIVE    STATUS          BROKER     UNIQUE_NAME
----------- ---------------------- ------------------ ---------- --------------- ---------- ---------------
cdb1_stby   MOUNTED                PHYSICAL STANDBY   ENABLED    NOT ALLOWED     ENABLED    cdb1

*** gv$archive_dest ***

                                                                                              MOUNT
 THREAD#  DEST_ID DESTINATION               STATUS       TARGET           SCHEDULE PROCESS       ID
-------- -------- ------------------------- ------------ ---------------- -------- ---------- -----
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   ARCH           0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   RFS            0

*** gv$archive_dest_status ***

                               DATABASE        RECOVERY
 INST_ID  DEST_ID STATUS       MODE            MODE                    GAP_STATUS      ERROR
-------- -------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1        1 VALID        MOUNTED-STANDBY IDLE                                    NONE
       1       32 VALID        UNKNOWN         IDLE                                    NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                   26 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO              25 17-JAN-2020 20:53:53         2 41.68333
       1        1 YES             23 17-JAN-2020 20:12:12

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 24 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                                    day(2) to second(0) interval

*** gv$managed_standby ***

no rows selected

SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Screen output from observer:
DGMGRL> start observer
Observer started
[W000 01/17 20:48:58.27] The primary database has requested a transition to the UNSYNC/LAGGING state.
[W000 01/17 20:48:58.28] Permission granted to the primary database to transition to UNSYNC/LAGGING state.
[W000 01/17 20:50:01.29] The primary database has been in UNSYNC/LAGGING state for 63 seconds.
[W000 01/17 20:51:04.31] The primary database has been in UNSYNC/LAGGING state for 126 seconds.
[W000 01/17 20:52:07.33] The primary database has been in UNSYNC/LAGGING state for 189 seconds.
[W000 01/17 20:53:10.36] The primary database has been in UNSYNC/LAGGING state for 252 seconds.
[W000 01/17 20:54:13.39] The primary database has been in UNSYNC/LAGGING state for 315 seconds.
[W000 01/17 20:54:16.39] The primary database returned to SYNC/NOT LAGGING state.
Validate Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 10 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$
Open database read only:

This is required because the database is not registered as a cluster resource.
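
For comparison: if the database had been registered with Oracle Restart, the start mode could be persisted once instead of opening the database manually after each restart (a sketch; it assumes Grid Infrastructure manages the database, which is not the case here):

srvctl modify database -db cdb1_stby -startoption "READ ONLY"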

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 21:33:07 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> alter database open read only;

Database altered.

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

Database Status:
SUCCESS

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$

Dbvisit 9: Adding datafiles and or tempfiles

Yann Neuhaus - Fri, 2020-01-17 11:57

One question I had was whether the standby_file_management parameter is relevant in a Dbvisit environment with Oracle Standard Edition. I did some tests and show here what I did.
We assume that the Dbvisit environment is already set up and that the replication is working fine:

[oracle@dbvisit1 trace]$ /u01/app/dbvisit/standby/dbvctl -d dbstd -i
=============================================================
Dbvisit Standby Database Technology (9.0.08_0_g99a272b) (pid 19567)
dbvctl started on dbvisit1: Fri Jan 17 16:48:16 2020
=============================================================

Dbvisit Standby log gap report for dbstd at 202001171648:
-------------------------------------------------------------
Description       | SCN          | Timestamp
-------------------------------------------------------------
Source              2041731        2020-01-17:16:48:18 +01:00
Destination         2041718        2020-01-17:16:48:01 +01:00

Standby database time lag (DAYS-HH:MI:SS): +00:00:17

Report for Thread 1
-------------------
SOURCE
Current Sequence 8
Last Archived Sequence 7
Last Transferred Sequence 7
Last Transferred Timestamp 2020-01-17 16:48:07

DESTINATION
Recovery Sequence 8

Transfer Log Gap 0
Apply Log Gap 0

=============================================================
dbvctl ended on dbvisit1: Fri Jan 17 16:48:23 2020
=============================================================

[oracle@dbvisit1 trace]$

standby_file_management is currently set to MANUAL on both servers:

[oracle@dbvisit1 trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:50:50 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>


[oracle@dbvisit2 back_dbvisit]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:51:15 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL>  show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>

Let’s create a tablespace MYTAB on the primary database

SQL> create tablespace mytab datafile '/u01/app/oracle/oradata/DBSTD/mytab01.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

A few moments later we can see that the new datafile has been replicated to the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL>  select name from v$datafile
  2  ;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

Now let’s repeat the tablespace creation while the parameter is set to AUTO on both sides:

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
SQL>  create tablespace mytab2 datafile '/u01/app/oracle/oradata/DBSTD/mytab201.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL>

A few moments later the tablespace mytab2 was also replicated to the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/mytab201.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

6 rows selected.

In the Dbvisit documentation we can find this:
Note 2: This feature is independent of the Oracle parameter STANDBY_FILE_MANAGEMENT. Dbvisit Standby will detect if STANDBY_FILE_MANAGEMENT has added the datafile to the standby database, and if so, Dbvisit Standby will not add the datafile.
Note 3: STANDBY_FILE_MANAGEMENT can only be used in Enterprise Edition and should not be set in Standard Edition.

Dbvisit does not use STANDBY_FILE_MANAGEMENT for datafile replication, so I decided to set this parameter back to its default value, which is MANUAL.
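
Setting it back is a one-liner on each server (the parameter is dynamic, assuming an spfile is in use):

SQL> alter system set standby_file_management=MANUAL scope=both;

System altered.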

What about adding a tempfile in a Dbvisit environment? On the primary I create a new temporary tablespace:

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL> create temporary tablespace temp2 tempfile '/u01/app/oracle/oradata/DBSTD/temp2_01.dbf' size 10M;

Tablespace created.

SQL> alter system switch logfile;

System altered.

SQL>

We can see on the primary that we now have two tempfiles.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

On the standby side, the new temporary tablespace was replicated:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

But the new tempfile is not listed on the standby:

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf

SQL>

In fact this is the expected behavior. In the documentation we can find the following:
If your preference is to have exactly the same number of temp files referenced in the standby control file as your current primary database, then once a new temp file has been added on the primary, you need to recreate a standby control file by running the following command from the primary server:
dbvctl -f create_standby_ctl -d DDC

So let’s recreate the standby control file:

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -f create_standby_ctl -d dbstd
=>Replace current standby controfiles on dbvisit2 with new standby control
file?  [No]: yes

>>> Create standby control file... done

>>> Copy standby control file to dbvisit2... done

>>> Recreate standby control file... done

>>> Standby controfile(s) on dbvisit2 recreated. To complete please run dbvctl on the
    primary, then on the standby.
[oracle@dbvisit1 ~]$

Afterwards we can verify that the new tempfile is now visible on the standby side:

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

The post Dbvisit 9: Adding datafiles and or tempfiles appeared first on Blog dbi services.

Deploying your own PostgreSQL image on Nutanix Era

Yann Neuhaus - Fri, 2020-01-17 11:03

Some days ago we had a very good training on Nutanix. Nutanix is a hyper-converged infrastructure, which means everything is software driven and the system can be deployed on many hardware configurations. I will not go into the details of the system itself but rather look at one component/module called Era. Era promises to simplify database deployments by providing a clean and simple user interface (and an API) with deployment procedures for PostgreSQL, MS SQL, MySQL, MariaDB and Oracle. There are predefined templates you can use, but in this post I’ll look at how you can use Era to deploy your own PostgreSQL image.

Before you can register a software profile with Era there needs to be a VM up and running which already has PostgreSQL installed. For that I’ll import the latest CentOS 7 ISO with Prism (CentOS 8 is not yet supported).

Importing images is done in the “Images Configuration” section under “Settings” of Prism:


Once you start the upload a new task is generated which can be monitored in the tasks section:

Now that the image is ready we need to deploy a new virtual machine which will use the image as installation source:








As the virtual machine is now defined we need to power it on and then launch the console:


Follow your preferred way of doing the CentOS installation. Once it is done you need to power off the virtual machine to remove the ISO; otherwise you will land in the installation procedure every time the virtual machine is started:


After you have powered the virtual machine on again you should be able to connect with ssh:

The next step is to install PostgreSQL in whatever way you prefer. Here is an example of doing it from source code. We will not create a PostgreSQL instance; the binaries are enough. In my case everything was installed here:

 postgres@centos7postgres12:/home/postgres/ [pg121] echo $PGHOME
/u01/app/postgres/product/12/db_1/
postgres@centos7postgres12:/home/postgres/ [pg121] ls $PGHOME
bin  include  lib  share
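
The source build itself is not shown here; a minimal sketch that would end up with the layout above (the version and the list of build dependencies are assumptions, adjust as needed):

sudo yum install -y gcc make readline-devel zlib-devel
wget https://ftp.postgresql.org/pub/source/v12.1/postgresql-12.1.tar.bz2
tar -xf postgresql-12.1.tar.bz2 && cd postgresql-12.1
./configure --prefix=/u01/app/postgres/product/12/db_1
make && make install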

Now that we have our PostgreSQL server we need to register it in Era. Before doing that you should download and execute the pre-check script on the new database server:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh

----------------------------------------------------------------------------------
   Error: Database type not specified
   Syntax: $ ./era_linux_prechecks.sh -t|--database_type  [-c|--cluster_ip ] [-p|--cluster_port] [-d|--detailed]
   Database type can be: oracle_database, postgres_database, mariadb_database, mysql_database
----------------------------------------------------------------------------------

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : NO
         15] unzip                : YES
         16] rsync                : NO

     Summary:
     --------
         This machine does not satisfy all of the dependencies required by Era.
         It can not be onboarded to Era unless all of these are satified.

     **WARNING: Cluster API was not provided. Couldn't go ahead with the Prism API connectivity check.
     Please ensure Prism APIs are callable from the host.
====================================================================
postgres@centos7postgres12:/home/postgres/ [pg121]

In my case only “zip” and “rsync” are missing which of course is easy to fix:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo yum install -y zip rsync
...
postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : YES
         15] unzip                : YES
         16] rsync                : YES

     Summary:
     --------
         This machine satisfies dependencies required by Era, it can be onboarded.

Looks good and the database server can now be registered:




Era as well has a task list which can be monitored:

… and then it fails because PostgreSQL 12.1 is not supported. That is fine but I would have expected the pre-check script to tell me that. Same procedure again, this time with PostgreSQL 11.6 and that succeeds:

This database server is now the source for a new “Software profile”:




And that’s it: Our new PostgreSQL software profile is ready to use. In the next post we’ll try to deploy a new virtual machine from that profile.

The post Deploying your own PostgreSQL image on Nutanix Era appeared first on Blog dbi services.

Dbvisit Standby 9 : Do you know the new snapshot feature?

Yann Neuhaus - Thu, 2020-01-16 10:23

The Dbvisit snapshot option is a new feature available starting with version 9.0.06. I have tested this option and in this blog I describe the tasks I did.
The configuration I am using is the following:
dbvisit1: primary server
dbvisit2: standby server
orcl: Oracle 19c database
We assume that the Dbvisit environment is already set up and the replication is working fine; see a previous blog for setting up Dbvisit Standby.
There are two options:
-Snapshot Groups
-Single Snapshots

Snapshot Groups
This option is ideal for companies using Oracle Standard Edition (SE, SE1, SE2) that would like to have a database that is read-only but kept up to date. It allows you to create two or more read-only snapshots of the standby at a given time interval. Each snapshot is opened read-only after its creation. A new service pointing to the latest snapshot is created and automatically managed by the listener.
The feature is very easy to use. As it requires managing filesystems (mount and umount), the user that creates the snapshots needs some admin privileges.
In our case we configured the user with full sudo privileges, but you can find all required privileges in the documentation.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] sudo grep oracle /etc/sudoers
oracle ALL=(ALL) NOPASSWD:ALL
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

The option can be used from the command line or from the graphical dbvserver console, but it is highly recommended to use the graphical tool.
When the snapshot option is available with your license, you will see the following tab in the dbvserver console:

To create a snapshot group we just have to choose the option NEW SNAPSHOT GROUP.

And then fill in the required information. In this example:
-We will create a service named snap_service
-We will generate a snapshot every 15 minutes (900 seconds)
-There will be a maximum of 2 snapshots; when the limit is reached, the oldest snapshot is deleted. Note that the oldest snapshot is deleted only once the new snapshot has been successfully created, which is why we can sometimes have 3 snapshots
-The snapshots will be prefixed by MySna

Once the snapshot group is created without errors, we have to start the snapshot generation by clicking on the green button.

And then we can see the status

At OS level we can verify that the first snapshot was created and that the corresponding instance has started:

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle    7794  1892  0 10:00 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also verify that the service snap_service is registered in the listener. This service automatically points to the latest snapshot of the group:

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 16-JAN-2020 10:01:49

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbvisit2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                16-JAN-2020 09:05:07
Uptime                    0 days 0 hr. 56 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/dbvisit2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbvisit2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
…
…
…
Service "snap_service" has 1 instance(s).
  Instance "MySna001", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

To connect to this service, we just have to create an alias like

snapgroup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = snap_service)
    )
  )

15 minutes later we can see that a new snapshot was generated

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle   11355     1  0 10:11 ?        00:00:00 ora_pmon_MySna002
oracle   11866  1892  0 10:13 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

Note that we can only open the snapshot in read-only mode:

oracle@dbvisit1:/home/oracle/ [orcl (CDB$ROOT)] sqlplus sys/*****@snapgroup as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL> alter pluggable database all open;
alter pluggable database all open
*
ERROR at line 1:
ORA-65054: Cannot open a pluggable database in the desired mode.


SQL>  alter pluggable database all open read only;

Pluggable database altered.

Of course you have probably seen that you can stop, start, pause and remove snapshots from the GUI.

Single Snapshot
This option allows creating read-only as well as read-write snapshots of the database.
To create a single snapshot, we just have to choose the option NEW SINGLE SNAPSHOT
In the following example the snapshot will be opened in read-write mode

At the end of the creation we can see the status

We can verify that a service SingleSn was also created

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status | grep Singl
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
Service "SingleSn" has 1 instance(s).
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service

And that the instance SingleSn is started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    3294     1  0 16:04 ?        00:00:00 ora_pmon_SingleSn
oracle    3966  1748  0 16:08 pts/0    00:00:00 grep --color=auto pmon
oracle   14349     1  0 13:57 ?        00:00:00 ora_pmon_orcl
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also see that at OS level a filesystem is mounted for the snapshot, so there must be enough free space in the LVM volume group that hosts the database.

oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)] df -h | grep snap
/dev/mapper/ora_data-SingleSn   25G   18G  6.3G  74% /u01/app/dbvisit/standby/snap/orcl/SingleSn
oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)]

Using the alias

singlesnap =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SingleSn)
    )
  )
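
Connecting through this alias, for example as we did earlier for the snapshot group (a sketch; password masked as before):

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] sqlplus sys/*****@singlesnap as sysdba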

We can see that the new snapshot is opened in read-write mode

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SINGLESN  READ WRITE

SQL>
Conclusion

What we will retain is that a snapshot group is a set of snapshots taken at a regular time interval. These snapshots can only be opened in read-only mode.
A single snapshot can be created in read-only or read-write mode. When opened in read-write mode, it can perhaps be compared to an Oracle snapshot standby.
The Dbvisit snapshot option requires a license and is only supported on Linux.

The post Dbvisit Standby 9 : Do you know the new snapshot feature? appeared first on Blog dbi services.

Disable fast_start failover

Michael Dinh - Wed, 2020-01-15 18:54

Data Guard Fast-Start Failover Test

If you recall, the observer was started from ol7-121-dg1, which was the standby at the time and is now the primary after failover.

[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Kill observer process from OS:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 16 00:32:04 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg1 sql]$

[oracle@ol7-121-dg1 sql]$ ps -ef|grep dgmgrl
oracle 10381 32397 0 00:32 pts/1 00:00:00 grep --color=auto dgmgrl
oracle 31831 30778 0 Jan15 pts/0 00:00:01 dgmgrl

[oracle@ol7-121-dg1 sql]$ kill -9 31831
Output from Observer session:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020

Killed *****
[oracle@ol7-121-dg1 sql]$
Disable fast_start failover:
[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Error: ORA-16820: fast-start failover observer is no longer observing this database

cdb1_stby - (*) Physical standby database
Error: ORA-16820: fast-start failover observer is no longer observing this database

Fast-Start Failover: ENABLED

Configuration Status:
ERROR (status updated 31 seconds ago)

DGMGRL> disable fast_start failover
Disabled.

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 12 seconds ago)

DGMGRL> exit
[oracle@ol7-121-dg1 sql]$
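
To turn fast-start failover back on later, the command is simply the reverse; a quick sketch (the configuration will again show ORA-16819 warnings until an observer is started):

DGMGRL> enable fast_start failover
Enabled.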

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Michael Dinh - Wed, 2020-01-15 16:37

Note: the primary database is cdb1_stby because a failover was previously performed.

This also demonstrates why it may not be a good idea to suffix a standby database name with stby.

Review Data Guard using sqlplus:
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      CDB1_STBY
pdb_file_name_convert                string
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1
OL7-121-DG2:(SYS@cdb1):PRIMARY>

********************************************************************************

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      cdb1
pdb_file_name_convert                string
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1_stby
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY>
Review Data Guard configuration:
DGMGRL> show configuration verbose
Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
cdb1 - Physical standby database

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
TraceLevel = 'USER'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1_stby

Database - cdb1_stby

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1_stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_stby_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1

Database - cdb1

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 2.00 KByte/s
Active Apply Rate: 0 Byte/s
Maximum Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1_stby'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

DGMGRL>
Validate Data Guard configuration:
DGMGRL> validate database verbose cdb1_stby

Database Role: Primary database

Ready for Switchover: Yes

Capacity Information:
Database Instances Threads
cdb1_stby 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 1

Flashback Database Status:
cdb1_stby: On

Data file Online Move in Progress:
cdb1_stby: No

Transport-Related Information:
Transport On: Yes

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared

Automatic Diagnostic Repository Errors:
Error cdb1_stby
No logging operation NO
Control file corruptions NO
System data file missing NO
System data file corrupted NO
System data file offline NO
User data file missing NO
User data file corrupted NO
User data file offline NO
Block Corruptions found NO

********************************************************************************

DGMGRL> validate database verbose cdb1

Database Role: Physical standby database
Primary Database: cdb1_stby

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

Capacity Information:
Database Instances Threads
cdb1_stby 1 1
cdb1 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 3
cdb1 TEMP Files: 3

Flashback Database Status:
cdb1_stby: On
cdb1: On

Data file Online Move in Progress:
cdb1_stby: No
cdb1: No

Standby Apply-Related Information:
Apply State: Running
Apply Lag: 0 seconds (computed 1 second ago)
Apply Delay: 0 minutes

Transport-Related Information:
Transport On: Yes
Gap Status: No Gap
Transport Lag: 0 seconds (computed 1 second ago)
Transport Status: Success

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared
cdb1 Online Redo Log Files: Cleared
cdb1 Standby Redo Log Files: Available

Current Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1_stby) (cdb1)
1 3 4 Sufficient SRLs

Future Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1) (cdb1_stby)
1 3 4 Sufficient SRLs

Current Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1_stby) (cdb1)
1 50 MBytes 50 MBytes

Future Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1) (cdb1_stby)
1 50 MBytes 50 MBytes

Apply-Related Property Settings:
Property cdb1_stby Value cdb1 Value
DelayMins 0 0
ApplyParallel AUTO AUTO

Transport-Related Property Settings:
Property cdb1_stby Value cdb1 Value
LogXptMode ASYNC ASYNC
RedoRoutes
Dependency
DelayMins 0 0
Binding optional optional
MaxFailure 0 0
MaxConnections 1 1
ReopenSecs 300 300
NetTimeout 30 30
RedoCompression DISABLE DISABLE
LogShipping ON ON

Automatic Diagnostic Repository Errors:
Error cdb1_stby cdb1
No logging operation NO NO
Control file corruptions NO NO
SRL Group Unavailable NO NO
System data file missing NO NO
System data file corrupted NO NO
System data file offline NO NO
User data file missing NO NO
User data file corrupted NO NO
User data file offline NO NO
Block Corruptions found NO NO

DGMGRL>
Validate Data Guard connectivity from all hosts:
[oracle@ol7-121-dg2 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg2 sql]$

********************************************************************************

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg1 sql]$
Start Data Guard observer from standby:

Note: running the observer from the standby host is not good practice in the real world (the observer should run on a host separate from both primary and standby) and is done here for testing purposes only.

[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started
DGMGRL>
Shutdown primary host:
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:26:51 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$ logout

[vagrant@ol7-121-dg2 ~]$ logout
Connection to 127.0.0.1 closed.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant halt
==> default: Attempting graceful shutdown of VM...

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Failover succeeded:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020
Review Data Guard configuration:
[oracle@ol7-121-dg1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Warning: ORA-16829: fast-start failover configuration is lagging

cdb1_stby - (*) Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 41 seconds ago)

DGMGRL>
Start primary host:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master) 
$ vagrant status
Current machine states:

default running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Start listener:
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 15-JAN-2020 21:33:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 15-JAN-2020 21:33:12
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$
Startup mount database:
[oracle@ol7-121-dg2 sql]$ ps -ef|grep pmon
oracle 17762 17567 0 21:34 pts/0 00:00:00 grep --color=auto pmon

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:34:09 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2924928 bytes
Variable Size 520097408 bytes
Database Buffers 1073741824 bytes
Redo Buffers 13848576 bytes
Database mounted.
SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Review Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration
ORA-16795: the standby database needs to be re-created

Configuration details cannot be determined by DGMGRL
DGMGRL>
Review Observer:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Review and Validate Data Guard configuration:
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database
Warning: ORA-16829: fast-start failover configuration is lagging

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 54 seconds ago)

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS (status updated 24 seconds ago)

DGMGRL> validate database cdb1

Database Role: Primary database

Ready for Switchover: Yes

DGMGRL> validate database cdb1_stby

Database Role: Physical standby database
Primary Database: cdb1

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

DGMGRL>

Installing Oracle GoldenGate 19c Microservices – Binaries Only

DBASolved - Wed, 2020-01-15 12:43

It has taken me some time to get around to doing more videos; mostly because I don’t like the way that I sound when I hear my voice. Maybe next time, I’ll just do a silent video..lol.  Moving forward, I’ll make more of an effort to produce more videos that explain a few of […]

The post Installing Oracle GoldenGate 19c Microservices – Binaries Only appeared first on DBASolved.

Categories: DBA Blogs

@DATE, @DATENOW … Date functions in GoldenGate

DBASolved - Tue, 2020-01-14 11:59

Dates are always fun to play with when it comes to the Oracle Database, much less any other relational database.  Dates are used for many things in a wide range of applications and schemas.  You have birthdays, ship dates, order dates, registration dates, etc….  You get the picture.   In the Oracle Database you can […]

The post @DATE, @DATENOW … Date functions in GoldenGate appeared first on DBASolved.

Categories: DBA Blogs

12.2.0.1 And Later Support (Limited) Extended Stats On Virtual columns / Column Groups of Expressions

Randolf Geist - Mon, 2020-01-13 05:14
I do have a demo as part of my optimizer-related workshops that shows the restriction / limitation of DBMS_STATS not supporting extended statistics on virtual columns / column groups of expressions, so for example the combination of both expressions and column groups, like ((TRUNC(COL1)), (TRUNC(COL2))).

Surprisingly, when following a certain sequence of operations, this starts working (to some degree) from 12.2.0.1 on.

The official documentation up to and including 19c still mentions this as a restriction, and since it doesn't work when explicitly referencing virtual columns (see the test case for what exactly I mean by this) I assume this is more like a side effect / unintended feature.

Nevertheless, the optimizer happily picks up this additional information and comes up with improved estimates when there is a combination of skew and correlation on expressions, for example.

The following test case shows the change in behaviour from 12.2.0.1 on:

set echo on linesize 200 trimspool on trimout on tab off pagesize 999 timing on

alter session set nls_language = american;

drop table t1;

purge table t1;

-- Initialize the random generator for "reproducible" pseudo-randomness
exec dbms_random.seed(0)

-- ATTR1 and ATTR2 are both skewed and correlated
create table t1
as
select
rownum as id
, trunc(dbms_random.value(1, 1000000000000)) as fk
, case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
, case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;

-- Histograms on ATTR1 and ATTR2 for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

-- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

-- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

-- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
-- Officially this isn't allowed and errors out up to and including 12.1.0.2
-- ORA-20001: Error when processing extension - missing right parenthesis
-- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
-- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
-- Remove the previous step and it will error out:
-- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

-- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
-- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
-- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
explain plan for
select
count(*)
from
t1 a
where
trunc(attr1) = 1
and trunc(attr2) = 1;

select * from table(dbms_xplan.display(format => 'TYPICAL'));

-- But: Explicitly referencing a virtual column doesn't work
-- This will error out:
-- ORA-20001: Error when processing extension - virtual column is referenced in a column expression

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

-- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')


alter table t1 add (trunc_attr1 as (trunc(attr1)));

alter table t1 add (trunc_attr2 as (trunc(attr2)));

-- This works and is supported in all versions supporting virtual columns
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

-- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
-- even in 12.2.0.1 and later
-- But: Works in 12.2.0.1 and later if the call to
-- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
-- above is removed, because the extension then already exists (!)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')

-- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

And here is the output I get when using 12.1.0.2:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:39:00 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl121 DELLXPS13 368 46472 12.1.0.2.0 20200110 6908 59 15536:4996 00007FFA110E9B88 00007FFA12B6F1E8



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.06
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:13.07
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.43
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.08
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.51
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this isn't allowed and errors out up to and including 12.1.0.2
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
SQL> -- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

SQL> -- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.02
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4777 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| T1 | 807K| 6305K| 4777 (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.11
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.13
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL>
SQL> -- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis



Elapsed: 00:00:00.02
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.57
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 34634
ORA-06512: at line 1


Elapsed: 00:00:00.06
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

And that is what I get from 12.2.0.1 on, here using 19.3:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:35:18 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl19c DELLXPS13 146 48961 19.0.0.0.0 20200110 5648 53 7260:13644 00007FF91687B3D8 00007FF91656B858



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.10
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:11.89
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.68
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.75
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.43
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this isn't allowed and errors out up to and including 12.1.0.2
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
SQL> -- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.24
SQL>
SQL> -- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.01
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4797 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| T1 | 893K| 6977K| 4797 (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.10
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> -- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.11
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.49
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 40751
ORA-06512: at "SYS.DBMS_STATS", line 40035
ORA-06512: at "SYS.DBMS_STATS", line 38912
ORA-06512: at "SYS.DBMS_STATS", line 15726
ORA-06512: at "SYS.DBMS_STATS", line 22064
ORA-06512: at "SYS.DBMS_STATS", line 22162
ORA-06512: at "SYS.DBMS_STATS", line 22232
ORA-06512: at "SYS.DBMS_STATS", line 22864
ORA-06512: at "SYS.DBMS_STATS", line 38313
ORA-06512: at "SYS.DBMS_STATS", line 39738
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at "SYS.DBMS_STATS", line 40732
ORA-06512: at line 1


Elapsed: 00:00:00.09
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.


Elapsed: 00:00:05.32
SQL>

So, from 12.2.0.1 it looks like extended statistics on virtual columns are supported to some degree, when following a certain sequence of operations: first create extended statistics on each of the expressions used, which creates corresponding virtual columns under the covers, and afterwards create extended statistics using those expressions as part of the column group expression.

Strangely enough, when explicitly creating virtual columns for those expressions, creating a column group explicitly referencing those virtual columns doesn't work - but using the expressions covered by the virtual columns instead works even with those virtual columns created explicitly.
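
As a quick sanity check (my addition, not part of the test case above), the dictionary view USER_STAT_EXTENSIONS shows what DBMS_STATS actually created under the covers, including whether an extension is system-generated and droppable:

-- List the extensions (and their expressions) registered for T1
select  extension_name, extension, creator, droppable
from    user_stat_extensions
where   table_name = 'T1';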

Collection limitation

Jonathan Lewis - Fri, 2020-01-10 11:43

The ODC SQL and PL/SQL forum came up with an example a couple of days ago that highlighted an annoying limitation in the optimizer’s handling of table functions. The requirement was for a piece of SQL that would generate “installments” information from a table of contract agreements and insert into another table any installments that were not yet recorded there.

The mechanism to turn a single row of contract data into a set of installments was an (optionally pipelined) table function that involved some business logic that (presumably) dealt with the timing and size of the installments. The final SQL to create the data that needed to be inserted was reported as follows (though it had clearly been somewhat modified):

select 
        y.*,
        trunc(sysdate,'mm'),
        user
from 
        table_a a
join 
        table(
                my_function(
                        a.loan_acct_nbr, 
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on 
        a.loan_acct_nbr = y.loan_acct_nbr
where 
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')       -- last month
and     a.loan_typ   = 'ABC'
and     not exists ( 
                select  1 
                from
                        final_load_table l
                where
                        l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd 
        )
;

In this query, table_a is the set of contracts, final_load_table is the set of installments, and my_function() is the pipelined function returning a table of installments derived from the start date, maturity date, and number of installments for a contract. The query needs a “not exists” subquery to eliminate any installments that are already known to the database. Once this query is operating efficiently it could be used either to drive a PL/SQL loop or (generally the better strategy) to do a simple “insert as select”.
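
For reference, a minimal sketch of the "insert as select" form, reusing the reported query. The target column list is my guess: load_dt and load_user are made-up names for the two audit values, and the real list would depend on the definition of final_load_table, which we were never shown.

insert into final_load_table (
        loan_acct_nbr,
        yr_mnth_cd,
        load_dt,        -- hypothetical column name
        load_user       -- hypothetical column name
)
select
        y.loan_acct_nbr,
        y.yr_mnth_cd,
        trunc(sysdate,'mm'),
        user
from
        table_a a
join
        table(
                my_function(
                        a.loan_acct_nbr,
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on
        a.loan_acct_nbr = y.loan_acct_nbr
where
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')
and     a.loan_typ   = 'ABC'
and     not exists (
                select  1
                from    final_load_table l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;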

We were told that the function would return around 60 rows per contract; that the starting size of the final_load_table would be about 60M rows, and the size of the result set from the initial join would be about 60M or 70M rows (which suggests about 1M rows acquired from table_a).

The owner of this problem seemed to be saying that the query had returned no data after 4 or 5 hours – which suggests (a) the optimizer chose a bad plan and/or (b) the PL/SQL function is working very inefficiently and/or (c) the join had generated a vast amount of data but the effect of the subquery was to discard all of it.

Clearly we needed to see an execution plan (preferably from SQL Monitor) and be given some idea of how much of the 60M/70M rows predicted for the join would be discarded because it already existed.
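
(As an aside, and not part of the original request: where SQL Monitor is licensed and the sql_id is known, a text report can be pulled with dbms_sqltune; the sql_id below is just a placeholder.)

select
        dbms_sqltune.report_sql_monitor(
                sql_id => '0123456789abc',      -- placeholder, not a real sql_id
                type   => 'TEXT'
        )
from
        dual;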

The owner did send us an execution plan – which included a very obvious threat and suggested a couple of optimizer errors – but supplied it as a picture rather than as flat text.

You’ll notice, of course, that the tables and columns have changed from the original statement. More significantly, though, there are serious problems with the numbers – the estimated row count for the basic join is only 5,326 rather than 50 Million which, at first sight, is probably why the optimizer has decided that a filter subquery (operation 1) to do an index-only probe (operation 5) is a good way of handling the subquery. Perhaps if the estimates had been a little more accurate (e.g. through the addition of a couple of column groups or, if necessary, by opt_estimate() or cardinality() hints) the subquery would have been unnested and turned into a hash anti-join.

I suggested a quick test of a suitable cardinality() hint – but ran up a little model to check that I’d got the hint right – and found that I had, but it wasn’t going to help. So I decided to write up the model (and a possible solution for the owner of the problem) in this blog note.

Here’s the code to create the set of objects I’m going to work with. The naming follows that used in the original statement of the problem as reported by the owner:


rem
rem     Script:         table_function_plan.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table table_a (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        start_dt        date,
        maturity_dt     date    ,
        number_of_terms number,
        constraint ta_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

execute dbms_random.seed(0)

insert /*+ append */
into    table_a
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        sysdate-(365-mod(rownum,365)),
        sysdate+(1500+mod(rownum,365)),
        60
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table final_load_table_l(
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200),
        constraint lt_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

insert /*+ append */ into final_load_table_l
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        lpad(rownum,10),
        lpad('x',200,'x')
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'table_a',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'final_load_table_l',
                method_opt  => 'for all columns size 1'
        );
end;
/

create type my_row_type as object (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200)
);
/

create type my_table_type as table of my_row_type;
/

create  or replace function my_function (
        i_loan_acct_nbr         in      number,
        i_yr_mnth_cd            in      varchar2,
        i_start_dt              in      date,
        i_maturity_dt           in      date,
        i_number_of_terms       in      number
)       return  my_table_type pipelined
as
begin
        for i in 1..i_number_of_terms loop
                pipe row (
                        my_row_type(
                                i_loan_acct_nbr,
                                to_char(i_start_dt+32*i,'yyyymm'),
                                i,
                                lpad('x',200,'x')
                        )
                );
        end loop;
        return;
end;
/

I was planning to create some large tables – hence the option to generate 16M rows from my generator CTEs – but I discovered the critical problem almost as soon as I had some data and code in place, so I didn’t need to go large.

I’ve had to create an object type and table type in order to create a pipelined function that returns the table type by piping rows of the object type. The data I’ve created, and the way the function generates data probably doesn’t bear much resemblance to the real system of course, but I don’t think it needs to be very close to make the crucial point.
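
As a quick standalone sanity check of the function (my addition; the argument values are arbitrary), it can be queried directly and pipes one row per term:

-- Arbitrary arguments: account 1, month '202001', start today,
-- maturity in 1500 days, 3 terms - expect 3 rows back
select  *
from    table(my_function(1, '202001', sysdate, sysdate + 1500, 3));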

Here’s the text of the select statement the OP wants to run, with the execution plan I got from my data set after running the query and pulling the plan from memory:

alter session set statistics_level = all;

select 
        /*+ find this 1 */
        y.* 
from 
        table_a a, 
        table(my_function(
                a.loan_acct_nbr,
                a.yr_mnth_cd,
                a.start_dt,
                a.maturity_dt,
                a.number_of_terms
        )) y
where 
        a.yr_mnth_cd = '202001'
and     not exists (
                select  
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |  5059K(100)|  14580 |00:00:00.15 |   16330 |
|*  1 |  FILTER                             |             |      1 |        |            |  14580 |00:00:00.15 |   16330 |
|   2 |   NESTED LOOPS                      |             |      1 |   6283K| 10421   (8)|  14580 |00:00:00.11 |     335 |
|*  3 |    TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     297 |
|   4 |    COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    14   (8)|  14580 |00:00:00.10 |      38 |
|*  5 |   INDEX UNIQUE SCAN                 | LT_PK       |  14580 |      1 |     1   (0)|      0 |00:00:00.02 |   15995 |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NULL)
   3 - filter("A"."YR_MNTH_CD"='202001')
   5 - access("L"."LOAN_ACCT_NBR"=:B1 AND "L"."YR_MNTH_CD"=:B2)


I’ve put in a hint to tell the optimizer to unnest the subquery – and it didn’t. Oracle does not ignore hints (unless they’re illegal, or out of context, or the optimizer never got to them, or you’ve found a bug) so why did Oracle appear to ignore this hint? There’s a really nice feature in 19.3 execution plans – you can request a hint report for a plan, and here’s the hint report for this query (ignore the bits about “find this” being an error):


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1), E - Syntax error (2))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  find
         E -  this

   5 -  SEL$3
         U -  unnest / Invalid correlated predicates


I put in an /*+ unnest */ hint to unnest the subquery, and I’ve been told that the predicates are not valid. The only thing about them that could be invalid is that they come from a pipelined function that has returned an object type. The pipelined function does not behave exactly like a table. But this gives me a clue about forcing the unnest to happen – hide the pipelined function inside a non-mergeable view.


select
        /*+ find this 2 */
        v.*
from    (
        select  /*+ no_merge */
                y.* 
        from 
                table_a a, 
                table(my_function(
                        a.loan_acct_nbr,
                        a.yr_mnth_cd,
                        a.start_dt,
                        a.maturity_dt,
                        a.number_of_terms
                )) y
        where 
                a.yr_mnth_cd = '202001'
        )       v
where   not exists (
                select
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = v.loan_acct_nbr
                and     l.yr_mnth_cd    = v.yr_mnth_cd
        )
/


------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        | 10628 (100)|  14580 |00:00:00.12 |     387 |       |       |          |
|*  1 |  HASH JOIN RIGHT ANTI                |             |      1 |   6283K| 10628  (10)|  14580 |00:00:00.12 |     387 |  1878K|  1878K| 2156K (0)|
|   2 |   INDEX FAST FULL SCAN               | LT_PK       |      1 |  10000 |     6  (17)|  10000 |00:00:00.01 |      91 |       |       |          |
|   3 |   VIEW                               |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.11 |     296 |       |       |          |
|   4 |    NESTED LOOPS                      |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.10 |     296 |       |       |          |
|*  5 |     TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     296 |       |       |          |
|   6 |     COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    13   (0)|  14580 |00:00:00.10 |       0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."LOAN_ACCT_NBR"="V"."LOAN_ACCT_NBR" AND "L"."YR_MNTH_CD"="V"."YR_MNTH_CD")
   5 - filter("A"."YR_MNTH_CD"='202001')

I’ve wrapped the basic join between table_a and the table function in an inline view called (unimaginatively) v, and I’ve added a /*+ no_merge */ hint to that inline view. So the main query becomes a select from a non-mergeable view with a “not exists” subquery applied to a couple of view columns. And Oracle thinks that that’s okay – and my unnest hint encouraged the optimizer to use a hash anti-join.

So here’s an option for the owner of the problem – but with a few outstanding questions: will a rewrite of their query in this form behave the same way, will the performance of the select be good enough, and will the subsequent “insert as select” keep the same driving plan?
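For illustration only – a sketch of the shape such an insert might take, using the table and column names from my model (the OP’s real load logic isn’t visible here, so treat this purely as the pattern to test, not their statement):

insert /*+ append */ into final_load_table_l
select
        v.loan_acct_nbr, v.yr_mnth_cd, v.v1, v.padding
from    (
        select  /*+ no_merge */
                y.*
        from
                table_a a,
                table(my_function(
                        a.loan_acct_nbr,
                        a.yr_mnth_cd,
                        a.start_dt,
                        a.maturity_dt,
                        a.number_of_terms
                )) y
        where
                a.yr_mnth_cd = '202001'
        )       v
where   not exists (
                select  /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = v.loan_acct_nbr
                and     l.yr_mnth_cd    = v.yr_mnth_cd
        )
;

Whether the optimizer keeps the hash anti-join once the statement becomes DML is exactly the kind of thing that has to be checked against the real data.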

There’s one significant detail to worry about: the build table in this hash (anti-)join is going to be constructed from 50M (load_acct_bfr, yr_mnth_cd) pairs – which means the workarea requirement is likely to be about 1.2GB for an optimal (i.e. in-memory) hash join; otherwise the join may spill to disc and do a lot of I/O – probably as a one-pass hash join.
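As a rough sanity check on that figure – assuming something in the region of 24 bytes per (number, varchar2(6)) pair once you allow for hash table overhead (my assumption, not a measured value):

select
        round(50e6 * 24 / power(1024,3), 2)     est_build_gb
from
        dual
;

-- EST_BUILD_GB
-- ------------
--         1.12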

(Did you notice, by the way, that the word storage appeared at operation 3 in the original plan?  That suggests a nice big Exadata box; however, there’s no storage predicate in the Predicate Information section for that operation and you would have thought that lease_type = ‘US_SSP’ would be pushed to storage, so maybe this is a ZFS Pillar backing a less exotic server.)

Conclusion

Some (if not all) types of correlated subqueries behave badly if the correlation predicates involve table functions. But you may be able to work around the issue by hiding part of the query, including the table function, inside a non-mergeable inline view before applying the subquery to the view.

Footnote

When I realised that the table function was causing a problem unnesting I remembered that I had written about a similar problem a few years ago – after searching for a little while I discovered a draft note that I had started in September 2015 but had not quite finished; so I’ll be finishing it off and publishing it some time in the next few days.

A decade in data, a decade to come

Rittman Mead Consulting - Fri, 2020-01-10 08:52

First of all I’d like to take the chance to wish everyone in our network, colleagues, customers and friends a very happy new year.

The last decade was an incredible one for the data industry.  I realised before writing this that 2010 was actually when I started at my first data management consultancy, so it’s given me the perfect chance to reflect on what I’ve witnessed in the last 10 years.  

Data Insights out of IT and into the business

One thing that stands out is the relevance of insights through data within business functions.  The reason we do what we do at Rittman Mead is that we believe data solutions will drive positive change for us all in the future.  Making data accessible, insightful and interesting to people who are doing great things for great companies is the goal for all of us.

During the last decade we saw the mass market adoption of line of business data discovery tools such as Tableau, Oracle Data Visualization, QlikSense & Microsoft PowerBI.  All wonderfully innovative products that have flown the flag for bringing insights through data into the business.  

Oracle DV Geo Layers

What is Big Data?

Also, what has happened to the Big Data hype, which was huge back in 2010?

Ten years on, do we think it lived up to its hype?  Do we even know what it means? On reflection, was it as groundbreaking as we might have thought at the time?  This is an argument that could be had but might never end.

Public Cloud Cover

One thing is for sure - the adoption at scale of public Cloud, SaaS, storage and what seems like infinite amounts of processing power has been huge since 2010.  We've seen AWS and Microsoft Azure leading the way, and we see Google and Oracle Cloud Infrastructure catching up.  We know there are pros, cons, knowns and unknowns when moving to Cloud-hosted applications and platforms.  And one thing is certain - come 2030 the Cloud will be bigger, whether we like it or not.

Streams of Events

Finally, the emergence of event-driven architectures is as innovative as it is complex. High-growth companies like Uber have built their business model on the ability to process huge volumes of events at low latency in order to operate.  Think demand + supply and Uber’s dynamic pricing model.  Fortunately, open source projects like Kafka - and particularly organisations like Confluent, the company behind Kafka - have made it easier for businesses throughout the world to integrate this functionality.

Analysing stream events in realtime using Kafka KSQL

It wouldn’t be fair to say that the innovation in the data industry has always been used ethically in the last 10 years, and this is something that we all have a part to play in going forwards.  The Facebook / Cambridge Analytica scandal certainly made everyone reflect on the importance of using data for positive and ethical outcomes rather than underhand and opaque tactics.

And what’s next?

As we embark on a new decade, we at Rittman Mead are excited about what is to come.  Artificial Intelligence, Machine Learning and Augmented Analytics are going to hit the mainstream and will play a part in all of our lives.  But the good news is - there will always be a place in this market for people.  Data is necessary and sophisticated machines are necessary, but it all relies on people to work, whether that be building an algorithm or making a decision based on the information that a particular algorithm has generated.  We will see more automation, and this is a brilliant thing.  Robotic Process Automation (RPA) and autonomous databases such as Oracle’s offering are designed to breed innovation, not replacement.

Come and meet us in February

For us it all starts in February at Oracle OpenWorld Europe.  We will be exhibiting and presenting there, and there will be insightful, thought-provoking and relatable content throughout the two days.  Augmented analytics and autonomous databases will be at the top of the agenda, and rightly so.  What’s more, it’s completely free to attend. Registration details can be found here:

We’d love to meet with you if you do decide to come.  To make this easier (as we know there is a lot to cover on days like these) we have set up a calendar where you can book a meeting slot with one of our sales team or technical consultants.  We can talk about:

  • Oracle
  • AI/ML
  • What are we doing?
  • What are you doing?
  • Anything you like in an informal setting

Please feel free to book a time slot using the link below:

https://calendly.com/rittman-mead/oracle-openworld

Finally we wish everyone a very prosperous new decade - who knows where we will be by 2030!

Categories: BI & Warehousing

Home Shopping Retailer Turns Browsers into Buyers with Oracle Cloud

Oracle Press Releases - Fri, 2020-01-10 08:00
Press Release
Home Shopping Retailer Turns Browsers into Buyers with Oracle Cloud
South African online retailer Home Choice remodels its planning processes and modernizes the shopping experience to drive sales and customer satisfaction

Redwood Shores, Calif.—Jan 10, 2020

Home Choice, one of South Africa’s largest home shopping retailers and financing providers, has deployed Oracle Retail Cloud in just 12 weeks to deliver the products and offers that turn browsers into buyers. Shoppers look to Home Choice to discover name-brand furniture, electronics, and appliances at competitive prices. Offering a wide range of easy finance options, the retailer engages customers in finding and purchasing housewares via its website, catalog, and call centers. With Oracle Retail, the Home Choice team is shifting from legacy, manual processes to a best-in-class planning and optimization solution that unifies processes and integrates data science into decision making to drive more sales and better customer satisfaction.

“Home Choice is focused on delivering a compelling assortment of products that meets market demand across multiple geographies. The Oracle Retail Cloud Services allow us to adopt a more scientific approach to planning and markdowns quickly,” said Dirk Oberholster, chief information officer, Home Choice. “The flexibility of the solution also enables us to configure and extend capabilities to meet business requirements without touching the base code.”

In Oracle’s “Setting the Bar: Global Customer Experience Trends 2019”, nearly 16,000 global consumers surveyed are seeking preferential treatment based on their relationship with a brand. Roughly half (48 percent) say that offers or discounts better than what anyone else can get, based on their loyalty to that retailer, are ‘absolutely essential’. With a single view of inventory, Home Choice will be able to understand how customers interact with the brand to make better buying decisions and deliver relevant offers that compel a passive shopper to purchase.

“Oracle Retail is empowering the Home Choice team to deliver the right products to the right location and channels at a compelling price using science and optimization,” said Mike Webster, senior vice president and general manager, Oracle Retail. “As Home Choice places the customer at the center of the value chain, Oracle will enable them to interact with customers on a more personalized level.”

Home Choice is adopting Oracle Retail Merchandise Financial Planning Cloud Service, Oracle Customer Experience (CX) Commerce, Oracle Retail Assortment Planning Cloud Service, Oracle Retail Science Platform Cloud Service, and Oracle Retail Offer Optimization Cloud Service. Home Choice is centralizing its planning process by adopting the best practices built into Oracle Retail Merchandise Financial Planning to drive inventory productivity and profit. Additionally, the modern architecture of Oracle CX Commerce leverages the power of data to curate engaging experiences that drive online conversion rates and in-store traffic.

In the first phase, the team deployed Oracle Retail Merchandise Financial Planning Cloud Service with support from Cognira, a Gold level member of Oracle PartnerNetwork (OPN). Simultaneously, Oracle Retail Consulting worked to finalize the implementation of Oracle CX Commerce. The company evaluated competing products and selected Oracle based on its deep retail functionality, cloud offerings, and proven ability to support growing businesses.

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Single Value Column Frequency Histogram Oracle 12c and later

Randolf Geist - Thu, 2020-01-09 09:58
It is hopefully well known by now that Oracle introduced in version 11g a new algorithm to gather statistics on a table that no longer requires sorting to determine the critical Number Of Distinct Values (NDV) figure - instead it uses a clever "approximate NDV" algorithm which always reads 100% of the table data and therefore in principle generates very accurate statistics. This new algorithm gets used only when the ESTIMATE_PERCENT parameter to the DBMS_STATS.GATHER*STATS calls is left at default or explicitly passed as "DBMS_STATS.AUTO_SAMPLE_SIZE". This new algorithm is also required if other new features like "Incremental Statistics" are to be used.
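Just to make that distinction explicit - a minimal sketch using a placeholder table name T1 (not part of the test case below): only the first form qualifies for the new code path, any explicitly specified percentage falls back to the old code:

-- new "approximate NDV" code path (same as leaving the parameter at default)
exec dbms_stats.gather_table_stats(null, 't1', estimate_percent => dbms_stats.auto_sample_size)

-- old code path requiring sorting - any explicit percentage disables approximate NDV
exec dbms_stats.gather_table_stats(null, 't1', estimate_percent => 10)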

In 12c Oracle improved this algorithm allowing the generation of Frequency and the new Top Frequency histogram types in a single pass. The new Hybrid histogram type still requires a separate pass.
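Which histogram type actually got created can be checked in the dictionary afterwards - for example against the test table used below:

select column_name, histogram, num_buckets, sample_size
from user_tab_col_statistics
where table_name = 'TEST_STATS';

The HISTOGRAM column shows NONE, FREQUENCY, TOP-FREQUENCY or HYBRID (or HEIGHT BALANCED for the old style).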

11g always required a separate pass per histogram to be created - no matter what type of histogram (in 11g there was only Frequency and Height-Balanced) - which resulted in quite aggressive sampling being used for those separate passes to minimize their time and resource usage: typically just 5,500 rows, only sized up in case there were many NULL values (a very small sample size for larger tables). Note that this aggressive sampling only applies to the new "approximate NDV" code path - if you specify any explicit ESTIMATE_PERCENT Oracle uses the old code (which requires sorting for determining the NDV figure) and therefore the separate passes required to generate histograms are based on the same sampling percentage as used for the basic table and column statistics - in fact Oracle can create a Global Temporary Table in a separate pass covering the required data from several columns in this case to avoid repeatedly scanning the table again and again.

I recently came across an edge case at a client that showed that the new code has a flaw in the special case of columns that have only a single value on table or (sub)partition level.

First of all, in my opinion a (Frequency) histogram doesn't add any value in this special case of a column having only a single value - everything required can be determined from the basic column statistics anyway: low and high value are the same, NDV is 1, and the number of NULLs should be all that is needed for proper cardinality estimates on such columns.
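To illustrate the point - for such a column the basic dictionary statistics already tell the whole story, for example (ALMOST_NULL is the single value column of the test case below):

select column_name, num_distinct, num_nulls, density, low_value, high_value
from user_tab_col_statistics
where table_name = 'TEST_STATS'
and column_name = 'ALMOST_NULL';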

Now the new code path seems to be quite happy to generate histograms on all kinds of columns with a low number of distinct values, be it useful or not. Starting with 12c these histograms should all be covered by the main pass - since they can be represented by Frequency histograms when using the default bucket size of 254 - so arguably there isn't much overhead in creating them; why bother.

However, there is a flaw in the code: when the column has just a single value, the code for some (unknown) reason determines that it requires a separate pass to generate a histogram and doesn't make use of the information already gathered as part of the main pass - which should hold everything needed.

So Oracle runs a separate pass to gather information for this histogram. Usually this doesn't make much difference: the separate pass no longer uses 100% of the data but resorts to the aggressive sampling outlined above - if applicable. So usually it might take just those 5,500 rows to create a Frequency histogram on this single value column.

But in the edge case of such a single valued column that is NULL for the majority of rows, the code recognizes this and no longer uses the aggressive sampling. Instead - probably depending on the number of NULLs - it needs to read a larger proportion of the table to find some non-NULL data.

In the case of my client this was a very large table that had numerous such special edge case columns (single valued, NULL for most of the rows), which resulted in dozens of non-sampled full table scans of this very large table, taking several days to complete.
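As a side note - this is not part of the trace analysis that follows, just standard DBMS_STATS functionality - one way to defuse such a scenario is to suppress the histogram explicitly on the affected columns, for example via a table preference (a sketch assuming the ALMOST_NULL column of the test case below):

exec dbms_stats.set_table_prefs(null, 'test_stats', 'METHOD_OPT', 'for all columns size auto for columns size 1 almost_null')

That way the main pass still covers the basic column statistics and the separate histogram pass is never triggered for that column.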

With the DBMS_STATS specific tracing enabled the behaviour can be reproduced on the latest available versions (19.3 in my case) - I've used the following test case to check four different scenarios and how the code behaves:

set echo on serveroutput on size unlimited lines 800 long 800

select * from v$version;

exec dbms_stats.set_global_prefs('TRACE', 1048575)

-- Testcase 1: Single distinct value, almost all values are NULL except three rows
-- This scenario triggers a separate query to generate a histogram using no (!!) sampling
drop table test_stats purge;

create table test_stats
as
select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
rownum as id,
case when rownum <= 3 then 1 else null end as almost_null
from
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
;

exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')

-- Testcase 2: Single distinct value, but only three rows are NULL
-- This scenario triggers a separate query to generate a histogram but using the usual aggressive sampling (5500 rows)
drop table test_stats purge;

create table test_stats
as
select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
rownum as id,
case when rownum > 3 then 1 else null end as almost_null
from
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
;

exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')

-- Testcase 3: 11 distinct values, but only three rows are NULL
-- This scenario extracts the values directly from the ROWIDs returned from the main query, no separate query
drop table test_stats purge;

create table test_stats
as
select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
rownum as id,
case when rownum > 3 then trunc(rownum / 100000) else null end as almost_null
from
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
;

exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')

-- Testcase 4: Three distinct values in three rows and all other rows have NULL
-- This scenario extracts the values directly from the ROWIDs returned from the main query, no separate query
-- Applies to two distinct values, too
-- So a single distinct value looks like a special case that triggers a separate query
-- If this is with combination of almost all rows having NULLs this query doesn't use sampling
-- ! Big threat if the table is big !
drop table test_stats purge;

create table test_stats
as
select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
rownum as id,
case when rownum <= 2 then rownum else null end as almost_null
from
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
(select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
;

exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')

And the output from 19.3 looks like this - the critical parts highlighted in red and bold:

SQL*Plus: Release 11.2.0.1.0 Production on Mon Aug 19 17:07:42 2019

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl19c DELLXPS13 394 51553 19.0.0.0.0 20190819 4192 59 6232:10348 00007FFDE6AE6478 00007FFDE6573A48



SQL>
SQL> select * from v$version;

BANNER BANNER_FULL BANNER_LEGACY CON_ID
-------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ----------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production 0
Version 19.3.0.0.0


SQL>
SQL> exec dbms_stats.set_global_prefs('TRACE', 1048575)

PL/SQL procedure successfully completed.

SQL>
SQL> -- Testcase 1: Single distinct value, almost all values are NULL except three rows
SQL> -- This scenario triggers a separate query to generate a histogram using no (!!) sampling
SQL> drop table test_stats purge;
drop table test_stats purge
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> create table test_stats
2 as
3 select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
4 rownum as id,
5 case when rownum <= 3 then 1 else null end as almost_null
6 from
7 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
8 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
9 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')
DBMS_STATS: Record gather table stats operation on table : test_stats
DBMS_STATS: job name:
DBMS_STATS: |--> Operation id: 223
DBMS_STATS: gather_table_stats:
DBMS_STATS: Start gather table stats -- tabname: test_stats
DBMS_STATS: job name:
DBMS_STATS: Started table CBO_TEST.TEST_STATS. at 19.08.19 17:07:43,293000000 +02:00. Execution phase: SERIAL (NON-CONCURRENT) stattype: DATA Reporting Mode: FALSE
DBMS_STATS: Specified granularity = AUTO, Fixed granularity = GLOBAL AND PARTITION
DBMS_STATS: parameters ** pfix: (parent pfix: ) ownname: tabname: test_stats partname: estimate_percent: 101 block_sample: FALSE method_opt: for columns size 1 id for columns size 254 almost_null degree: 32766 granularity: Z gIdxGran: cascade: NULL stattab: statid: statown: no_invalidate: NULL flush_colu: TRUE fxt: stattype: DATA start_time: 08-19-2019 17:07:43 gathering_group_stats: FALSE cms_only: FALSE force: FALSE options: Z executionPhase: SERIAL (NON-CONCURRENT) incremental: FALSE concurrent: FALSE loc_degree: loc_method_opt: for columns size 1 id for columns size 254 almost_null fxt_typ: 0 tobjn: 72916
DBMS_STATS: Preferences for table CBO_TEST.TEST_STATS
DBMS_STATS: ================================================================================
DBMS_STATS: SKIP_TIME -
DBMS_STATS: STATS_RETENTION -
DBMS_STATS: MON_MODS_ALL_UPD_TIME -
DBMS_STATS: SNAPSHOT_UPD_TIME -
DBMS_STATS: TRACE - 0
DBMS_STATS: DEBUG - 0
DBMS_STATS: SYS_FLAGS - 0
DBMS_STATS: SPD_RETENTION_WEEKS - 53
DBMS_STATS: CASCADE - DBMS_STATS.AUTO_CASCADE
DBMS_STATS: ESTIMATE_PERCENT - DBMS_STATS.AUTO_SAMPLE_SIZE
DBMS_STATS: DEGREE - NULL
DBMS_STATS: METHOD_OPT - FOR ALL COLUMNS SIZE AUTO
DBMS_STATS: NO_INVALIDATE - DBMS_STATS.AUTO_INVALIDATE
DBMS_STATS: GRANULARITY - AUTO
DBMS_STATS: PUBLISH - TRUE
DBMS_STATS: STALE_PERCENT - 10
DBMS_STATS: APPROXIMATE_NDV - TRUE
DBMS_STATS: APPROXIMATE_NDV_ALGORITHM - REPEAT OR HYPERLOGLOG
DBMS_STATS: ANDV_ALGO_INTERNAL_OBSERVE - FALSE
DBMS_STATS: INCREMENTAL - FALSE
DBMS_STATS: INCREMENTAL_INTERNAL_CONTROL - TRUE
DBMS_STATS: AUTOSTATS_TARGET - AUTO
DBMS_STATS: CONCURRENT - OFF
DBMS_STATS: JOB_OVERHEAD_PERC - 1
DBMS_STATS: JOB_OVERHEAD - -1
DBMS_STATS: GLOBAL_TEMP_TABLE_STATS - SESSION
DBMS_STATS: ENABLE_TOP_FREQ_HISTOGRAMS - 3
DBMS_STATS: ENABLE_HYBRID_HISTOGRAMS - 3
DBMS_STATS: TABLE_CACHED_BLOCKS - 1
DBMS_STATS: INCREMENTAL_LEVEL - PARTITION
DBMS_STATS: INCREMENTAL_STALENESS - ALLOW_MIXED_FORMAT
DBMS_STATS: OPTIONS - GATHER
DBMS_STATS: GATHER_AUTO - AFTER_LOAD
DBMS_STATS: STAT_CATEGORY - OBJECT_STATS, REALTIME_STATS
DBMS_STATS: SCAN_RATE - 0
DBMS_STATS: GATHER_SCAN_RATE - HADOOP_ONLY
DBMS_STATS: PREFERENCE_OVERRIDES_PARAMETER - FALSE
DBMS_STATS: AUTO_STAT_EXTENSIONS - OFF
DBMS_STATS: WAIT_TIME_TO_UPDATE_STATS - 15
DBMS_STATS: ROOT_TRIGGER_PDB - FALSE
DBMS_STATS: COORDINATOR_TRIGGER_SHARD - FALSE
DBMS_STATS: MAINTAIN_STATISTICS_STATUS - FALSE
DBMS_STATS: AUTO_TASK_STATUS - OFF
DBMS_STATS: AUTO_TASK_MAX_RUN_TIME - 3600
DBMS_STATS: AUTO_TASK_INTERVAL - 900
DBMS_STATS: reporting_man_log_task: target: "CBO_TEST"."TEST_STATS" objn: 72916 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS: Start construct analyze using SQL .. Execution Phase: SERIAL (NON-CONCURRENT) granularity: GLOBAL AND PARTITION global_requested: NULL pfix:
DBMS_STATS: (Baseline)
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Start gather part -- ctx.conc_ctx.global_requested: NULL gran: GLOBAL AND PARTITION execution phase: 1
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Start gather_stats.. pfix: ownname: CBO_TEST tabname: TEST_STATS pname: spname: execution phase: 1
Specified DOP=1 blocks=1520 DOP used=1
Specified DOP=1 blocks=1520 DOP used=1
DBMS_STATS: Using approximate NDV pct=0
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y Y Y Y Y ID
DBMS_STATS: Y Y Y Y Y Y Y Y ALMOST_NULL
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Approximate NDV Options
DBMS_STATS: NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U
DBMS_STATS: Starting query at 19.08.19 17:07:43,386000000 +02:00
DBMS_STATS: select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */to_char(count("ID")),substrb(dump(min("ID"),16,0,64),1,240),substrb(dump(max("ID"),16,0,64),1,240),to_char(count("ALMOST_NULL")),substrb(dump(min("ALMOST_NULL"),16,0,64),1,240),substrb(dump(max("ALMOST_NULL"),16,0,64),1,240),count(rowidtochar(rowid)) from "CBO_TEST"."TEST_STATS" t /* NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U*/
DBMS_STATS: Ending query at 19.08.19 17:07:43,589000000 +02:00
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Approximate NDV Result
DBMS_STATS:

0
1000000
1000000
6
1001984
1000000
3979802


1
Typ=2 Len=2: c1,2


2
Typ=2 Len=2: c4,2


3
3
0
1
3
6
1
AAARzUAAPAAAACBAAA,3,


4
Typ=2 Len=2: c1,2



5
Typ=2 Len=2: c1,2



DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min:
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min:
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 6 3 NULL NULL NULL 1 NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: start processing top n values for column "ALMOST_NULL"
DBMS_STATS: >> frequency histograms is not feasible
(dbms_lob.getlength(topn_values) <=
length(
)), skip!
DBMS_STATS: no histogram: setting density to 1/ndv (,000001)
DBMS_STATS: no histogram: setting density to 1/ndv (1)
DBMS_STATS: Iteration: 1 numhist: 1
DBMS_STATS: Iteration 1, percentage 100 nblks: 1520
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y ID
DBMS_STATS: Y Y Y Y ALMOST_NULL
Specified DOP=1 blocks=1520 DOP used=1
DBMS_STATS: Building Histogram for ALMOST_NULL
DBMS_STATS: bktnum=254, nnv=3, snnv=3, sndv=1, est_ndv=1, mnb=254
DBMS_STATS: Trying frequency histogram
DBMS_STATS: Starting query at 19.08.19 17:07:43,652000000 +02:00
DBMS_STATS: select substrb(dump(val,16,0,64),1,240) ep, cnt from (select /*+ no_expand_table(t) index_rs(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */"ALMOST_NULL" val,count(*) cnt from "CBO_TEST"."TEST_STATS" t where "ALMOST_NULL" is not null group by "ALMOST_NULL") order by val
DBMS_STATS: Evaluating frequency histogram for col: "ALMOST_NULL"
DBMS_STATS: number of values = 1, max # of buckects = 254, pct = 100, ssize = 3
DBMS_STATS: csr.hreq: 1 Histogram gathering flags: 1031
DBMS_STATS: Start fill_cstats - hybrid_enabled: TRUE
DBMS_STATS: ====================================================================================================
DBMS_STATS: Statistics from clist for CBO_TEST.TEST_STATS:
DBMS_STATS: ====================================================================================================
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 4,979802 1000000 0 1000000 1000000 1000000 ,000001 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 1,000006 3 999997 3 1 1 ,16666666 1030 1 1
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: Histograms:
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: BVAL RPCNT EAVAL ENVAL EDVAL
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: 3 C102 1 Typ=2 Len=2: c1,2
DBMS_STATS: Need Actual Values (DSC_EAVS)
DBMS_STATS: Histogram Type: FREQUENCY Data Type: 2
DBMS_STATS: Histogram Flags: 4100 Histogram Gathering Flags: 1030
DBMS_STATS: Incremental: FALSE Fix Control 13583722: 1
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: Number of rows in the table = 1000000, blocks = , average row length = 6, chain count = , scan rate = 0, sample size = 1000000, cstats.count = 3, cind = 3
DBMS_STATS: prepare reporting structures...
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ID part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ALMOST_NULL part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: End of construct analyze using sql.
DBMS_STATS: Finished table CBO_TEST.TEST_STATS. at 19.08.19 17:07:43,761000000 +02:00
DBMS_STATS: reporting_man_update_task: objn: 72916 auto_stats: FALSE status: COMPLETED ctx.batching_coeff: 0

PL/SQL procedure successfully completed.

SQL>
SQL> -- Testcase 2: Single distinct value, but only three rows are NULL
SQL> -- This scenario triggers a separate query to generate a histogram but using the usual aggressive sampling (5500 rows)
SQL> drop table test_stats purge;

Table dropped.

SQL>
SQL> create table test_stats
2 as
3 select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
4 rownum as id,
5 case when rownum > 3 then 1 else null end as almost_null
6 from
7 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
8 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
9 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')
DBMS_STATS: Record gather table stats operation on table : test_stats
DBMS_STATS: job name:
DBMS_STATS: |--> Operation id: 224
DBMS_STATS: gather_table_stats:
DBMS_STATS: Start gather table stats -- tabname: test_stats
DBMS_STATS: job name:
DBMS_STATS: Started table CBO_TEST.TEST_STATS. at 19.08.19 17:07:44,511000000 +02:00. Execution phase: SERIAL (NON-CONCURRENT) stattype: DATA Reporting Mode: FALSE
DBMS_STATS: Specified granularity = AUTO, Fixed granularity = GLOBAL AND PARTITION
DBMS_STATS: parameters ** pfix: (parent pfix: ) ownname: tabname: test_stats partname: estimate_percent: 101 block_sample: FALSE method_opt: for columns size 1 id for columns size 254 almost_null degree: 32766 granularity: Z gIdxGran: cascade: NULL stattab: statid: statown: no_invalidate: NULL flush_colu: TRUE fxt: stattype: DATA start_time: 08-19-2019 17:07:44 gathering_group_stats: FALSE cms_only: FALSE force: FALSE options: Z executionPhase: SERIAL (NON-CONCURRENT) incremental: FALSE concurrent: FALSE loc_degree: loc_method_opt: for columns size 1 id for columns size 254 almost_null fxt_typ: 0 tobjn: 72920
DBMS_STATS: Preferences for table CBO_TEST.TEST_STATS
DBMS_STATS: ================================================================================
DBMS_STATS: SKIP_TIME -
DBMS_STATS: STATS_RETENTION -
DBMS_STATS: MON_MODS_ALL_UPD_TIME -
DBMS_STATS: SNAPSHOT_UPD_TIME -
DBMS_STATS: TRACE - 0
DBMS_STATS: DEBUG - 0
DBMS_STATS: SYS_FLAGS - 0
DBMS_STATS: SPD_RETENTION_WEEKS - 53
DBMS_STATS: CASCADE - DBMS_STATS.AUTO_CASCADE
DBMS_STATS: ESTIMATE_PERCENT - DBMS_STATS.AUTO_SAMPLE_SIZE
DBMS_STATS: DEGREE - NULL
DBMS_STATS: METHOD_OPT - FOR ALL COLUMNS SIZE AUTO
DBMS_STATS: NO_INVALIDATE - DBMS_STATS.AUTO_INVALIDATE
DBMS_STATS: GRANULARITY - AUTO
DBMS_STATS: PUBLISH - TRUE
DBMS_STATS: STALE_PERCENT - 10
DBMS_STATS: APPROXIMATE_NDV - TRUE
DBMS_STATS: APPROXIMATE_NDV_ALGORITHM - REPEAT OR HYPERLOGLOG
DBMS_STATS: ANDV_ALGO_INTERNAL_OBSERVE - FALSE
DBMS_STATS: INCREMENTAL - FALSE
DBMS_STATS: INCREMENTAL_INTERNAL_CONTROL - TRUE
DBMS_STATS: AUTOSTATS_TARGET - AUTO
DBMS_STATS: CONCURRENT - OFF
DBMS_STATS: JOB_OVERHEAD_PERC - 1
DBMS_STATS: JOB_OVERHEAD - -1
DBMS_STATS: GLOBAL_TEMP_TABLE_STATS - SESSION
DBMS_STATS: ENABLE_TOP_FREQ_HISTOGRAMS - 3
DBMS_STATS: ENABLE_HYBRID_HISTOGRAMS - 3
DBMS_STATS: TABLE_CACHED_BLOCKS - 1
DBMS_STATS: INCREMENTAL_LEVEL - PARTITION
DBMS_STATS: INCREMENTAL_STALENESS - ALLOW_MIXED_FORMAT
DBMS_STATS: OPTIONS - GATHER
DBMS_STATS: GATHER_AUTO - AFTER_LOAD
DBMS_STATS: STAT_CATEGORY - OBJECT_STATS, REALTIME_STATS
DBMS_STATS: SCAN_RATE - 0
DBMS_STATS: GATHER_SCAN_RATE - HADOOP_ONLY
DBMS_STATS: PREFERENCE_OVERRIDES_PARAMETER - FALSE
DBMS_STATS: AUTO_STAT_EXTENSIONS - OFF
DBMS_STATS: WAIT_TIME_TO_UPDATE_STATS - 15
DBMS_STATS: ROOT_TRIGGER_PDB - FALSE
DBMS_STATS: COORDINATOR_TRIGGER_SHARD - FALSE
DBMS_STATS: MAINTAIN_STATISTICS_STATUS - FALSE
DBMS_STATS: AUTO_TASK_STATUS - OFF
DBMS_STATS: AUTO_TASK_MAX_RUN_TIME - 3600
DBMS_STATS: AUTO_TASK_INTERVAL - 900
DBMS_STATS: reporting_man_log_task: target: "CBO_TEST"."TEST_STATS" objn: 72920 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS: Start construct analyze using SQL .. Execution Phase: SERIAL (NON-CONCURRENT) granularity: GLOBAL AND PARTITION global_requested: NULL pfix:
DBMS_STATS: (Baseline)
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Start gather part -- ctx.conc_ctx.global_requested: NULL gran: GLOBAL AND PARTITION execution phase: 1
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Start gather_stats.. pfix: ownname: CBO_TEST tabname: TEST_STATS pname: spname: execution phase: 1
Specified DOP=1 blocks=1794 DOP used=1
Specified DOP=1 blocks=1794 DOP used=1
DBMS_STATS: Using approximate NDV pct=0
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y Y Y Y Y ID
DBMS_STATS: Y Y Y Y Y Y Y Y ALMOST_NULL
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Approximate NDV Options
DBMS_STATS: NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U
DBMS_STATS: Starting query at 19.08.19 17:07:44,527000000 +02:00
DBMS_STATS: select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */to_char(count("ID")),substrb(dump(min("ID"),16,0,64),1,240),substrb(dump(max("ID"),16,0,64),1,240),to_char(count("ALMOST_NULL")),substrb(dump(min("ALMOST_NULL"),16,0,64),1,240),substrb(dump(max("ALMOST_NULL"),16,0,64),1,240),count(rowidtochar(rowid)) from "CBO_TEST"."TEST_STATS" t /* NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U*/
DBMS_STATS: Ending query at 19.08.19 17:07:44,730000000 +02:00
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Approximate NDV Result
DBMS_STATS:

0
1000000
1000000
6
1001984
1000000
3979802


1
Typ=2 Len=2: c1,2


2
Typ=2 Len=2: c4,2


3
999997
0
1
999997
1999994
1
AAARzYAAPAAAACBAAD,999997,


4
Typ=2 Len=2: c1,2



5
Typ=2 Len=2: c1,2



DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min:
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min:
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 1999994 999997 NULL NULL NULL 1 NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: start processing top n values for column "ALMOST_NULL"
DBMS_STATS: >> frequency histograms is not feasible
(dbms_lob.getlength(topn_values) <=
length(
)), skip!
DBMS_STATS: no histogram: setting density to 1/ndv (,000001)
DBMS_STATS: no histogram: setting density to 1/ndv (1)
DBMS_STATS: Iteration: 1 numhist: 1
DBMS_STATS: Iteration 1, percentage ,5500016500049500148500445501336504009512 nblks: 1794
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y ID
DBMS_STATS: Y Y Y Y ALMOST_NULL
Specified DOP=1 blocks=1794 DOP used=1
DBMS_STATS: Building Histogram for ALMOST_NULL
DBMS_STATS: bktnum=254, nnv=999997, snnv=5500, sndv=1, est_ndv=1, mnb=254
DBMS_STATS: Trying frequency histogram
DBMS_STATS: Starting query at 19.08.19 17:07:44,730000000 +02:00
DBMS_STATS: select substrb(dump(val,16,0,64),1,240) ep, cnt from (select /*+ no_expand_table(t) index_rs(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */"ALMOST_NULL" val,count(*) cnt from "CBO_TEST"."TEST_STATS" sample ( .5500016500) t where "ALMOST_NULL" is not null group by "ALMOST_NULL") order by val
DBMS_STATS: Evaluating frequency histogram for col: "ALMOST_NULL"
DBMS_STATS: number of values = 1, max # of buckects = 254, pct = ,5500016500049500148500445501336504009512, ssize = 5499
DBMS_STATS: csr.hreq: 1 Histogram gathering flags: 1031
DBMS_STATS: Start fill_cstats - hybrid_enabled: TRUE
DBMS_STATS: ====================================================================================================
DBMS_STATS: Statistics from clist for CBO_TEST.TEST_STATS:
DBMS_STATS: ====================================================================================================
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 4,979802 1000000 0 1000000 1000000 1000000 ,000001 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 2,999994 999997 3 5499 1 1 ,00000050 1030 1 1
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,2
DBMS_STATS: Histograms:
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: BVAL RPCNT EAVAL ENVAL EDVAL
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: 5499 C102 1 Typ=2 Len=2: c1,2
DBMS_STATS: Need Actual Values (DSC_EAVS)
DBMS_STATS: Histogram Type: FREQUENCY Data Type: 2
DBMS_STATS: Histogram Flags: 4100 Histogram Gathering Flags: 1030
DBMS_STATS: Incremental: FALSE Fix Control 13583722: 1
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: Number of rows in the table = 1000000, blocks = , average row length = 8, chain count = , scan rate = 0, sample size = 1000000, cstats.count = 3, cind = 3
DBMS_STATS: prepare reporting structures...
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ID part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ALMOST_NULL part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: End of construct analyze using sql.
DBMS_STATS: Finished table CBO_TEST.TEST_STATS. at 19.08.19 17:07:44,761000000 +02:00
DBMS_STATS: reporting_man_update_task: objn: 72920 auto_stats: FALSE status: COMPLETED ctx.batching_coeff: 0

PL/SQL procedure successfully completed.

SQL>
SQL> -- Testcase 3: 11 distinct values, but only three rows are NULL
SQL> -- This scenario extracts the values directly from the ROWIDs returned from the main query, no separate query
SQL> drop table test_stats purge;

Table dropped.

SQL>
SQL> create table test_stats
2 as
3 select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
4 rownum as id,
5 case when rownum > 3 then trunc(rownum / 100000) else null end as almost_null
6 from
7 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
8 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
9 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')
DBMS_STATS: Record gather table stats operation on table : test_stats
DBMS_STATS: job name:
DBMS_STATS: |--> Operation id: 225
DBMS_STATS: gather_table_stats:
DBMS_STATS: Start gather table stats -- tabname: test_stats
DBMS_STATS: job name:
DBMS_STATS: Started table CBO_TEST.TEST_STATS. at 19.08.19 17:07:45,448000000 +02:00. Execution phase: SERIAL (NON-CONCURRENT) stattype: DATA Reporting Mode: FALSE
DBMS_STATS: Specified granularity = AUTO, Fixed granularity = GLOBAL AND PARTITION
DBMS_STATS: parameters ** pfix: (parent pfix: ) ownname: tabname: test_stats partname: estimate_percent: 101 block_sample: FALSE method_opt: for columns size 1 id for columns size 254 almost_null degree: 32766 granularity: Z gIdxGran: cascade: NULL stattab: statid: statown: no_invalidate: NULL flush_colu: TRUE fxt: stattype: DATA start_time: 08-19-2019 17:07:45 gathering_group_stats: FALSE cms_only: FALSE force: FALSE options: Z executionPhase: SERIAL (NON-CONCURRENT) incremental: FALSE concurrent: FALSE loc_degree: loc_method_opt: for columns size 1 id for columns size 254 almost_null fxt_typ: 0 tobjn: 72921
DBMS_STATS: Preferences for table CBO_TEST.TEST_STATS
DBMS_STATS: ================================================================================
DBMS_STATS: SKIP_TIME -
DBMS_STATS: STATS_RETENTION -
DBMS_STATS: MON_MODS_ALL_UPD_TIME -
DBMS_STATS: SNAPSHOT_UPD_TIME -
DBMS_STATS: TRACE - 0
DBMS_STATS: DEBUG - 0
DBMS_STATS: SYS_FLAGS - 0
DBMS_STATS: SPD_RETENTION_WEEKS - 53
DBMS_STATS: CASCADE - DBMS_STATS.AUTO_CASCADE
DBMS_STATS: ESTIMATE_PERCENT - DBMS_STATS.AUTO_SAMPLE_SIZE
DBMS_STATS: DEGREE - NULL
DBMS_STATS: METHOD_OPT - FOR ALL COLUMNS SIZE AUTO
DBMS_STATS: NO_INVALIDATE - DBMS_STATS.AUTO_INVALIDATE
DBMS_STATS: GRANULARITY - AUTO
DBMS_STATS: PUBLISH - TRUE
DBMS_STATS: STALE_PERCENT - 10
DBMS_STATS: APPROXIMATE_NDV - TRUE
DBMS_STATS: APPROXIMATE_NDV_ALGORITHM - REPEAT OR HYPERLOGLOG
DBMS_STATS: ANDV_ALGO_INTERNAL_OBSERVE - FALSE
DBMS_STATS: INCREMENTAL - FALSE
DBMS_STATS: INCREMENTAL_INTERNAL_CONTROL - TRUE
DBMS_STATS: AUTOSTATS_TARGET - AUTO
DBMS_STATS: CONCURRENT - OFF
DBMS_STATS: JOB_OVERHEAD_PERC - 1
DBMS_STATS: JOB_OVERHEAD - -1
DBMS_STATS: GLOBAL_TEMP_TABLE_STATS - SESSION
DBMS_STATS: ENABLE_TOP_FREQ_HISTOGRAMS - 3
DBMS_STATS: ENABLE_HYBRID_HISTOGRAMS - 3
DBMS_STATS: TABLE_CACHED_BLOCKS - 1
DBMS_STATS: INCREMENTAL_LEVEL - PARTITION
DBMS_STATS: INCREMENTAL_STALENESS - ALLOW_MIXED_FORMAT
DBMS_STATS: OPTIONS - GATHER
DBMS_STATS: GATHER_AUTO - AFTER_LOAD
DBMS_STATS: STAT_CATEGORY - OBJECT_STATS, REALTIME_STATS
DBMS_STATS: SCAN_RATE - 0
DBMS_STATS: GATHER_SCAN_RATE - HADOOP_ONLY
DBMS_STATS: PREFERENCE_OVERRIDES_PARAMETER - FALSE
DBMS_STATS: AUTO_STAT_EXTENSIONS - OFF
DBMS_STATS: WAIT_TIME_TO_UPDATE_STATS - 15
DBMS_STATS: ROOT_TRIGGER_PDB - FALSE
DBMS_STATS: COORDINATOR_TRIGGER_SHARD - FALSE
DBMS_STATS: MAINTAIN_STATISTICS_STATUS - FALSE
DBMS_STATS: AUTO_TASK_STATUS - OFF
DBMS_STATS: AUTO_TASK_MAX_RUN_TIME - 3600
DBMS_STATS: AUTO_TASK_INTERVAL - 900
DBMS_STATS: reporting_man_log_task: target: "CBO_TEST"."TEST_STATS" objn: 72921 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS: Start construct analyze using SQL .. Execution Phase: SERIAL (NON-CONCURRENT) granularity: GLOBAL AND PARTITION global_requested: NULL pfix:
DBMS_STATS: (Baseline)
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Start gather part -- ctx.conc_ctx.global_requested: NULL gran: GLOBAL AND PARTITION execution phase: 1
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Start gather_stats.. pfix: ownname: CBO_TEST tabname: TEST_STATS pname: spname: execution phase: 1
Specified DOP=1 blocks=1781 DOP used=1
Specified DOP=1 blocks=1781 DOP used=1
DBMS_STATS: Using approximate NDV pct=0
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y Y Y Y Y ID
DBMS_STATS: Y Y Y Y Y Y Y Y ALMOST_NULL
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Approximate NDV Options
DBMS_STATS: NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U
DBMS_STATS: Starting query at 19.08.19 17:07:45,448000000 +02:00
DBMS_STATS: select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */to_char(count("ID")),substrb(dump(min("ID"),16,0,64),1,240),substrb(dump(max("ID"),16,0,64),1,240),to_char(count("ALMOST_NULL")),substrb(dump(min("ALMOST_NULL"),16,0,64),1,240),substrb(dump(max("ALMOST_NULL"),16,0,64),1,240),count(rowidtochar(rowid)) from "CBO_TEST"."TEST_STATS" t /* NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U*/
DBMS_STATS: Ending query at 19.08.19 17:07:45,651000000 +02:00
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Approximate NDV Result
DBMS_STATS:

0
1000000
1000000
6
1001984
1000000
3979802


1
Typ=2 Len=2: c1,2


2
Typ=2 Len=2: c4,2


3
999997
0
11
999997
1899998
11

AAARzZAAPAAAAVaAFp,100000,AAARzZAAPAAAANAAAb,100000,AAARzZAAPAAAAKMAEf,100000,AAARzZAAPAAAAHYAIj,100000,AAARzZAAPAAAASnABA,100000,AAARzZAAPAAAAbBAGO,100000,AAARzZAAPAAAAYOABl,100000,AAARzZAAPAAAAElAEP,100000,AAARzZAAPAAAAPzAFE,100000,AAARzZAAPAAAACBAAD,99996,AAARzZAAPAAAAd1ACK,1,


4
Typ=2 Len=1: 80


5
Typ=2 Len=2: c1,b



DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min:
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min:
DBMS_STATS: max: Typ=2 Len=2: c1,b
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=1: 80
DBMS_STATS: max: Typ=2 Len=2: c1,b
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 1899998 999997 NULL NULL NULL 11 NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=1: 80
DBMS_STATS: max: Typ=2 Len=2: c1,b
DBMS_STATS: start processing top n values for column "ALMOST_NULL"
DBMS_STATS: Parsing topn values..
DBMS_STATS: Extracted 11 rowid-freq pairs.
DBMS_STATS: topn sql (len: 744):
DBMS_STATS: +++ select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */ substrb(dump("ALMOST_NULL",16,0,64),1,240) d_1773304655, rowidtochar(rowid) rwid from "CBO_TEST"."TEST_STATS" t where rowid in (chartorowid('AAARzZAAPAAAACBAAD'),chartorowid('AAARzZAAPAAAAElAEP'),chartorowid('AAARzZAAPAAAAHYAIj'),chartorowid('AAARzZAAPAAAAKMAEf'),chartorowid('AAARzZAAPAAAANAAAb'),chartorowid('AAARzZAAPAAAAPzAFE'),chartorowid('AAARzZAAPAAAASnABA'),chartorowid('AAARzZAAPAAAAVaAFp'),chartorowid('AAARzZAAPAAAAYOABl'),chartorowid('AAARzZAAPAAAAbBAGO'),chartorowid('AAARzZAAPAAAAd1ACK')) order by "ALMOST_NULL"
DBMS_STATS: start executing top n value query..
DBMS_STATS: Starting query at 19.08.19 17:07:45,667000000 +02:00
DBMS_STATS: select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */ substrb(dump("ALMOST_NULL",16,0,64),1,240) d_1773304655, rowidtochar(rowid) rwid from "CBO_TEST"."TEST_STATS" t where rowid in (chartorowid('AAARzZAAPAAAACBAAD'),chartorowid('AAARzZAAPAAAAElAEP'),chartorowid('AAARzZAAPAAAAHYAIj'),chartorowid('AAARzZAAPAAAAKMAEf'),chartorowid('AAARzZAAPAAAANAAAb'),chartorowid('AAARzZAAPAAAAPzAFE'),chartorowid('AAARzZAAPAAAASnABA'),chartorowid('AAARzZAAPAAAAVaAFp'),chartorowid('AAARzZAAPAAAAYOABl'),chartorowid('AAARzZAAPAAAAbBAGO'),chartorowid('AAARzZAAPAAAAd1ACK')) order by "ALMOST_NULL"
DBMS_STATS: removal_count: -243 total_nonnull_rows: 999997 mnb: 254
DBMS_STATS: hist_type in exec_get_topn: 1024 ndv:11 mnb:254
DBMS_STATS: Evaluating frequency histogram for col: "ALMOST_NULL"
DBMS_STATS: number of values = 11, max # of buckects = 254, pct = 100, ssize = 999997
DBMS_STATS: csr.hreq: 1 Histogram gathering flags: 1031
DBMS_STATS: done_hist in process_topn: TRUE csr.ccnt: 1
DBMS_STATS: Mark column "ALMOST_NULL" as top N computed
DBMS_STATS: no histogram: setting density to 1/ndv (,000001)
DBMS_STATS: Skip topn computed column "ALMOST_NULL" numhist: 0
DBMS_STATS: Start fill_cstats - hybrid_enabled: TRUE
DBMS_STATS: ====================================================================================================
DBMS_STATS: Statistics from clist for CBO_TEST.TEST_STATS:
DBMS_STATS: ====================================================================================================
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 4,979802 1000000 0 1000000 1000000 1000000 ,000001 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 2,899998 999997 3 999997 11 11 ,00000050 1286 1 1
DBMS_STATS: min: Typ=2 Len=1: 80
DBMS_STATS: max: Typ=2 Len=2: c1,b
DBMS_STATS: Histograms:
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: BVAL RPCNT EAVAL ENVAL EDVAL
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: 99996 80 0 Typ=2 Len=1: 80
DBMS_STATS: 199996 C102 1 Typ=2 Len=2: c1,2
DBMS_STATS: 299996 C103 2 Typ=2 Len=2: c1,3
DBMS_STATS: 399996 C104 3 Typ=2 Len=2: c1,4
DBMS_STATS: 499996 C105 4 Typ=2 Len=2: c1,5
DBMS_STATS: 599996 C106 5 Typ=2 Len=2: c1,6
DBMS_STATS: 699996 C107 6 Typ=2 Len=2: c1,7
DBMS_STATS: 799996 C108 7 Typ=2 Len=2: c1,8
DBMS_STATS: 899996 C109 8 Typ=2 Len=2: c1,9
DBMS_STATS: 999996 C10A 9 Typ=2 Len=2: c1,a
DBMS_STATS: 999997 C10B 10 Typ=2 Len=2: c1,b
DBMS_STATS: Need Actual Values (DSC_EAVS)
DBMS_STATS: Histogram Type: FREQUENCY Data Type: 2
DBMS_STATS: Histogram Flags: 4100 Histogram Gathering Flags: 1286
DBMS_STATS: Incremental: FALSE Fix Control 13583722: 1
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: Number of rows in the table = 1000000, blocks = , average row length = 8, chain count = , scan rate = 0, sample size = 1000000, cstats.count = 13, cind = 13
DBMS_STATS: prepare reporting structures...
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ID part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ALMOST_NULL part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: End of construct analyze using sql.
DBMS_STATS: Finished table CBO_TEST.TEST_STATS. at 19.08.19 17:07:45,683000000 +02:00
DBMS_STATS: reporting_man_update_task: objn: 72921 auto_stats: FALSE status: COMPLETED ctx.batching_coeff: 0

PL/SQL procedure successfully completed.

SQL>
SQL> -- Testcase 4: Three distinct values in three rows and all other rows have NULL
SQL> -- This scenario extracts the values directly from the ROWIDs returned from the main query, no separate query
SQL> -- Applies to two distinct values, too
SQL> -- So a single distinct value looks like a special case that triggers a separate query
SQL> -- If this is with combination of almost all rows having NULLs this query doesn't use sampling
SQL> -- ! Big threat if the table is big !
SQL> drop table test_stats purge;

Table dropped.

SQL>
SQL> create table test_stats
2 as
3 select /*+ NO_GATHER_OPTIMIZER_STATISTICS */
4 rownum as id,
5 case when rownum <= 2 then rownum else null end as almost_null
6 from
7 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) a,
8 (select /*+ cardinality(1000) */ null from dual connect by level <= 1000) b
9 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'test_stats', method_opt => 'for columns size 1 id for columns size 254 almost_null')
DBMS_STATS: Record gather table stats operation on table : test_stats
DBMS_STATS: job name:
DBMS_STATS: |--> Operation id: 226
DBMS_STATS: gather_table_stats:
DBMS_STATS: Start gather table stats -- tabname: test_stats
DBMS_STATS: job name:
DBMS_STATS: Started table CBO_TEST.TEST_STATS. at 19.08.19 17:07:46,276000000 +02:00. Execution phase: SERIAL (NON-CONCURRENT) stattype: DATA Reporting Mode: FALSE
DBMS_STATS: Specified granularity = AUTO, Fixed granularity = GLOBAL AND PARTITION
DBMS_STATS: parameters ** pfix: (parent pfix: ) ownname: tabname: test_stats partname: estimate_percent: 101 block_sample: FALSE method_opt: for columns size 1 id for columns size 254 almost_null degree: 32766 granularity: Z gIdxGran: cascade: NULL stattab: statid: statown: no_invalidate: NULL flush_colu: TRUE fxt: stattype: DATA start_time: 08-19-2019 17:07:46 gathering_group_stats: FALSE cms_only: FALSE force: FALSE options: Z executionPhase: SERIAL (NON-CONCURRENT) incremental: FALSE concurrent: FALSE loc_degree: loc_method_opt: for columns size 1 id for columns size 254 almost_null fxt_typ: 0 tobjn: 72922
DBMS_STATS: Preferences for table CBO_TEST.TEST_STATS
DBMS_STATS: ================================================================================
DBMS_STATS: SKIP_TIME -
DBMS_STATS: STATS_RETENTION -
DBMS_STATS: MON_MODS_ALL_UPD_TIME -
DBMS_STATS: SNAPSHOT_UPD_TIME -
DBMS_STATS: TRACE - 0
DBMS_STATS: DEBUG - 0
DBMS_STATS: SYS_FLAGS - 0
DBMS_STATS: SPD_RETENTION_WEEKS - 53
DBMS_STATS: CASCADE - DBMS_STATS.AUTO_CASCADE
DBMS_STATS: ESTIMATE_PERCENT - DBMS_STATS.AUTO_SAMPLE_SIZE
DBMS_STATS: DEGREE - NULL
DBMS_STATS: METHOD_OPT - FOR ALL COLUMNS SIZE AUTO
DBMS_STATS: NO_INVALIDATE - DBMS_STATS.AUTO_INVALIDATE
DBMS_STATS: GRANULARITY - AUTO
DBMS_STATS: PUBLISH - TRUE
DBMS_STATS: STALE_PERCENT - 10
DBMS_STATS: APPROXIMATE_NDV - TRUE
DBMS_STATS: APPROXIMATE_NDV_ALGORITHM - REPEAT OR HYPERLOGLOG
DBMS_STATS: ANDV_ALGO_INTERNAL_OBSERVE - FALSE
DBMS_STATS: INCREMENTAL - FALSE
DBMS_STATS: INCREMENTAL_INTERNAL_CONTROL - TRUE
DBMS_STATS: AUTOSTATS_TARGET - AUTO
DBMS_STATS: CONCURRENT - OFF
DBMS_STATS: JOB_OVERHEAD_PERC - 1
DBMS_STATS: JOB_OVERHEAD - -1
DBMS_STATS: GLOBAL_TEMP_TABLE_STATS - SESSION
DBMS_STATS: ENABLE_TOP_FREQ_HISTOGRAMS - 3
DBMS_STATS: ENABLE_HYBRID_HISTOGRAMS - 3
DBMS_STATS: TABLE_CACHED_BLOCKS - 1
DBMS_STATS: INCREMENTAL_LEVEL - PARTITION
DBMS_STATS: INCREMENTAL_STALENESS - ALLOW_MIXED_FORMAT
DBMS_STATS: OPTIONS - GATHER
DBMS_STATS: GATHER_AUTO - AFTER_LOAD
DBMS_STATS: STAT_CATEGORY - OBJECT_STATS, REALTIME_STATS
DBMS_STATS: SCAN_RATE - 0
DBMS_STATS: GATHER_SCAN_RATE - HADOOP_ONLY
DBMS_STATS: PREFERENCE_OVERRIDES_PARAMETER - FALSE
DBMS_STATS: AUTO_STAT_EXTENSIONS - OFF
DBMS_STATS: WAIT_TIME_TO_UPDATE_STATS - 15
DBMS_STATS: ROOT_TRIGGER_PDB - FALSE
DBMS_STATS: COORDINATOR_TRIGGER_SHARD - FALSE
DBMS_STATS: MAINTAIN_STATISTICS_STATUS - FALSE
DBMS_STATS: AUTO_TASK_STATUS - OFF
DBMS_STATS: AUTO_TASK_MAX_RUN_TIME - 3600
DBMS_STATS: AUTO_TASK_INTERVAL - 900
DBMS_STATS: reporting_man_log_task: target: "CBO_TEST"."TEST_STATS" objn: 72922 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS: Start construct analyze using SQL .. Execution Phase: SERIAL (NON-CONCURRENT) granularity: GLOBAL AND PARTITION global_requested: NULL pfix:
DBMS_STATS: (Baseline)
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Start gather part -- ctx.conc_ctx.global_requested: NULL gran: GLOBAL AND PARTITION execution phase: 1
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Start gather_stats.. pfix: ownname: CBO_TEST tabname: TEST_STATS pname: spname: execution phase: 1
Specified DOP=1 blocks=1520 DOP used=1
Specified DOP=1 blocks=1520 DOP used=1
DBMS_STATS: Using approximate NDV pct=0
DBMS_STATS: NNV NDV AVG MMX HST EP RP NNNP IND CNDV HSTN HSTR COLNAME
DBMS_STATS: Y Y Y Y Y ID
DBMS_STATS: Y Y Y Y Y Y Y Y ALMOST_NULL
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Approximate NDV Options
DBMS_STATS: NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U
DBMS_STATS: Starting query at 19.08.19 17:07:46,292000000 +02:00
DBMS_STATS: select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */to_char(count("ID")),substrb(dump(min("ID"),16,0,64),1,240),substrb(dump(max("ID"),16,0,64),1,240),to_char(count("ALMOST_NULL")),substrb(dump(min("ALMOST_NULL"),16,0,64),1,240),substrb(dump(max("ALMOST_NULL"),16,0,64),1,240),count(rowidtochar(rowid)) from "CBO_TEST"."TEST_STATS" t /* NDV,NIL,NIL,TOPN,NIL,NIL,RWID,U1,U254U*/
DBMS_STATS: Ending query at 19.08.19 17:07:46,448000000 +02:00
DBMS_STATS: STATISTIC LAST_QUERY SESSION_TOTAL
DBMS_STATS: ------------------------------ ---------- -------------
DBMS_STATS: Queries Parallelized 0 0
DBMS_STATS: DML Parallelized 0 0
DBMS_STATS: DDL Parallelized 0 0
DBMS_STATS: DFO Trees 0 0
DBMS_STATS: Server Threads 0 0
DBMS_STATS: Allocation Height 0 0
DBMS_STATS: Allocation Width 0 0
DBMS_STATS: Local Msgs Sent 0 0
DBMS_STATS: Distr Msgs Sent 0 0
DBMS_STATS: Local Msgs Recv'd 0 0
DBMS_STATS: Distr Msgs Recv'd 0 0
DBMS_STATS: DOP 0 0
DBMS_STATS: Slave Sets 0 0
DBMS_STATS: Approximate NDV Result
DBMS_STATS:

0
1000000
1000000
6
1001984
1000000
3979802


1
Typ=2 Len=2: c1,2


2
Typ=2 Len=2: c4,2


3
2
0
2
2
4
2
AAARzaAAPAAAACBAAB,1,AAARzaAAPAAAACBAAA,1,


4
Typ=2 Len=2: c1,2



5
Typ=2 Len=2: c1,3



DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min:
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min:
DBMS_STATS: max: Typ=2 Len=2: c1,3
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL NULL NULL NULL NULL NULL NULL NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,3
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max:
DBMS_STATS: process_raw_crec() for 1
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 3979802 1000000 NULL NULL NULL 1001984 NULL 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: process_raw_crec() for 2
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 4 2 NULL NULL NULL 2 NULL 7 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,3
DBMS_STATS: start processing top n values for column "ALMOST_NULL"
DBMS_STATS: Parsing topn values..
DBMS_STATS: Extracted 2 rowid-freq pairs.
DBMS_STATS: topn sql (len: 438):
DBMS_STATS: +++ select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */ substrb(dump("ALMOST_NULL",16,0,64),1,240) d_1773304655, rowidtochar(rowid) rwid from "CBO_TEST"."TEST_STATS" t where rowid in (chartorowid('AAARzaAAPAAAACBAAA'),chartorowid('AAARzaAAPAAAACBAAB')) order by "ALMOST_NULL"
DBMS_STATS: start executing top n value query..
DBMS_STATS: Starting query at 19.08.19 17:07:46,464000000 +02:00
DBMS_STATS: select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad */ substrb(dump("ALMOST_NULL",16,0,64),1,240) d_1773304655, rowidtochar(rowid) rwid from "CBO_TEST"."TEST_STATS" t where rowid in (chartorowid('AAARzaAAPAAAACBAAA'),chartorowid('AAARzaAAPAAAACBAAB')) order by "ALMOST_NULL"
DBMS_STATS: removal_count: -252 total_nonnull_rows: 2 mnb: 254
DBMS_STATS: hist_type in exec_get_topn: 1024 ndv:2 mnb:254
DBMS_STATS: Evaluating frequency histogram for col: "ALMOST_NULL"
DBMS_STATS: number of values = 2, max # of buckects = 254, pct = 100, ssize = 2
DBMS_STATS: csr.hreq: 1 Histogram gathering flags: 1031
DBMS_STATS: done_hist in process_topn: TRUE csr.ccnt: 1
DBMS_STATS: Mark column "ALMOST_NULL" as top N computed
DBMS_STATS: no histogram: setting density to 1/ndv (,000001)
DBMS_STATS: Skip topn computed column "ALMOST_NULL" numhist: 0
DBMS_STATS: Start fill_cstats - hybrid_enabled: TRUE
DBMS_STATS: ====================================================================================================
DBMS_STATS: Statistics from clist for CBO_TEST.TEST_STATS:
DBMS_STATS: ====================================================================================================
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ID 4,979802 1000000 0 1000000 1000000 1000000 ,000001 0 0 0
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c4,2
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: COL_NAME ACL SNNV NV SNNVDV NDV SNDV DENS HGATH HIND CCNT
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: ALMOST_NULL 1,000004 2 999998 2 2 2 ,25 1286 1 1
DBMS_STATS: min: Typ=2 Len=2: c1,2
DBMS_STATS: max: Typ=2 Len=2: c1,3
DBMS_STATS: Histograms:
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: BVAL RPCNT EAVAL ENVAL EDVAL
DBMS_STATS: ---------------------------------------------------------------------------------------------------
DBMS_STATS: 1 C102 1 Typ=2 Len=2: c1,2
DBMS_STATS: 2 C103 2 Typ=2 Len=2: c1,3
DBMS_STATS: Need Actual Values (DSC_EAVS)
DBMS_STATS: Histogram Type: FREQUENCY Data Type: 2
DBMS_STATS: Histogram Flags: 4100 Histogram Gathering Flags: 1286
DBMS_STATS: Incremental: FALSE Fix Control 13583722: 1
DBMS_STATS: ----------------------------------------------------------------------------------------------------
DBMS_STATS: Number of rows in the table = 1000000, blocks = , average row length = 6, chain count = , scan rate = 0, sample size = 1000000, cstats.count = 4, cind = 4
DBMS_STATS: prepare reporting structures...
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ID part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: gather_ext_col_stats 1: tbl_owner=CBO_TEST tbl_name=TEST_STATS col_name=ALMOST_NULL part_name= granularity=GLOBAL AND PARTITION
DBMS_STATS: End of construct analyze using sql.
DBMS_STATS: Finished table CBO_TEST.TEST_STATS. at 19.08.19 17:07:46,464000000 +02:00
DBMS_STATS: reporting_man_update_task: objn: 72922 auto_stats: FALSE status: COMPLETED ctx.batching_coeff: 0

PL/SQL procedure successfully completed.

SQL>


So these test cases show that when a column has more than a single distinct value, the code happily extracts the required information from the ROWIDs collected during the main pass and doesn't require a separate pass (Test cases 3 and 4). In the special case of a single-valued column, however, it bails out and runs a separate query that typically uses sampling, except for the edge case of a single-valued column having NULLs in most of the rows (Test cases 1 and 2), where that separate query doesn't use sampling at all.
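
As a side note, a quick way to cross-check which pass produced the column statistics is to compare SAMPLE_SIZE with the number of non-NULL rows in the data dictionary. A minimal sketch (the table name matches the test cases above, the view and columns are the standard dictionary ones):

select column_name, num_distinct, num_nulls, sample_size, histogram
from   user_tab_col_statistics
where  table_name = 'TEST_STATS';

If the frequency histogram was built from the full row sample, SAMPLE_SIZE should correspond to NUM_ROWS minus NUM_NULLS; a significantly smaller SAMPLE_SIZE points at the separate, sampled pass described above.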

I've discussed this issue with Nigel Bayliss, the optimizer product manager at Oracle, and he agreed that it looks like a bug. He has filed a corresponding bug (Bug 30205756 - FREQUENCY HISTOGRAM NOT CREATED USING FULL ROW SAMPLE, which is probably not marked as public), just in case you come across this issue and want to provide this reference to Oracle Support.
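
For completeness: the DBMS_STATS trace lines shown throughout this post can be captured by setting the TRACE preference (it shows up as "TRACE - 0" in the preference dumps above) before gathering. A minimal sketch; the flag mask used here is an assumption, individual bits enable different trace components:

rem Assumed mask: a large value such as 1048575 enables (almost) all trace components
exec dbms_stats.set_global_prefs('TRACE', 1048575)

rem ... re-run the gather_table_stats calls from the test cases ...

rem Don't forget to switch tracing off again afterwards
exec dbms_stats.set_global_prefs('TRACE', 0)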

Retailers Find Future Top Customers with Data Science from Oracle

Oracle Press Releases - Thu, 2020-01-09 08:00
Press Release
Retailers Find Future Top Customers with Data Science from Oracle
First-party data, enriched with one of the world’s largest data marketplaces, helps retailers identify prospective customers who share similar tastes to their best existing buyers

Redwood Shores, Calif.—Jan 9, 2020

Oracle is helping retailers find their top future customers using data science. A new offering from Oracle Retail, Consumer Insights, aids retailers in understanding the characteristics of their best customers and then extends those traits to find similar potential customers among the petabytes of third-party consumer data in Oracle Data Cloud. This enables retailers to optimize customer acquisition campaigns with more relevant, targeted products and offers.

Gaining new customers is a top priority for retail marketers, with the cost growing every year. In the recent holiday season, a new survey showed that 77 percent of retailers planned to increase their spend in this area. But are they reaching the right prospective buyers?

With Oracle Retail Consumer Insights, retailers can achieve a deep understanding of their existing customers through enriched attributes and advanced data science. For example, do the most profitable athletic gear customers also purchase particular brands of soft drinks or deodorant or snack foods? Is there a common denominator in the type of vehicle they drive or restaurants they frequent or vacations they take?

By enriching first-party data retailers have on their existing customers with purchase data and other characteristics that happen outside the retailer’s vantage point, Consumer Insights can cluster together attributes and actions and identify new segments that would be otherwise unknown. Retailers can then use this information to find similar buyers to target with offers that are highly relevant to their lifestyle and tastes.

The data in Oracle Data Cloud represents profile-linked, transaction-level sales data and a rich set of other demographic, geographic, and interest attributes. Through this new solution, that third-party data can now be coupled with the first-party data retailers have on customers, omnichannel touch-points, inventory movements, promotional response, and much more.

“The value of data can’t be found in zeros and ones, but in human connections to the interests, experiences, and behavior of current and potential customers,” said Cecilia Mao, vice president of product, Oracle Data Cloud. “When you know what your customers are also more likely to buy at the grocery store, along with their brand affinities and hobbies, you can build more accurate models to find your best potential customers, then use multiple channels to reach them at scale.”

Applying predictive and prescriptive analytics to this data, retailers can connect, analyze, experiment, and explore new customer segments, knowing that advanced decision science is under the hood. Consumer Insights supports “what if” analysis, exploring and finding prospects in a continuously iterative process to get the criteria right and identify the most appropriate customer segment. Once the right segment is identified, retailers can take action by launching campaigns, promotions, or advertising, with the option to activate using Oracle Data Cloud’s industry-wide connections.

“When it comes to grabbing the attention of potential customers, every second and moment matters,” said Jeff Warren, vice president, Oracle Retail. “Armed with intelligent data based on real customer attributes and behaviors, retailers will truly be able to put the needs and likes of shoppers at the center of their new customer acquisition strategy.”

Visit Oracle (booth #4837) at NRF 2020 Vision Retail’s Big Show (January 12-14) at Jacob K. Javits Convention Center, New York City, to see Oracle Retail Consumer Insights live. While in the booth, see the full Oracle Retail Insights Suite and demo the new Retail Home dashboards to see why Oracle is the modern platform for retail.

Contact Info
Kaitlin Ambrogio
Oracle PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
Shasta Dentone (Smith)
Oracle Data Cloud PR
+1.503.560.0756
shasta.smith@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle Data Cloud

Oracle Data Cloud helps marketers use data to capture consumer attention and drive results. Used by 199 of AdAge’s 200 largest advertisers, our Audience, Context and Measurement solutions extend across the top media platforms and a global footprint of more than 100 countries. We give marketers the data and tools needed for every stage of the marketing journey, from audience planning to pre-bid brand safety, contextual relevance, viewability confirmation, fraud protection, and ROI measurement. Oracle Data Cloud combines the leading technologies and talent from Oracle’s acquisitions of AddThis, BlueKai, Crosswise, Datalogix, Grapeshot, and Moat.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Blue Nile Shines Bright with Oracle Cloud Applications

Oracle Press Releases - Thu, 2020-01-09 07:00
Press Release
Blue Nile Shines Bright with Oracle Cloud Applications
World’s Leading Online Jeweler Says “Yes” to Oracle to Help Create Customer Experiences as Unique as Each Diamond

Redwood Shores, Calif.—Jan 9, 2020

Blue Nile, a leading online diamond jeweler, has chosen Oracle Cloud Applications to support its commitment to providing customers with more choices, straightforward information, and legendary service. With Oracle, Blue Nile has been able to streamline financial operations and delight customers by providing an experience as personal as the diamond they’re purchasing.

Founded in 1999 with the idea that the diamond and engagement ring business was ready for innovation, Blue Nile is now the #1 online diamond jeweler in the world for engagement and wedding rings, boasting an inventory of over 200,000 diamonds. To maintain its tradition of innovation and constantly provide new ways for customers to create the perfect pieces for every occasion, Blue Nile needed an integrated suite of applications that would help it personalize customer experiences at scale and improve operational efficiency. After careful evaluation, Blue Nile selected Oracle.

“At Blue Nile, everything we do is dictated by our customer-first philosophy and that includes the technology we select to run our business,” said Andre Woolery, senior director of brand marketing, Blue Nile. “To preserve our focus on exceptional customer service as our business continues to grow, we needed to be able to seamlessly connect all parts of our organization. Oracle Cloud Applications is not only helping us bring all our teams together, but just as importantly, it is helping us really use data to our advantage. This will make a huge difference to both the experience we offer customers and the efficiency of our core business processes.”

With Oracle Customer Experience (CX) and Oracle Enterprise Performance Management (EPM) Cloud, Blue Nile is now nimbler and more precise with its marketing tactics, and is able to drive accurate and agile financial and operational planning across the entire organization. Oracle Responsys, part of Oracle CX, has helped Blue Nile hone its marketing program to be more personalized and timelier for each customer interaction across online and traditional channels including its website, call center, and showroom. In addition, Oracle EPM Cloud has helped Blue Nile make its planning and budgeting process more efficient by seamlessly integrating data and processes with Blue Nile’s core ERP and operational systems.

“Just like diamonds, every customer is unique and this means brands need to take a data-first approach to managing each customer interaction in order to truly personalize the experience,” said Rob Tarkoff, executive vice president, Oracle CX and Oracle Data Cloud. “Blue Nile is a perfect example of a retail brand that has taken advantage of technology to rethink a traditional industry, and the results it has achieved speak for themselves. We look forward to working closely with the Blue Nile team to help them continue to put the customer first.”

Oracle CX empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences, and outcomes, Oracle CX helps customers reduce IT complexity, deliver innovative customer experiences, and achieve predictable and tangible business results. Oracle EPM Cloud is the only complete and connected cloud EPM solution that gives customers the agility needed to outperform in today’s constantly evolving business landscape.

Be sure to visit Oracle’s booth #4837 at the NRF Big Show, January 12-14, 2020.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Visit Oracle CX on SmarterCX, Twitter, LinkedIn and Facebook.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

