Feed aggregator

Oracle Entity Framework Core 3.x

Tom Kyte - Tue, 2020-02-11 18:04
When will https://www.nuget.org/packages/Oracle.EntityFrameworkCore/ support the .NET Core SDK 3.1?
Categories: DBA Blogs

Truncate Partition is very slow

Tom Kyte - Tue, 2020-02-11 18:04
Hi Tom, we have a table which is partitioned by range (date). Each partition holds approximately 5 million records, and there are 60 partitions. There are 4 indexes on this table and they are global indexes. To remove old data we truncate partitions. ...
Categories: DBA Blogs

Scheduling Queries

Tom Kyte - Tue, 2020-02-11 18:04
Hi Connor, Chris, could you please have a look at the below scenario related to a dbms_scheduler program and job setup: <code> -------------- Start Use Case Setup -------------- -----------------------------------------------------------------------...
Categories: DBA Blogs

Labs for Multitenant

Tom Kyte - Tue, 2020-02-11 18:04
Hi All: I have a question on the Multitenant Fundamentals hands-on session scheduled for Thu, Feb 13, 2020 16:00-17:00 UTC: will these labs be making use of VMs based on https://cloudmarketplace.oracle.com/marketplace/listing/69658839? * I'm a...
Categories: DBA Blogs

Oracle Sequence expiry

Tom Kyte - Tue, 2020-02-11 18:04
I have a few sequences which are about to expire, and I need email notifications when LAST_NUMBER increases. Can you suggest how to do it?
Categories: DBA Blogs

ORA-30926 not raised in merge statement for non-deterministic set of rows

Tom Kyte - Tue, 2020-02-11 18:04
Hi Connor, Chris et al, could you please kindly help me better understand when ORA-30926 is to be raised for a non-deterministic set of rows on input? Up to 12.1 it worked like a charm, regardless of sort order and the number of duplicate rows on input O...
Categories: DBA Blogs

ORA-06553: PLS-306: wrong number or types of arguments when selecting object type instance

Tom Kyte - Tue, 2020-02-11 18:04
We're experiencing this when testing an upgrade from Oracle 12.1 to 19c. What's wrong with the code?
Categories: DBA Blogs

Unable to find or delete BIN$ Tables

Tom Kyte - Tue, 2020-02-11 18:04
While I was examining the space used on my free ADW, I found lots of tables with a prefix of BIN$ in my DATA tablespace. I found these being referenced in USER_SEGMENTS. The recyclebin is empty, and I can't find the tables using a select on the u...
Categories: DBA Blogs

Long Running PROC takes over 7 hours

Tom Kyte - Tue, 2020-02-11 18:04
Hi Tom, I have a PROC that will be run infrequently. It will be used to delete rules from a table. There are 75,000 rules, and over 14,000 will be deleted in the first run of the proc, with another 1,200 potentially deleted. I need to check if ...
Categories: DBA Blogs

Node switchover data loss

Tom Kyte - Tue, 2020-02-11 18:04
Hi, we faced one issue; I have explained the scenario: > Our table has multiple rows with a primary key as a number for an entity, e.g. an employee id for an employee > We have a status column which suggests the calculation status of the current...
Categories: DBA Blogs

Python 3.8.1 Linux Install Without Root

Bobby Durrett's DBA Blog - Tue, 2020-02-11 11:40

I wanted to install the latest Python on Linux in a low-powered user's home directory and not have the install interfere with the Python that comes with Linux. I wanted my own local copy of Python that I could install Python packages into without interfering with anything else. I was willing to use root to install the needed Linux packages, but everything else was done with my low-powered user.

I did a yum update on my Oracle Enterprise Linux 7.7 install before I started to get everything on the latest version. As root I used yum to install the Linux packages I needed to install Python 3.8.1 from source:

[root@pythonvm ~]# yum install gcc openssl-devel bzip2-devel libffi-devel tcl* tk* -y
Loaded plugins: langpacks, ulninfo
mysql-connectors-community                                                   | 2.5 kB  00:00:00
mysql-tools-community                                                        | 2.5 kB  00:00:00
mysql57-community                                                            | 2.5 kB  00:00:00
ol7_UEKR4                                                                    | 2.5 kB  00:00:00
ol7_latest                                                                   | 2.7 kB  00:00:00
Package gcc-4.8.5-39.0.3.el7.x86_64 already installed and latest version
Package 1:openssl-devel-1.0.2k-19.0.1.el7.x86_64 already installed and latest version
Package bzip2-devel-1.0.6-13.el7.x86_64 already installed and latest version
Package libffi-devel-3.0.13-18.el7.x86_64 already installed and latest version
Package 1:tcl-devel-8.5.13-8.el7.x86_64 already installed and latest version
Package 1:tcl-8.5.13-8.el7.x86_64 already installed and latest version
Package tcl-pgtcl-2.0.0-5.el7.x86_64 already installed and latest version
Package 1:tk-devel-8.5.13-6.el7.x86_64 already installed and latest version
Package 1:tk-8.5.13-6.el7.x86_64 already installed and latest version
Nothing to do

I must have already installed these on this VM. I did the rest of these steps as my low-powered user "bobby". The next thing I did was download the latest Python source, the 3.8.1 version. I had to use the -k option to get around certificate issues:

[bobby@pythonvm ~]$ curl -k -O https://www.python.org/ftp/python/3.8.1/Python-3.8.1.tgz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 22.8M  100 22.8M    0     0  7830k      0  0:00:02  0:00:02 --:--:-- 7828k

For some reason I had to do this twice; the first download failed. Next, I gunzipped and untarred this archive under my home directory:

tar zxfv Python-3.8.1.tgz

Python-3.8.1/Objects/sliceobject.c
Python-3.8.1/Objects/listobject.c
Python-3.8.1/Objects/typeslots.inc
Python-3.8.1/Objects/weakrefobject.c
Python-3.8.1/Objects/unicodeobject.c
Python-3.8.1/Objects/complexobject.c
Python-3.8.1/Objects/picklebufobject.c
Python-3.8.1/Objects/odictobject.c
Python-3.8.1/Objects/genobject.c
[bobby@pythonvm ~]$

I created a directory called “python” under my home directory to use as the top-level directory for my Python install. This directory tree will hold the binaries and any Python packages that I install.

[bobby@pythonvm ~]$ mkdir python

After changing directory to where I untarred the source files, I configured the Python make and install to use the directory I just created as a "prefix":


[bobby@pythonvm ~]$ cd Python-3.8.1

[bobby@pythonvm Python-3.8.1]$ pwd
/home/bobby/Python-3.8.1

./configure --prefix=/home/bobby/python

config.status: creating Misc/python-embed.pc
config.status: creating Misc/python-config.sh
config.status: creating Modules/ld_so_aix
config.status: creating pyconfig.h
creating Modules/Setup.local
creating Makefile


If you want a release build with all stable optimizations active (PGO, etc),
please run ./configure --enable-optimizations

Then I did the make and make altinstall from the same directory:

make
...
renaming build/scripts-3.8/idle3 to build/scripts-3.8/idle3.8
renaming build/scripts-3.8/2to3 to build/scripts-3.8/2to3-3.8
/usr/bin/install -c -m 644 ./Tools/gdb/libpython.py python-gdb.py
gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall    -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration  -I./Include/internal  -I. -I./Include    -DPy_BUILD_CORE -o Programs/_testembed.o ./Programs/_testembed.c
gcc -pthread     -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.8.a -lcrypt -lpthread -ldl  -lutil -lm   -lm
sed -e "s,@EXENAME@,/home/bobby/python/bin/python3.8," < ./Misc/python-config.in >python-config.py
LC_ALL=C sed -e 's,\$(\([A-Za-z0-9_]*\)),\$\{\1\},g' < Misc/python-config.sh >python-config
[bobby@pythonvm Python-3.8.1]$

make altinstall
...
Looking in links: /tmp/tmpdrnzr3vb
Collecting setuptools
Collecting pip
Installing collected packages: setuptools, pip
Successfully installed pip-19.2.3 setuptools-41.2.0
[bobby@pythonvm Python-3.8.1]$

So, now Python 3.8.1 is installed in /home/bobby/python, but I want to put the bin directory in the path so that I can run python or pip and have them be this version by default. The install created python and pip as python3.8 and pip3.8, so I created links to them so that I could access them without the 3.8 at the end of their names:

[bobby@pythonvm Python-3.8.1]$ cd /home/bobby/python/bin
[bobby@pythonvm bin]$ ls -al
total 16704
drwxr-xr-x. 2 bobby bobby     4096 Feb 10 15:36 .
drwxrwxr-x. 6 bobby bobby       52 Feb 10 15:36 ..
-rwxrwxr-x. 1 bobby bobby      109 Feb 10 15:36 2to3-3.8
-rwxrwxr-x. 1 bobby bobby      249 Feb 10 15:36 easy_install-3.8
-rwxrwxr-x. 1 bobby bobby      107 Feb 10 15:36 idle3.8
-rwxrwxr-x. 1 bobby bobby      231 Feb 10 15:36 pip3.8
-rwxrwxr-x. 1 bobby bobby       92 Feb 10 15:36 pydoc3.8
-rwxr-xr-x. 1 bobby bobby 17075312 Feb 10 15:35 python3.8
-rwxr-xr-x. 1 bobby bobby     3095 Feb 10 15:36 python3.8-config

[bobby@pythonvm bin]$ ln -s python3.8 python
[bobby@pythonvm bin]$ ln -s pip3.8 pip

Then I added /home/bobby/python/bin to the front of the path to keep these new python and pip links ahead of the python links or binaries in the rest of the path.

[bobby@pythonvm bin]$ cd
[bobby@pythonvm ~]$ vi .bashrc

export ORACLE_HOME=/home/oracle/app/oracle/product/12.1.0/client_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=/home/oracle/app/oracle/product/12.1.0/client_1/lib
export PATH=/home/bobby/python/bin:$PATH

~
~
~
~
~
~
".bashrc" 17L, 451C written     

I logged out and in as bobby and tried python and pip to see that they were the correct version:

[bobby@pythonvm ~]$ python
Python 3.8.1 (default, Feb 10 2020, 15:33:01)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39.0.3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
[bobby@pythonvm ~]$ pip list
Package    Version
---------- -------
pip        19.2.3
setuptools 41.2.0
WARNING: You are using pip version 19.2.3, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[bobby@pythonvm ~]$   

Finally, I updated pip just to show that we can update something in this new install. Then I installed numpy to see if I could install a new package.

[bobby@pythonvm ~]$ pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/54/0c/d01aa759fdc501a58f431eb594a17495f15b88da142ce14b5845662c13f3/pip-20.0.2-py2.py3-none-any.whl (1.4MB)
     |████████████████████████████████| 1.4MB 863kB/s
Installing collected packages: pip
  Found existing installation: pip 19.2.3
    Uninstalling pip-19.2.3:
      Successfully uninstalled pip-19.2.3
Successfully installed pip-20.0.2

[bobby@pythonvm ~]$ pip list
Package    Version
---------- -------
pip        20.0.2
setuptools 41.2.0

[bobby@pythonvm ~]$ pip install numpy
Collecting numpy
  Downloading numpy-1.18.1-cp38-cp38-manylinux1_x86_64.whl (20.6 MB)
     |████████████████████████████████| 20.6 MB 62 kB/s
Installing collected packages: numpy
Successfully installed numpy-1.18.1
[bobby@pythonvm ~]$ pip list
Package    Version
---------- -------
numpy      1.18.1
pip        20.0.2
setuptools 41.2.0

This is a quick and dirty post, but I want to remember these steps for myself, and it might be helpful for someone who uses Red Hat/CentOS/OEL.

Bobby

Categories: DBA Blogs

Equinix Taps Oracle Exadata to Help Power Global Interconnection Platform

Oracle Press Releases - Tue, 2020-02-11 07:00
Press Release
Equinix Taps Oracle Exadata to Help Power Global Interconnection Platform Data center and cloud interconnection services rely on Oracle Exadata and Oracle Database

Redwood City, Calif.—Feb 11, 2020

To help power one of the most dynamic data center and global interconnection platforms in the world, Equinix relies on Oracle Exadata Database Machine as a core Oracle Database transaction engine to run its growing business. With a global footprint of more than 200 International Business Exchange™ (IBX®) data centers spanning across more than 50 markets on five continents, Platform Equinix® contains the highest share of the world's public cloud on-ramps and one of the most physically and virtually interconnected ecosystems in the world.

Equinix currently serves nearly 10,000 customers globally, including the world’s largest cloud providers, Fortune 500 enterprises and Global 2000 companies. Equinix has experienced between an estimated 3X and 10X performance improvement for transaction processing and data warehousing analytics, respectively, since implementing Oracle Exadata to support its quote-to-cash business support systems running on Oracle Database and Oracle Applications. With Oracle Exadata, thousands of Equinix users around the world now access their business support systems with ultra-low IO latencies.

Prior to selecting Oracle Exadata, Equinix had implemented a do-it-yourself infrastructure environment comprised of Oracle Database on servers combined with storage and networking products from multiple vendors. With Oracle Exadata, Equinix is achieving an estimated up to 24X greater performance for complex database queries, 4X faster analytics, and 3X faster data replication.

By moving to Oracle Exadata, Equinix is able to run its applications an estimated 30 to 40 percent faster with some workloads showing more than 100 percent improvement. Equinix also eliminated an estimated 12 hours of service downtime for database server patching per year and further reduced application patching downtime by 90 percent.

“Our investment in Oracle Exadata has proven valuable and has exceeded our expectations,” said Milind Wagle, executive vice president and global chief information officer, Equinix. “Not only is Oracle a strategic alliance partner with Equinix, we also consider it an important solution provider trusted to run our mission-critical applications around the world.”

“It’s a strong validation of our strategy to see Equinix, one of the world’s largest providers of interconnected data centers, relying on Oracle Exadata to ensure its customers benefit from superior cloud and infrastructure performance on a 24/7 basis, no matter where they are in the world,” said Juan Loaiza, executive vice president, Mission-Critical Database Technologies, Oracle. “Oracle Exadata powers the Oracle Autonomous Database, Exadata Cloud Service, and Gen 2 Exadata Cloud at Customer, and is the premiere platform for running Oracle Database workloads in the cloud and on-premises.”

In addition to experiencing a reduction in patching requirements, Equinix has also benefitted from net incremental capabilities as a result of Oracle co-engineering, including Automatic Indexing, AI-based Cloud-Scale Performance Management and Hybrid Columnar Compression.

Equinix is implementing Oracle Exadata in a cloud-adjacent architecture environment. This interconnection architecture enables its global user base to access various applications across multiple cloud providers utilizing the exceptional performance of Oracle Database deployed on Exadata systems located in Equinix IBX data center locations and interconnected via Equinix Cloud Exchange Fabric™ (ECX Fabric™).

Oracle and Equinix Global Collaboration

Equinix continues its collaboration as a Gold level member of Oracle PartnerNetwork (OPN), with Oracle Cloud services spanning PaaS, IaaS and SaaS offered in Equinix IBX data centers worldwide. Oracle Cloud Infrastructure FastConnect is also available at Equinix, providing high-performance, private access to Oracle Cloud. Joint enterprise customers can also leverage Equinix Network Edge Services to migrate Oracle applications and databases quickly and cost-effectively from legacy cloud providers to Oracle Cloud Infrastructure.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Taking VMware Tanzu Mission Control for a test drive this time creating a k8s cluster on AWS

Pas Apicella - Tue, 2020-02-11 04:12
Previously I blogged about how to use VMware Tanzu Mission Control (TMC) to attach to Kubernetes clusters, and in that example we used a GCP GKE cluster. That blog entry is here:

Taking VMware Tanzu Mission Control for a test drive
http://theblasfrompas.blogspot.com/2020/02/taking-tanzu-mission-control-for-test.html

In this example we will use the "Create Cluster" button to create a new k8s cluster on AWS that will be managed by TMC for its entire lifecycle.

Steps

Note: Before getting started you need to create a "Cloud Provider Account", which is done using AWS as shown below. You can create one or more connected cloud provider accounts. Adding accounts allows you to start using VMware TMC to create clusters, add data protection, and much more.



1. Click on the "Clusters" on the left hand navigation bar

2. In the right hand corner click the button "New Cluster" and select your cloud provider account on AWS as shown below


3. Fill in the details of your new cluster as shown below, ensuring you select the correct AWS region where your cluster will be created.



4. Click Next

5. In the next screen I am just going to select a Development control plane



6. Click Next

7. Edit the default-node-pool and add 2 worker nodes instead of just 1 as shown below



8. Click "Create"

9. This will take you to a screen where your cluster is created. This can take at least 20 minutes, so be patient. Progress is shown as per below



10. If we switch over to the AWS console we will start to see some running instances and other cloud components being created, as shown in the images below




11. Eventually the cluster will be created and you are taken to a summary screen for your cluster. It will take a few minutes for all "Agent and extensions health" checks to show up green, so refresh the page several times until everything shows up green as per below.

Note: This can take up to 10 minutes so be patient




12. To access this cluster using "kubectl", use the "Access this Cluster" button in the top right-hand corner, which will take you to a screen as follows. Click "Download kubeconfig file" and "Tanzu Mission Control CLI", as you will need both of those files, and save them locally



13. make the "tmc" CLI executable and save to your $PATH as shown below

$ chmod +x tmc
$ sudo mv tmc /usr/local/bin

14. Access the cluster using "kubectl" as follows
  
$ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get namespaces
NAME                STATUS   AGE
default             Active   19m
kube-node-lease     Active   19m
kube-public         Active   19m
kube-system         Active   19m
vmware-system-tmc   Active   17m

Note: You will be taken to a web page to authenticate, and once that's done you're good to go as shown below


15. You can view the pods created to allow access for the TMC agent as follows
  
$ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get pods --namespace=vmware-system-tmc
NAME                                     READY   STATUS      RESTARTS   AGE
agent-updater-7b47c659d-8h2mh            1/1     Running     0          25m
agentupdater-workload-1581415620-csz5p   0/1     Completed   0          35s
data-protection-769994df65-6cgfh         1/1     Running     0          24m
extension-manager-657b467c-k4fkl         1/1     Running     0          25m
extension-updater-c76785dc9-vnmdl        1/1     Running     0          25m
inspection-extension-79dcff47f6-7lm5r    1/1     Running     0          24m
intent-agent-7bdf6c8bd4-kgm46            1/1     Running     0          24m
policy-sync-extension-8648685fc7-shn5g   1/1     Running     0          24m
policy-webhook-78f5699b76-bvz5f          1/1     Running     1          24m
policy-webhook-78f5699b76-td74b          1/1     Running     0          24m
sync-agent-84f5f8bcdc-mrc9p              1/1     Running     0          24m

So if you got this far, you have now attached a cluster and created a cluster from scratch, all from VMware TMC, and that's just the start.

Soon I will show how to add some policies to our clusters now that we have them under management.

More Information

Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

VMware Tanzu Mission Control
https://cloud.vmware.com/tanzu-mission-control
Categories: Fusion Middleware

Taking VMware Tanzu Mission Control for a test drive

Pas Apicella - Mon, 2020-02-10 19:53
You may or may not have heard of Tanzu Mission Control (TMC), part of the new VMware Tanzu offering, which will help you build, run and manage modern apps. To find out more about Tanzu Mission Control, here is the blog link:

https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

In this blog I show you how easily you can use TMC to monitor your existing k8s clusters. Keep in mind TMC can also create k8s clusters for you, but here we will use the "Attach Cluster" part of TMC. The demo is as follows.

1. Of course you will need an account on TMC, which for this demo I already have. Once logged in you will see a home screen as follows



2. In the right-hand corner there is an "Attach Cluster" button; click this to attach an existing cluster to TMC. Enter some cluster details; in this case I am attaching a k8s cluster on GKE and giving it the name "pas-gke-cluster".


3. Click the "Register" button which takes you to a screen which allows you to install the VMware Tanzu Mission Control agent. This is simply done by using "kubectl apply ..." on your k8s cluster which allows an agent to communicate back to TMC itself. Everything is created in a namespace called "vmware-system-tmc"



4. Once you have run the "kubectl apply .." on your cluster you can verify the status of the pods and other components installed as follows

$ kubectl get all --namespace=vmware-system-tmc

Or you could just check the status of the various pods as shown below and assume everything else was created OK
  
$ kubectl get pods --namespace=vmware-system-tmc
NAME                                     READY   STATUS      RESTARTS   AGE
agent-updater-67bb5bb9c6-khfwh           1/1     Running     0          74m
agentupdater-workload-1581383460-5dsx9   0/1     Completed   0          59s
data-protection-657d8bf96c-v627g         1/1     Running     0          73m
extension-manager-857d46c6c-zfzbj        1/1     Running     0          74m
extension-updater-6ddd9858cf-lr88r       1/1     Running     0          74m
inspection-extension-789bb48b6-mnlqj     1/1     Running     0          73m
intent-agent-cfb49d788-cq8tk             1/1     Running     0          73m
policy-sync-extension-686c757989-jftjc   1/1     Running     0          73m
policy-webhook-5cdc7b87dd-8shlp          1/1     Running     0          73m
policy-webhook-5cdc7b87dd-fzz6s          1/1     Running     0          73m
sync-agent-84bd6c7bf7-rtzcn              1/1     Running     0          73m

5. Now at this point click on "Verify Connection" button to confirm the agent in your k8s cluster is able to communicate with TMC

6. Now let's search for our cluster on the "Clusters" page as shown below



7. Click on "pas-gke-cluster" and you will be taken to an Overview page as shown below. Ensure all green tick boxes are in place this may take a few minutes so refresh the page as needed



8. This being an empty cluster, I will create a deployment with 2 pods so we can see how TMC shows this workload in the UI. These kubectl commands should work on any cluster, as the image is on Docker Hub

$ kubectl run pbs-deploy --image=pasapples/pbs-demo-image --replicas=2 --port=8080
$ kubectl expose deployment pbs-deploy --type=LoadBalancer --port=80 --target-port=8080 --name=pbs-demo-service

9. Test the workload (Although this isn't really required)

$ echo "http://`kubectl get svc pbs-demo-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`/customers/1"
http://104.197.202.165/customers/1

$ http http://104.197.202.165/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Tue, 11 Feb 2020 01:43:26 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://104.197.202.165/customers/1"
        },
        "self": {
            "href": "http://104.197.202.165/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

10. Back on the TMC UI, click on "Workloads". You should see our deployment as per below


11. Click on the deployment "pbs-deploy" to see the status of the pods created as part of the deployment replica set plus the YAML of the deployment itself


12. Of course this is just scratching the surface, but from the other tabs you can see the cluster nodes, namespaces and other information as required, not just for your workloads but also for the cluster itself




One thing to note here is that when I attach a cluster as shown in this demo, the lifecycle of the cluster (for example, upgrades) can't be managed or performed by TMC. In the next post I will show how "Create Cluster" is able to control the lifecycle of the cluster as well, as this time TMC will actually create the cluster for us.

Stay tuned!!!

More Information

Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

VMware Tanzu Mission Control
https://cloud.vmware.com/tanzu-mission-control
Categories: Fusion Middleware

AZ-104 | Microsoft Azure Certification Exam

Online Apps DBA - Mon, 2020-02-10 07:18

New Version of AZ-103 is now AZ-104 Recently, Microsoft announced a new version of the AZ-103 exam. This new version, known as AZ-104, will be introduced in late March/early April. For people who have already started preparing for AZ-103, the good news is that they will still be able to take this […]

The post AZ-104 | Microsoft Azure Certification Exam appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How SQL Server MVCC compares to Oracle and PostgreSQL

Yann Neuhaus - Sun, 2020-02-09 13:42
By Franck Pachot

Microsoft SQL Server implemented MVCC in 2005; this has been proven to be the best approach for transaction isolation (the I in ACID) in OLTP. But are you sure that writers do not block readers with READ_COMMITTED_SNAPSHOT? I'll show here that some reads are still blocked by locked rows, contrary to the precursors of MVCC like PostgreSQL and Oracle.

For this demo, I run SQL Server 2019 RHEL image on docker in an Oracle Cloud compute running OEL7.7 as explained in the previous post. If you don’t have the memory limit mentioned, you can simply run:

docker run -d -e "ACCEPT_EULA=Y" -e 'MSSQL_PID=Express' -p 1433:1433 -e 'SA_PASSWORD=**P455w0rd**' --name mssql mcr.microsoft.com/mssql/rhel/server:2019-latest
time until docker logs mssql | grep -C10 "Recovery is complete." ; do sleep 1 ; done

Test scenario description

Here is what I’ll run in a first session:

  1. create a DEMO database
  2. (optional) set MVCC with the Read Committed Snapshot isolation level
  3. create a DEMO table with two rows. One with “a”=1 and one with “a”=2
  4. (optional) build an index on column “a”
  5. update the first line where “a”=1


cat > session1.sql <<'SQL'
drop database if exists DEMO;
create database DEMO;
go
use DEMO;
go
-- set MVCC to read snapshot rather than locked current --
-- alter database DEMO set READ_COMMITTED_SNAPSHOT on;
go
drop table if exists DEMO;
create table DEMO(id int primary key, a int not null, b int);
begin transaction;
insert into DEMO values(1,1,1);
insert into DEMO values(2,2,2);
commit;
go
select * from DEMO;
go
-- index to read only rows that we want to modify --
-- create index DEMO_A on DEMO(a);
go
begin transaction;
update DEMO set b=b+1 where a=1;
go
SQL

I’ll run it in the background (you can also run it in another terminal) where it waits 60 seconds before quitting:

( cat session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( cat session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 27759
[root@instance-20200208-1719 ~]# Feb 09 17:05:43 drop database if exists DEMO;
Feb 09 17:05:43 create database DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 use DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 Changed database context to 'DEMO'.
Feb 09 17:05:43 -- set MVCC to read snapshot rather than locked current --
Feb 09 17:05:43 -- alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 17:05:43
Feb 09 17:05:43 drop table if exists DEMO;
Feb 09 17:05:43 create table DEMO(id int primary key, a int not null, b int);
Feb 09 17:05:43 begin transaction;
Feb 09 17:05:43 insert into DEMO values(1,1,1);
Feb 09 17:05:43 insert into DEMO values(2,2,2);
Feb 09 17:05:43 commit;
Feb 09 17:05:43
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)
Feb 09 17:05:43 select * from DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 id          a           b
Feb 09 17:05:43 ----------- ----------- -----------
Feb 09 17:05:43           1           1           1
Feb 09 17:05:43           2           2           2
Feb 09 17:05:43
Feb 09 17:05:43 (2 rows affected)
Feb 09 17:05:43 -- index to read only rows that we want to modify --
Feb 09 17:05:43 -- create index DEMO_A on DEMO(a);
Feb 09 17:05:43
Feb 09 17:05:43 begin transaction;
Feb 09 17:05:43 update DEMO set b=b+1 where a=1;
Feb 09 17:05:43
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)

SQL Server default

While this session has locked the first row, I'll run the following, reading a different row (a=2) from the one that is currently locked by the other transaction:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go

This hangs until the first transaction is canceled:

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO | ts
-- read access the row that is not locked
 select * from DEMO where a=2;
go
Feb 09 17:06:42
Feb 09 17:06:42
Feb 09 17:06:42
Feb 09 17:06:42 Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Feb 09 17:06:42
Feb 09 17:06:42 -- read access the row that is not locked
Feb 09 17:06:42  select * from DEMO where a=2;
Feb 09 17:06:42
Feb 09 17:06:42 id          a           b
Feb 09 17:06:42 ----------- ----------- -----------
Feb 09 17:06:42           2           2           2
Feb 09 17:06:42
Feb 09 17:06:42 (1 rows affected)

The “Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C” message is fron the first session and only then my foreground session was able to continue. This is the worst you can encounter with the default isolation level in SQL Server where writes and reads are blocking each other even when not touching the same row (I read the a=2 row and only the a=1 one was locked). The reason for this is that I have no index for this predicate and I have to read all rows in order to find mine:

set showplan_text on ;
go
select * from DEMO where a=2;
go

go
Feb 09 17:07:24 set showplan_text on ;
Feb 09 17:07:24
select * from DEMO where a=2;
go
Feb 09 17:07:30 select * from DEMO where a=2;
Feb 09 17:07:30
Feb 09 17:07:30 StmtText
Feb 09 17:07:30 -------------------------------
Feb 09 17:07:30 select * from DEMO where a=2;
Feb 09 17:07:30
Feb 09 17:07:30 (1 rows affected)
Feb 09 17:07:30 StmtText
Feb 09 17:07:30 ---------------------------------------------------------------------------------------------------------------------------------------------------
Feb 09 17:07:30   |--Clustered Index Scan(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F2AD8547F]), WHERE:([DEMO].[dbo].[DEMO].[a]=CONVERT_IMPLICIT(int,[@1],0)))
Feb 09 17:07:30
Feb 09 17:07:30 (1 rows affected)

Now, in order to avoid this situation, I'll run the same but with an index on column "a".
It was commented out in the session1.sql script, so I just re-ran everything without those comments:

( sed -e '/create index/s/--//' session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

I’m running the same, now with a 3 seconds timeout so that I don’t have to wait for my background session to terminate:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
 select * from DEMO where a=2;
 go
Feb 09 17:29:25 -- read access the row that is not locked
Feb 09 17:29:25  select * from DEMO where a=2;
Feb 09 17:29:25
Feb 09 17:29:25 Timeout expired

Here I’m blocked again like in the previous scenario because the index was not used.
I can force the index access with a hint:

-- read access the row that is not locked forcing index access
select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
go

-- read access the row that is not locked forcing index access
 select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
 go
Feb 09 17:29:30 -- read access the row that is not locked forcing index access
Feb 09 17:29:30  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:30
Feb 09 17:29:30 id          a           b
Feb 09 17:29:30 ----------- ----------- -----------
Feb 09 17:29:30           2           2           2
Feb 09 17:29:30
Feb 09 17:29:30 (1 rows affected)

This didn’t wait because the index access didn’t have to to to the locked row.

However, when I read the same row that is concurrently locked, I have to wait:

-- read access the row that is locked
select * from DEMO where a=1;
go

 -- read access the row that is locked
 select * from DEMO where a=1;
 go
Feb 09 17:29:34  -- read access the row that is locked
Feb 09 17:29:34  select * from DEMO where a=1;
Feb 09 17:29:34
Feb 09 17:29:34 Timeout expired

Here is the confirmation that the index was used only with the hint:

set showplan_text on ;
go
select * from DEMO where a=2;
go
select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
go

Feb 09 17:29:50 set showplan_text on ;
Feb 09 17:29:50
 select * from DEMO where a=2;
 go
Feb 09 17:29:50  select * from DEMO where a=2;
Feb 09 17:29:50
Feb 09 17:29:50 StmtText
Feb 09 17:29:50 --------------------------------
Feb 09 17:29:50  select * from DEMO where a=2;
Feb 09 17:29:50
Feb 09 17:29:50 (1 rows affected)
Feb 09 17:29:50 StmtText
Feb 09 17:29:50 --------------------------------------------------------------------------------------------------------------------------
Feb 09 17:29:50   |--Clustered Index Scan(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F102B4054]), WHERE:([DEMO].[dbo].[DEMO].[a]=(2)))
Feb 09 17:29:50
Feb 09 17:29:50 (1 rows affected)
 select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
 go
Feb 09 17:29:52  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:52
Feb 09 17:29:52 StmtText
Feb 09 17:29:52 -----------------------------------------------------
Feb 09 17:29:52  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:52
Feb 09 17:29:52 (1 rows affected)
Feb 09 17:29:52 StmtText                                                                                                                                                
Feb 09 17:29:52 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Feb 09 17:29:52   |--Nested Loops(Inner Join, OUTER REFERENCES:([DEMO].[dbo].[DEMO].[id]))                                                                              
Feb 09 17:29:52        |--Index Seek(OBJECT:([DEMO].[dbo].[DEMO].[DEMO_A]), SEEK:([DEMO].[dbo].[DEMO].[a]=(2)) ORDERED FORWARD)                                         
Feb 09 17:29:52        |--Clustered Index Seek(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F102B4054]), SEEK:([DEMO].[dbo].[DEMO].[id]=[DEMO].[dbo].[DEMO].[id]) LOOKUP ORDERED FORWARD)
Feb 09 17:29:52
Feb 09 17:29:52 (3 rows affected)

So, with the default isolation level and index access, we can read a row that is not locked. The last query was blocked for the SELECT * FROM DEMO WHERE A=1 because we are in the legacy, and default, mode where readers are blocked by writers.

SQL Server MVCC

In order to improve this situation, Microsoft has implemented MVCC. With it, we do not need to read the current version of the rows (which requires waiting when they are concurrently modified) because past versions of the rows are stored in TEMPDB and we can read a past snapshot of them. Typically, with the READ COMMITTED SNAPSHOT isolation level, we read a snapshot as of the point in time our query began.
In general, we need to read all rows from a consistent point in time. This can be the one where our query started, and then while the query is running, a past version may be reconstructed to remove concurrent changes. Or, when there is no MVCC to rebuild this snapshot, this consistent point can only be the one when our query is completed. This means that while we read rows, we must lock them to be sure that they stay the same until the end of our query. Of course, even with MVCC there are cases where we want to read the latest value and then we will lock with something like a SELECT FOR UPDATE. But that’s not the topic here.
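
To illustrate this last point, here is a minimal sketch (my addition, reusing the DEMO table above, not part of the original scenario): the UPDLOCK table hint is the closest T-SQL equivalent of a SELECT FOR UPDATE, so it reads and locks the current version and would block on the concurrently locked row, while the plain SELECT reads the snapshot:

-- assuming session1.sql still holds its lock on the a=1 row
select * from DEMO where a=2;                -- snapshot read: returns immediately
select * from DEMO with (UPDLOCK) where a=1; -- current read: blocks until the writer ends
go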

I’ll run the same test as the first one, but now have the database with READ_COMMITTED_SNAPSHOT on:

( sed -e '/READ_COMMITTED/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( sed -e '/READ_COMMITTED/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 38943
[root@instance-20200208-1719 ~]# Feb 09 18:21:19 drop database if exists DEMO;
Feb 09 18:21:19 create database DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 use DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 Changed database context to 'DEMO'.
Feb 09 18:21:19 -- set MVCC to read snapshot rather than locked current --
Feb 09 18:21:19  alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 18:21:19
Feb 09 18:21:19 drop table if exists DEMO;
Feb 09 18:21:19 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:21:19 begin transaction;
Feb 09 18:21:19 insert into DEMO values(1,1,1);
Feb 09 18:21:19 insert into DEMO values(2,2,2);
Feb 09 18:21:19 commit;
Feb 09 18:21:19
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)
Feb 09 18:21:19 select * from DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 id          a           b
Feb 09 18:21:19 ----------- ----------- -----------
Feb 09 18:21:19           1           1           1
Feb 09 18:21:19           2           2           2
Feb 09 18:21:19
Feb 09 18:21:19 (2 rows affected)
Feb 09 18:21:19 -- index to read only rows that we want to modify --
Feb 09 18:21:19 -- create index DEMO_A on DEMO(a);
Feb 09 18:21:19
Feb 09 18:21:19 begin transaction;
Feb 09 18:21:19 update DEMO set b=b+1 where a=1;
Feb 09 18:21:19
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)

And then running the same scenario:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is locked
select * from DEMO where a=1;
go
-- write access on the row that is not locked
delete from DEMO where a=2;
go

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts

-- read access the row that is not locked
select * from DEMO where a=2;
go
Feb 09 18:21:36 -- read access the row that is not locked
Feb 09 18:21:36 select * from DEMO where a=2;
Feb 09 18:21:36
Feb 09 18:21:36 id          a           b
Feb 09 18:21:36 ----------- ----------- -----------
Feb 09 18:21:36           2           2           2
Feb 09 18:21:36
Feb 09 18:21:36 (1 rows affected)

-- read access the row that is locked
select * from DEMO where a=1;
go
Feb 09 18:21:47 -- read access the row that is locked
Feb 09 18:21:47 select * from DEMO where a=1;
Feb 09 18:21:47
Feb 09 18:21:47 id          a           b
Feb 09 18:21:47 ----------- ----------- -----------
Feb 09 18:21:47           1           1           1
Feb 09 18:21:47
Feb 09 18:21:47 (1 rows affected)

-- write access on the row that is not locked
delete from DEMO where a=2;
go
Feb 09 18:22:01 -- write access on the row that is not locked
Feb 09 18:22:01 delete from DEMO where a=2;
Feb 09 18:22:01
Feb 09 18:22:01 Timeout expired

Ok, that’s better. I confirm that readers are not blocked by writers. But the modification on “A”=2 was blocked. This is not a writer-writer situation because we are not modifying the row that is locked by the other session. Here, I have no index on “A” and then the delete statement must first read the table and had to read this locked row. And obviously, this read is blocked. It seems that DML must read the current version of the row even when MVCC is available. That means that reads can be blocked by writes when those reads are in a writing transaction.

Last test on SQL Server: the same, with MVCC, and the index on “A”

( sed -e '/READ_COMMITTED/s/--//' -e '/create index/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( sed -e '/READ_COMMITTED/s/--//' -e '/create index/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 40320
[root@instance-20200208-1719 ~]#
[root@instance-20200208-1719 ~]# Feb 09 18:30:15 drop database if exists DEMO;
Feb 09 18:30:15 create database DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 use DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 Changed database context to 'DEMO'.
Feb 09 18:30:15 -- set MVCC to read snapshot rather than locked current --
Feb 09 18:30:15  alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 18:30:15
Feb 09 18:30:15 drop table if exists DEMO;
Feb 09 18:30:15 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:30:15 begin transaction;
Feb 09 18:30:15 insert into DEMO values(1,1,1);
Feb 09 18:30:15 insert into DEMO values(2,2,2);
Feb 09 18:30:15 commit;
Feb 09 18:30:15
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)
Feb 09 18:30:15 select * from DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 id          a           b
Feb 09 18:30:15 ----------- ----------- -----------
Feb 09 18:30:15           1           1           1
Feb 09 18:30:15           2           2           2
Feb 09 18:30:15
Feb 09 18:30:15 (2 rows affected)
Feb 09 18:30:15 -- index to read only rows that we want to modify --
Feb 09 18:30:15  create index DEMO_A on DEMO(a);
Feb 09 18:30:15
Feb 09 18:30:15 begin transaction;
Feb 09 18:30:15 update DEMO set b=b+1 where a=1;
Feb 09 18:30:15
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)

Here is my full scenario to see where it blocks:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is locked
select * from DEMO where a=1;
go
-- write access on the row that is not locked
delete from DEMO where a=2;
go
-- write access on the row that is locked
delete from DEMO where a=1;
go

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is not locked
select * from DEMO where a=2;

id          a           b
----------- ----------- -----------
          2           2           2

(1 rows affected)

-- read access the row that is locked
select * from DEMO where a=1;
go
-- read access the row that is locked
select * from DEMO where a=1;

id          a           b
----------- ----------- -----------
          1           1           1

(1 rows affected)

-- write access on the row that is not locked
delete from DEMO where a=2;
go
-- write access on the row that is not locked
delete from DEMO where a=2;


(1 rows affected)

-- write access on the row that is locked
delete from DEMO where a=1;
go
-- write access on the row that is locked
delete from DEMO where a=1;

Timeout expired

Finally, the only blocking situation here is when I want to write on the same row. The index access reduces the risk of being blocked.

In summary, we can achieve the best concurrency with READ_COMMITTED_SNAPSHOT isolation level, and ensuring that we read only the rows we will update, with proper indexing and maybe hinting. This is, in my opinion, very important to know because we rarely cover those situations during integration tests. But they can happen quickly in production with high load.

PostgreSQL

Let’s do the same with PostgreSQL which is natively MVCC:

cat > session1.sql <<'SQL'
drop database if exists DEMO;
create database DEMO;
\c demo
drop table if exists DEMO;
create table DEMO(id int primary key, a int not null, b int);
begin transaction;
insert into DEMO values(1,1,1);
insert into DEMO values(2,2,2);
commit;
select * from DEMO;
begin transaction;
update DEMO set b=b+1 where a=1;
SQL

No specific settings, and no index created here.

( cat session1.sql ; sleep 120 ; echo "commit;") | psql -e | ts &

-bash-4.2$ ( cat session1.sql ; sleep 120 ; echo "commit;") | psql -e | ts &
[1] 31125
-bash-4.2$
-bash-4.2$ Feb 09 18:42:48 drop database if exists DEMO;
Feb 09 18:42:48 DROP DATABASE
Feb 09 18:42:48 create database DEMO;
Feb 09 18:42:49 CREATE DATABASE
Feb 09 18:42:49 You are now connected to database "demo" as user "postgres".
Feb 09 18:42:49 drop table if exists DEMO;
NOTICE:  table "demo" does not exist, skipping
Feb 09 18:42:49 DROP TABLE
Feb 09 18:42:49 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:42:49 CREATE TABLE
Feb 09 18:42:49 begin transaction;
Feb 09 18:42:49 BEGIN
Feb 09 18:42:49 insert into DEMO values(1,1,1);
Feb 09 18:42:49 INSERT 0 1
Feb 09 18:42:49 insert into DEMO values(2,2,2);
Feb 09 18:42:49 INSERT 0 1
Feb 09 18:42:49 commit;
Feb 09 18:42:49 COMMIT
Feb 09 18:42:49 select * from DEMO;
Feb 09 18:42:49  id | a | b
Feb 09 18:42:49 ----+---+---
Feb 09 18:42:49   1 | 1 | 1
Feb 09 18:42:49   2 | 2 | 2
Feb 09 18:42:49 (2 rows)
Feb 09 18:42:49
Feb 09 18:42:49 begin transaction;
Feb 09 18:42:49 BEGIN
Feb 09 18:42:49 update DEMO set b=b+1 where a=1;
Feb 09 18:42:49 UPDATE 1

While the transaction updating the first row is in the background, I run the following readers and writers:

psql demo | ts
set statement_timeout=3000;
-- read access the row that is not locked
select * from DEMO where a=2;
-- read access the row that is locked
select * from DEMO where a=1;
-- write access on the row that is not locked
delete from DEMO where a=2;
-- write access on the row that is locked
delete from DEMO where a=1;

-bash-4.2$ psql demo | ts
set statement_timeout=3000;
Feb 09 18:43:00 SET
-- read access the row that is not locked
select * from DEMO where a=2;
Feb 09 18:43:08  id | a | b
Feb 09 18:43:08 ----+---+---
Feb 09 18:43:08   2 | 2 | 2
Feb 09 18:43:08 (1 row)
Feb 09 18:43:08
-- read access the row that is locked
select * from DEMO where a=1;
Feb 09 18:43:16  id | a | b
Feb 09 18:43:16 ----+---+---
Feb 09 18:43:16   1 | 1 | 1
Feb 09 18:43:16 (1 row)
Feb 09 18:43:16
-- write access on the row that is not locked
delete from DEMO where a=2;
Feb 09 18:43:24 DELETE 1
-- write access on the row that is locked
delete from DEMO where a=1;

ERROR:  canceling statement due to statement timeout
CONTEXT:  while deleting tuple (0,1) in relation "demo"

Nothing is blocked except, of course, when modifying the row that is locked.

Oracle Database

One of the many things I learned from Tom Kyte when I was reading AskTom regularly is how to build the simplest test cases. And with Oracle there is no need to run multiple sessions to observe multi-transaction concurrency. I can do it with an autonomous transaction in one session, and one advantage is that I can share a dbfiddle example:
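
The embedded dbfiddle is not reproduced here, but a minimal sketch of that kind of single-session test case (my reconstruction, with its own line numbering) is: the outer transaction locks the a=1 row, then an autonomous transaction replays the reader and writer statements against it:

create table DEMO(id int primary key, a int not null, b int);
insert into DEMO values(1,1,1);
insert into DEMO values(2,2,2);
commit;
update DEMO set b=b+1 where a=1;  -- the outer transaction locks the a=1 row
declare
 pragma autonomous_transaction;
 n int;
begin
 select count(*) into n from DEMO where a=2;  -- non-blocking MVCC read
 select count(*) into n from DEMO where a=1;  -- non-blocking MVCC read
 delete from DEMO where a=2;                  -- row not locked: succeeds
 delete from DEMO where a=1;                  -- waits on its parent transaction -> ORA-00060
 rollback;
end;
/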

In the dbfiddle, the deadlock at line 14 means that only the "delete where a=1" encountered a blocking situation with the "update where a=1". All previous statements, selects on any row and updates of other rows, were executed without conflict.

A DML statement has two phases: one to find the rows and a second one to modify them. A DELETE or UPDATE in Oracle and PostgreSQL runs the first phase in snapshot mode: non-blocking MVCC. The second must, of course, modify the current version. This is a very complex mechanism because it may require a retry (restart) when the current version does not match the consistent snapshot that was used for filtering. Both PostgreSQL and Oracle can ensure this write consistency without the need to block the reads. SQL Server has implemented MVCC more recently and provides non-blocking reads only for SELECT reads. A read can still be in a blocking situation for the query phase of an update statement.

The article How SQL Server MVCC compares to Oracle and PostgreSQL appeared first on the dbi services blog.

Basic Replication -- 13 : Some Interesting SYS tables

Hemant K Chitale - Sun, 2020-02-09 08:45
I found an interesting SQL in the AWR report from my previous blog post.

What do you think this SQL statement does?

DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

Here are some interesting objects (I don't know which Oracle release they started appearing in):

SQL> l
1 select object_name, object_type
2 from dba_objects
3 where owner = 'SYS'
4 and object_name like 'MVREF$%'
5* order by 2,1
SQL> /

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
MVREF$_STATS_SEQ               SEQUENCE
MVREF$_CHANGE_STATS            TABLE
MVREF$_RUN_STATS               TABLE
MVREF$_STATS                   TABLE
MVREF$_STATS_PARAMS            TABLE
MVREF$_STATS_SYS_DEFAULTS      TABLE
MVREF$_STMT_STATS              TABLE

7 rows selected.

SQL>


Right now, the SYS.MVREF$_STMT_STATS table appears to be empty.
SQL> desc SYS.MVREF$_STMT_STATS
 Name                           Null?    Type
 ------------------------------ -------- ----------------------------
 MV_OBJ#                        NOT NULL NUMBER
 REFRESH_ID                     NOT NULL NUMBER
 STEP                           NOT NULL NUMBER
 SQLID                          NOT NULL VARCHAR2(14)
 STMT                           NOT NULL CLOB
 EXECUTION_TIME                 NOT NULL NUMBER
 EXECUTION_PLAN                          SYS.XMLTYPE STORAGE BINARY

SQL>


It would be interesting to know how Oracle is using this and the other MVREF$% tables.
SYS.MVREF$_CHANGE_STATS obviously captures DML operations.

SYS.MVREF$_RUN_STATS captures the last refresh operation (does it capture only the last operation?). And what does SYS.MVREF$_STATS capture?

SQL> l
1 select *
2 from SYS.MVREF$_RUN_STATS
3* where MVIEWS='"HEMANT"."MV_1"'
SQL> /

RUN_OWNER_USER# REFRESH_ID NUM_MVS_TOTAL NUM_MVS_CURRENT MVIEWS BASE_TABLES METHOD ROLLBACK P R PURGE_OPTION
--------------- ---------- ------------- --------------- ------------------ ------------ ------ -------- - - ------------
PARALLELISM HEAP_SIZE A N O NUMBER_OF_FAILURES START_TIME END_TIME ELAPSED_TIME LOG_SETUP_TIME
----------- ---------- - - - ------------------ -------------------------- -------------------------- ------------ --------------
LOG_PURGE_TIME C TXNFLAG ON_COMMIT_FLAG
-------------- - ---------- --------------
106 245 1 1 "HEMANT"."MV_1" Y N 1
0 0 Y N N 0 09-FEB-20 09.55.33.000000 09-FEB-20 09.55.49.000000 16 1
PM PM
9 Y 0 0


SQL>
SQL> l
1 select mviews, count(*) from sys.mvref$_run_Stats group by mviews
2* order by 1
SQL> /

MVIEWS                                       COUNT(*)
------------------------------------------ ----------
"HEMANT"."MV_1"                                      1
"HEMANT"."MV_2"                                      8
"HEMANT"."MV_DEPT", "HEMANT"."MV_EMP"                1
"HEMANT"."MV_FAST_NOT_POSSIBLE"                      1
"HEMANT"."MV_OF_SOURCE"                              1
"HEMANT"."NEW_MV"                                    2
"HEMANT"."NEW_MV_2_1"                                1
"HEMANT"."NEW_MV_2_2"                                2
"HR"."HR_MV_ON_COMMIT"                               1
"HR"."MY_LARGE_REPLICA"                              1

10 rows selected.

SQL>
SQL> desc sys.mvref$_run_stats
 Name                           Null?    Type
 ------------------------------ -------- --------------
 RUN_OWNER_USER#                NOT NULL NUMBER
 REFRESH_ID                     NOT NULL NUMBER
 NUM_MVS_TOTAL                  NOT NULL NUMBER
 NUM_MVS_CURRENT                NOT NULL NUMBER
 MVIEWS                                  VARCHAR2(4000)
 BASE_TABLES                             VARCHAR2(4000)
 METHOD                                  VARCHAR2(4000)
 ROLLBACK_SEG                            VARCHAR2(4000)
 PUSH_DEFERRED_RPC                       CHAR(1)
 REFRESH_AFTER_ERRORS                    CHAR(1)
 PURGE_OPTION                            NUMBER
 PARALLELISM                             NUMBER
 HEAP_SIZE                               NUMBER
 ATOMIC_REFRESH                          CHAR(1)
 NESTED                                  CHAR(1)
 OUT_OF_PLACE                            CHAR(1)
 NUMBER_OF_FAILURES                      NUMBER
 START_TIME                              TIMESTAMP(6)
 END_TIME                                TIMESTAMP(6)
 ELAPSED_TIME                            NUMBER
 LOG_SETUP_TIME                          NUMBER
 LOG_PURGE_TIME                          NUMBER
 COMPLETE_STATS_AVAILABLE                CHAR(1)
 TXNFLAG                                 NUMBER
 ON_COMMIT_FLAG                          NUMBER

SQL> desc sys.mvref$_stats
 Name                           Null?    Type
 ------------------------------ -------- --------------
 MV_OBJ#                        NOT NULL NUMBER
 REFRESH_ID                     NOT NULL NUMBER
 ATOMIC_REFRESH                 NOT NULL CHAR(1)
 REFRESH_METHOD                          VARCHAR2(30)
 REFRESH_OPTIMIZATIONS                   VARCHAR2(4000)
 ADDITIONAL_EXECUTIONS                   VARCHAR2(4000)
 START_TIME                              TIMESTAMP(6)
 END_TIME                                TIMESTAMP(6)
 ELAPSED_TIME                            NUMBER
 LOG_SETUP_TIME                          NUMBER
 LOG_PURGE_TIME                          NUMBER
 INITIAL_NUM_ROWS                        NUMBER
 FINAL_NUM_ROWS                          NUMBER
 NUM_STEPS                               NUMBER
 REFMET                                  NUMBER
 REFFLG                                  NUMBER

SQL>
SQL> select mv_obj#, count(*)
2 from sys.mvref$_stats
3 group by mv_obj#
4 /

   MV_OBJ#   COUNT(*)
---------- ----------
     73223          1
     73170          1
     73065          1
     73244          1
     73079          8
     73094          1
     73197          2
     73113          2
     73188          1
     73167          1
     73110          1

11 rows selected.

SQL>
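
To map these MV_OBJ# values back to the MView names, I would expect a simple join to DBA_OBJECTS to do it (this is my own query, not something from the refresh internals):

select o.owner, o.object_name, count(*)
from sys.mvref$_stats s, dba_objects o
where o.object_id = s.mv_obj#
group by o.owner, o.object_name
order by 1, 2;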
SQL> desc sys.mvref$_stats_params
 Name                           Null?    Type
 ------------------------------ -------- --------------
 MV_OWNER                       NOT NULL VARCHAR2(128)
 MV_NAME                        NOT NULL VARCHAR2(128)
 COLLECTION_LEVEL               NOT NULL NUMBER
 RETENTION_PERIOD               NOT NULL NUMBER

SQL> select count(*)
2 from sys.mvref$_stats_params;

  COUNT(*)
----------
         0

SQL> desc sys.mvref$_stats_sys_defaults
 Name                           Null?    Type
 ------------------------------ -------- --------------
 COLLECTION_LEVEL               NOT NULL NUMBER
 RETENTION_PERIOD               NOT NULL NUMBER

SQL> select * from sys.mvref$_stats_sys_defaults
2 /

COLLECTION_LEVEL RETENTION_PERIOD
---------------- ----------------
               1               31

SQL>



Oracle has been introducing some more "internal" tables to trace MView Refresh operations.


Categories: DBA Blogs

Basic Replication -- 12 : MV Refresh Captured in AWR

Hemant K Chitale - Sun, 2020-02-09 08:40
Building on the example of an index having been created on a Materialized View in my previous blog post in this series, I've captured some information from the AWR report in 19c when this code is executed:

SQL> delete source_table_1;

72454 rows deleted.

SQL> insert into source_table_1 select object_id, owner, object_name from source_table_2;

72366 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_mview.refresh('MV_1');

PL/SQL procedure successfully completed.


(Note that "MV_OF_SOURCE" is not dependent on SOURCE_TABLE_1 and as really had no rows to refresh, did not cause any load).

Some information in the AWR Report (note that this is 19.3) :

SQL ordered by Elapsed Time             DB/Inst: ORCLCDB/ORCLCDB  Snaps: 54-55
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 108.1% of Total DB Time (s): 30
-> Captured PL/SQL account for 85.2% of Total DB Time (s): 30

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU     %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            16.1              1         16.09   53.5   12.8   21.6 2uusn1kyhm9h8
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_1'); END;

             8.7              1          8.66   28.8    5.3   13.6 8chh7ksnytb52
PDB: ORCLPDB1
delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

             4.5              1          4.55   15.1   17.3   75.6 57ctmbtabx1rw
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_OF_SOURCE'); END;

             4.0              1          3.96   13.2   37.2   26.1 dsyxhpb9annru
Module: SQL*Plus
PDB: ORCLPDB1
delete source_table_1

             3.7        144,820          0.00   12.3   36.7    8.3 9ucb4uxnvzxc8
Module: SQL*Plus
PDB: ORCLPDB1
INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

             3.5              1          3.52   11.7   19.7   45.9 dxnyhyq7sqf8j
PDB: ORCLPDB1
DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

             3.3              1          3.25   10.8   45.2     .6 9n1gw9vpj9248
Module: SQL*Plus
PDB: ORCLPDB1
insert into source_table_1 select object_id, owner, object_name from source_tabl
e_2

             2.3              2          1.14    7.6   18.4   77.4 94z4z19ygx34a
Module: SQL*Plus
PDB: ORCLPDB1
begin sys.dbms_irefstats.run_sa(:1, :2, :3, :4, :5, :6); end;

             2.1              1          2.11    7.0   19.1   21.6 a2sctn32qtwnf
PDB: ORCLPDB1
/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE

             1.7              1          1.67    5.6   50.3   43.5 btqubgr940awu
Module: sqlplus@oracle-19c-vagrant (TNS V1-V3)
PDB: CDB$ROOT
BEGIN dbms_workload_repository.create_snapshot(); END;

             1.3              1          1.33    4.4   27.3     .0 ggaxdw7tpmqjb
PDB: ORCLPDB1
update "HEMANT"."MLOG$_SOURCE_TABLE_1" set snaptime$$ = :1 where snaptime$$ > t
o_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

             0.9             89          0.01    3.1    1.7   98.6 3un99a0zwp4vd
PDB: ORCLPDB1
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,type#,flags,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and
p_obj#=obj#(+) order by order#

             0.5            183          0.00    1.6    6.0   98.3 2sxqgx5hx76qr
PDB: ORCLPDB1
select /*+ rule */ bucket, endpoint, col#, epvalue, epvalue_raw, ep_repeat_count
, endpoint_enc from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by b
ucket

             0.5              2          0.23    1.5   15.0   70.0 6tbg6ydrx9jmm
Module: SQL*Plus
PDB: ORCLPDB1
begin dbms_irefstats.purge_stats_mv_rp(in_time => :1, in_objnum => :2, in_r
etention_period => :3); end;

             0.4              9          0.04    1.3   15.4   69.2 g1s379sraujaq
Module: SQL*Plus
PDB: ORCLPDB1
DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

             0.4              2          0.20    1.3   16.4   76.8 8szmwam7fysa3
Module: SQL*Plus
PDB: ORCLPDB1
insert into wri$_adv_objspace_trend_data select timepoint, space_usage, space_a
lloc, quality from table(dbms_space.object_growth_trend(:1, :2, :3, :4, NULL, N
ULL, NULL, 'FALSE', :5, 'FALSE'))

             0.4             59          0.01    1.3    9.5   97.3 03guhbfpak0w7
PDB: CDB$ROOT
select /*+ index(idl_ub1$ i_idl_ub11) */ piece#,length,piece from idl_ub1$ where
obj#=:1 and part=:2 and version=:3 order by piece#

             0.3              2          0.15    1.0   11.0     .0 a8xypykqc348c
PDB: ORCLPDB1
BEGIN dbms_stats_internal.advisor_setup_obj_filter(:tid, :rid, 'EXECUTE', FAL
SE); END;

             0.3              2          0.15    1.0    8.7     .0 avf5k3k0x0cxn
PDB: ORCLPDB1
insert into stats_advisor_filter_obj$ (rule_id, obj#, flag
s, type) select :rule_id, obj#, :flag_include, :type_expanded
from stats_advisor_filter_obj$ where type = :type_priv
and (bitand(flags, :flag_orcl_owned) = 0 or :get_orcl_objects = 'T')


It is quite interesting that such a large number of operations occur.

Unlike a Trace File, the AWR does not report SQL operations as a chronologically-ordered sequence.  In this case, they are ordered by Elapsed Time per operation.

Also, remember that PL/SQL calls will include the time for "child" SQL calls, so you will encounter double-counting if you add up the figures (e.g. the "dbms_mview.refresh('MV_1');" call included a number of SQL calls, which technically you can identify only if you *trace* the session making this PL/SQL call). However, since there was no other activity in this database, almost everything that happened appears in this AWR extract.
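
For example, the refresh could be traced like this (just a sketch using DBMS_SESSION; I did not run this for this post):

alter session set tracefile_identifier = 'mv_refresh';
exec dbms_session.session_trace_enable(waits=>true, binds=>true);
exec dbms_mview.refresh('MV_1');
exec dbms_session.session_trace_disable;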

The actual calls "delete source_table_1;" and "insert into source_table_1 select object_id, owner, object_name from source_table_2;" were issued *before* the "exec dbms_mview.refresh('MV_1');" and are not "child" calls. The child calls that do appear in the AWR are not necessarily in the chronological order of their execution.

The interesting "child" calls from the "dbms_mview.refresh" call that I would like to point out are :

delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE


In my next post, I'll share some other findings after I found something interesting in the AWR report.


Categories: DBA Blogs

Zero Downtime Migration: Database & Applications (EBS R12) To Oracle Cloud Using Hybrid Data Guard

Online Apps DBA - Sun, 2020-02-09 07:37

Most customers, when migrating applications and databases, want minimal downtime. Hybrid Data Guard for the DB tier and rsync for the apps tier let you sync both database & application constantly from on-premise to Oracle Cloud. When you are ready to move to the Cloud, all you need is to fail over the database to Oracle Cloud, configure […]

The post Zero Downtime Migration: Database & Applications (EBS R12) To Oracle Cloud Using Hybrid Data Guard appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[1Z0-997] Traffic Management In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Sun, 2020-02-09 06:20

Traffic management in OCI is a critical component of the Domain Name System (DNS) that lets us configure routing policies for serving intelligent responses to DNS queries. Check out K21 Academy's blog post at https://k21academy.com/1z099712 that covers: • Overview of Traffic Management in OCI • Traffic Steering Use Cases • OCI Traffic Management Policies • […]

The post [1Z0-997] Traffic Management In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
