Feed aggregator

Query Plan Change Diagnosis Example

Bobby Durrett's DBA Blog - Tue, 2020-01-28 15:02

This week I investigated an issue with a query that was suddenly a lot slower in one test environment than another. It runs in about 2 seconds in the good case, as it does in production, but it was now running more than 10 seconds, which is painfully slow through the GUI and was impacting the team’s ability to finish a software release. I figured out that the query was running with a worse plan, got it back on the good plan, and helped the team finish their testing.

I thought I would write this blog post documenting the tools that I used and my reasoning while trying to resolve the issue. This post may be redundant with some of my other posts because they document similar tools and approaches, but I thought it could not hurt to put together a complete example. I considered just writing about one part of the process to focus on it, but instead I decided on a longer post that shows the overall process with details along the way. This blog is my experience as an Oracle Database Administrator/Architect, and I work for an individual company, so my experiences may not apply to everyone else’s situation. You may work somewhere that has licensed different versions or features of the Oracle database, so your choices would be different. But I can still put out what I am doing and let others apply it to their situation as they can. Also, there are lots of very sharp and hardworking Oracle people out there who write about these same topics and have contributed their own tools to the Oracle community, so I wonder about the value of putting my tools on GitHub and writing about how I use them. Why not use their tools and their approaches? Maybe they will work better? Sure, but I know that my approach and tools work well for me. Also, most of the ideas I use came from books, talks, and posts by the same sharp people I have in mind, so my approach really reflects my translation of ideas that are already out in the Oracle community, with additions of my own; it is not a choice of me versus them. Anyway, I thought I would work through this example in this post so others can review it and find value where they can. Inevitably I will get a lot out of writing it down myself, and it will help me remember what I was thinking when I read this post in the future. Hopefully others will benefit as well.

Here is what I knew when I first looked at the problem. A coworker emailed me the SQL for the problem query and verified that the tables in both test environments were about the same size. Sometimes our test environments can be much smaller than production and different from each other in terms of data volume. Usually, when I get a performance question, I ask for two things: the name of the database and the date and time that the issue started and stopped, including the time zone. In this case I knew the database name but not the time frame. But I figured that they had been working on it during the current day, so I just got an AWR report for that day from 8 am to the present, assuming they worked roughly normal working hours.
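
For anyone following along, I generate AWR reports with the standard script that ships with the database — shown here only as a quick sketch; the report type and snapshot range are whatever covers the window of interest:

-- run from SQL*Plus as a suitably privileged account on the database in question;
-- the script prompts for report type (html/text), number of days, and the
-- begin/end snapshot ids that cover the time window
@?/rdbms/admin/awrrpt.sql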

Fortunately, the AWR report had the problem query as the top query, so I did not have trouble finding it. It helped that I already knew the text of the SQL statement. The main thing I got from the AWR report at first was the SQL_ID value. I already had the SQL text, but it helps to have the SQL_ID to plug into the other scripts I used.

Trimmed down version of AWR report showing problem SQL_ID 4n5ssud3fgsyq

My next step was to run my sqlstat.sql script to see if the problem query, 4n5ssud3fgsyq, had changed plans. (Not sure why I did not use the updated version, sqlstat3.sql.) I manually updated the query with the SQL_ID value:

where ss.sql_id = '4n5ssud3fgsyq'

and then I looked for plan changes in the output:


SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
4n5ssud3fgsyq        65249283 08-JAN-20 05.00.05.769 AM               12         47.4083333
4n5ssud3fgsyq        65249283 08-JAN-20 06.00.25.733 AM               21         44.0260952
4n5ssud3fgsyq        65249283 08-JAN-20 08.00.05.250 AM               16         54.5971875
4n5ssud3fgsyq        65249283 08-JAN-20 11.00.04.846 AM               22         165.703818
4n5ssud3fgsyq        65249283 08-JAN-20 12.00.24.960 PM               23          30.143913
4n5ssud3fgsyq        65249283 08-JAN-20 02.00.05.436 PM                1            298.585
4n5ssud3fgsyq        65249283 08-JAN-20 04.00.49.696 PM                3         323.666667
4n5ssud3fgsyq      2414495514 08-JAN-20 05.00.14.441 PM                4          14582.158
4n5ssud3fgsyq      2414495514 08-JAN-20 06.00.34.711 PM                6           9716.264
4n5ssud3fgsyq      2414495514 08-JAN-20 11.00.16.958 PM                8         10078.6125
4n5ssud3fgsyq      2414495514 09-JAN-20 12.00.36.802 AM               20         10876.6251
4n5ssud3fgsyq      2414495514 09-JAN-20 01.00.56.759 AM               10         10751.3919
4n5ssud3fgsyq      2414495514 09-JAN-20 02.00.16.085 AM               13          10297.929
4n5ssud3fgsyq      2414495514 09-JAN-20 03.00.36.804 AM                1          14824.517
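
For anyone without the script handy, here is a minimal sketch of the sort of AWR query sqlstat.sql runs — an assumption on my part about its contents, built on the standard DBA_HIST views:

select ss.sql_id,
       ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta/(ss.executions_delta*1000) "Elapsed Average ms"
from   dba_hist_sqlstat ss,
       dba_hist_snapshot sn
where  ss.sql_id = '4n5ssud3fgsyq'
and    ss.snap_id = sn.snap_id
and    ss.dbid = sn.dbid
and    ss.instance_number = sn.instance_number
and    ss.executions_delta > 0
order by sn.end_interval_time;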

I use the PLAN_HASH_VALUE as a name for the plan. I verified that the good plan, 65249283, was run on the other test server that was fine and that it was used in production. So, I had to figure out why the plan changed to 2414495514 on this one test server.

At some point in this process I paused and considered using a SQL Profile to just force the plan back to the good plan. I have written a number of posts about SQL Profiles, but this did not seem like a situation where one made sense. Why not spend a couple of hours trying to figure out why the plan changed? If I use a SQL Profile, then I will have to migrate the profile to production, and if the query changes the tiniest bit in a future release the SQL Profile will no longer be valid. I have one situation where we have a partitioned table with some almost empty partitions and queries that access the table with the partitioning column passed into the where clause as bind variables. I have been using SQL Profiles to handle that situation on this particular set of production and test databases, but it is a big pain. For the last release I had to redo 7-9 SQL Profiles. I’m probably going to have to bite the bullet and find another way around it rather than updating profiles every time a release modifies the affected queries. Probably I will have to fake the stats on the empty partitions or see if they can be removed. I doubt I will be able to get development to take the partitioning columns out of bind variables. I guess some set of hints might work as well. But anyway, in the situation this particular post is about, I decided to dig into it and try to find the underlying problem instead of using a SQL Profile as a band-aid.

Having decided to move forward with figuring out why the plan changed I next looked at the two plans. I used my getplans.sql script to extract the plans from the AWR based on SQL_ID and PLAN_HASH_VALUE. I just ran this when logged into the problem database using sqlplus:

@getplans 4n5ssud3fgsyq
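
If you do not have getplans.sql, something along these lines should pull much the same information straight from the AWR — a sketch using the standard dbms_xplan interface, with the SQL_ID and plan hash value from this case:

-- whole AWR history of the statement:
select * from table(dbms_xplan.display_awr('4n5ssud3fgsyq'));

-- or just one specific plan:
select * from table(dbms_xplan.display_awr('4n5ssud3fgsyq', 65249283));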

I noticed that the slow plan did full table scans on two tables that the fast plan did index lookups on.

SQL_ID 4n5ssud3fgsyq
--------------------
...
Plan hash value: 65249283 - GOOD PLAN

----------------------------------------------------------
| Id  | Operation                             | Name     |
----------------------------------------------------------
...
|  16 |     TABLE ACCESS BY INDEX ROWID       | TABLE1   |
|  17 |      INDEX RANGE SCAN                 | TABLE1_2 |
...
|  47 |           TABLE ACCESS BY INDEX ROWID | TABLE2   |
|  48 |            INDEX UNIQUE SCAN          | TABLE2_0 |


Plan hash value: 2414495514 - BAD PLAN

----------------------------------------------------------
| Id  | Operation                             | Name     |
----------------------------------------------------------
...
|  16 |     TABLE ACCESS FULL                 | TABLE1   |
...
|  48 |           TABLE ACCESS FULL           | TABLE2   |
----------------------------------------------------------

I also looked at the Segments part of the AWR report and TABLE2 was at the top of the logical and physical reads reports.

Edited picture of AWR report. Top Table is TABLE2 in execution plans.

So, at this point in my attempt to understand why the plan changed I have the clue that the bad plan has full table scans, and that one of the two tables with the full scans has a substantial percentage of the overall I/O on the database. So, I had two ideas in mind. First, I wanted to check that the two tables had the same indexes as on the working system. Maybe the indexes got dropped during some development work. If the indexes were there, I could just look at the optimizer statistics for the two tables and compare them to what they have on the working test system and production.

I use my optimizer statistics scripts to look at indexes and the information that the optimizer uses to choose the best plan. I know that there are other ways to get the same information, such as a graphical program like Toad or OEM, but I like having scripts that give me exactly the information that I know is important. It is a little clunky to use but not difficult. The top-level script is all.sql and I usually comment out the calls to scripts that produce an excessive amount of output. I probably should put all.sql out on GitHub looking like this:

@tablelist.sql
--@colpartstats.sql
@columnstats.sql
--@histograms.sql
@indexcolumns.sql
@indexstats.sql
--@indpartstats.sql
--@parthists.sql
@tablestats.sql
@tabpartstats.sql
@tabsubpartstats.sql

Seeing all the histogram information for every partition on a table with thousands of partitions is a lot of output and not really needed. If you have many subpartitions you might comment out tabsubpartstats.sql also. To use these scripts you just put your table owners and names in tablelist.sql and run all.sql. The script indexstats.sql listed out the indexes and statistics on them and indexcolumns.sql showed what columns the indexes were on. Unfortunately, in this case the indexes and their columns were the same for the two full scan tables on both test environments and on production. So the switch of the plan to full scans was not due to the lack of indexes. These tables were not partitioned so the tablestats.sql and columnstats.sql reports were the main ones I looked at to compare statistics. Maybe one of them had very out of date or even missing statistics. Unfortunately, the statistics on the two tables looked pretty similar on all three databases. So, my look at the indexes and statistics of the two tables with full scans on the bad plan did not explain the change in plan.
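
If you prefer to go straight to the dictionary, the same index check can be done with something like this — a sketch against the standard DBA views, using the owner and table names from this post as placeholders:

select i.table_name,
       i.index_name,
       i.uniqueness,
       i.num_rows,
       i.last_analyzed,
       c.column_position,
       c.column_name
from   dba_indexes i,
       dba_ind_columns c
where  i.table_owner = 'MYSCHEMA'
and    i.table_name in ('TABLE1','TABLE2')
and    c.index_owner = i.owner
and    c.index_name  = i.index_name
order by i.table_name, i.index_name, c.column_position;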

At some point in this process I decided to run the problem query to see if it gave me clues about the plan change. I am putting this post together remembering the steps I went through, but I may not have the exact chronological order. But I should be able to capture the main things I thought about in an order that makes sense. In order to test the query, I needed to come up with values for the bind variables. I used my bind2.sql script to extract some of the bind variable values on the problem test database. I edited bind2.sql to have the problem SQL_ID, 4n5ssud3fgsyq. I chose the most recent two sets of bind variable values to plug into the problem query. I extracted the query text using dumpsql.sql and used Toad to format it nicely. Then I plugged the formatted query into my test2.sql script. To use test2.sql you need to replace the select statement that is there on GitHub:

SELECT /*+gather_plan_statistics*/ * from dual where dummy <> ' ';

with your own query. But you have to make sure that the gather_plan_statistics hint stays in the select statement. There are different ways to handle bind variables and their values in a test script like this, but in this case I manually edited the test script, replacing the bind variables with literals that I got from bind2.sql. I chose two sets because one set had nulls for a couple of values, and I thought the nulls might be causing the bad plan. There might be some odd choice of bind variable values on this test system that was causing the bad plan, or so I thought. But I ran the test2.sql script as described, and it ran slow on the one test system for both sets of bind variable values and fast on the other.
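
For reference, here is a minimal sketch of the kind of query a bind-capture script like bind2.sql might run — an assumption about its contents on my part, based on the fact that AWR keeps captured bind values in DBA_HIST_SQLBIND:

select sn.end_interval_time,
       sb.name,
       sb.position,
       sb.datatype_string,
       sb.value_string,
       sb.last_captured
from   dba_hist_sqlbind sb,
       dba_hist_snapshot sn
where  sb.sql_id = '4n5ssud3fgsyq'
and    sb.snap_id = sn.snap_id
and    sb.dbid = sn.dbid
and    sb.instance_number = sn.instance_number
order by sn.end_interval_time desc, sb.position;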

My tests with the bind variable values showed that the problem was not related to those sets of bind variable values and that even with literals in place the problem query runs slow on the problem test database. The plan was not the same as the bad plan I had extracted for the original query, but it still had the two full scans. In the same way, my tests had index lookups on the two tables on the other test environment even with the literals replacing the bind variables. So, all this really proved so far was that the two sets of bind variable values that I chose did not affect the problem.

To go further I started looking at the output of test2.sql. The way I get the plan here shows estimated and actual rows. If I find discrepancies between estimated and actual row counts, that could point me to where the problem lies. This query in test2.sql dumps out the plan in a format that shows estimated and actual rows:

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

Here is the edited output for the bad plan for the two full scan tables:

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS FULL        | TABLE1  |      2 |   1593K|   3186K|00:00:01.36 |
...
|  48 |           TABLE ACCESS FULL  | TABLE2  |      2 |   1628K|   3344K|00:00:15.13 |
----------------------------------------------------------------------------------------

But I am not sure what this is telling me. For TABLE1 the optimizer expected 1593K rows and got 3186K: it expected 1.5 million rows but got 3 million. Similarly for TABLE2, about 1.6 million estimated versus 3.3 million actual. But how does this help me figure out what the problem is? I looked at the good plan in the same way but was equally puzzled:

-------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS BY INDEX ROWID       | TABLE1   |      0 |      1 |      0 |
|* 17 |      INDEX RANGE SCAN                 | TABLE1_2 |      0 |      1 |      0 |
...
|  47 |           TABLE ACCESS BY INDEX ROWID | TABLE2   |      0 |      1 |      0 |
|* 48 |            INDEX UNIQUE SCAN          | TABLE2_0 |      0 |      1 |      0 |
-------------------------------------------------------------------------------------

Here the estimated rows is 1 but actual is 0. But how do I use that to figure out what changed the plan on the one test environment? Then I got the key insight to solve this whole thing. It was not new to me, but I don’t do this every day and it was a good reminder and I wanted to blog about it. I considered blogging about just this one point and maybe I should edit this post down to just this idea. But then I wouldn’t put out the overall steps and tools I used to get to this point. The key insight was to use a full set of outline hints to force the bad plan and good plan to run in both environments and to compare the estimated and actual rows with each plan. I am not sure how this post will end but that is the most important point. Presumably the optimizer considered both plans and gave each of them a cost and chose the lowest cost plan to run. On our problem system the much slower plan has a lower cost than the faster one. The opposite is true on the working systems. What we want is to force the query to run the bad plan both places and see where the estimated rows differ. Then do the same for the good plan. A key idea that I learned about query tuning from people like Tom Kyte, Jonathan Lewis, and others is that the optimizer chooses a bad plan because of bad row estimates or bad estimates of the cost per row of certain operations. Most of the time it comes down to bad row estimates. So, which table is getting the bad row estimates and why? My way of finding it out in this case was to use outline hints to force the same plan to run on both databases and then use the test2.sql output to look at estimated and actual rows.

I was not sure how to extract the plans from the AWR and get the outline hints. I’m sure there is a way, but in my hurry I ended up just using my plan.sql script, which uses EXPLAIN PLAN, to get the plan and pull the outline hint from there. I checked the plans to make sure they had the same index versus full scan differences I had been seeing. At the end of the output of plan.sql is an outline hint section that starts like this:

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA

You just copy and paste everything from /*+ through */ as your hint. I put this into the test2.sql script that I had used before, with one set of the bind variable values as constants, and I merged the gather_plan_statistics hint with the outline hint so that the query started like this:

 SELECT /*+gather_plan_statistics 
      BEGIN_OUTLINE_DATA

This way the gather_plan_statistics hint collects the actual and estimated rows and the outline hint forces the plan to be used.
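
As an aside, I believe the outline can also be pulled straight from the AWR by asking dbms_xplan for a more verbose format, which would avoid the EXPLAIN PLAN step — a hedged sketch using the bad plan’s hash value:

select *
from   table(dbms_xplan.display_awr('4n5ssud3fgsyq', 2414495514, null, 'ADVANCED'));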

At first, I looked at the estimated and actual rows on the good and bad plans on both systems for the two tables that switched to full scans on the problem system.

Good DB Bad Plan Full Scan Tables

------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS FULL                 | TABLE1  |      1 |   1590K|   1593K|
...
|  48 |           TABLE ACCESS FULL           | TABLE2  |      1 |   1957K|   1957K|
------------------------------------------------------------------------------------

Bad DB Bad Plan Full Scan Tables

Plan hash value: 1397976319

------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS FULL                 | TABLE1  |      1 |   1593K|   1593K|
...
|  48 |           TABLE ACCESS FULL           | TABLE2  |      1 |   1628K|   1672K|
------------------------------------------------------------------------------------

Good DB Good Plan Full Scan Tables

-------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS BY INDEX ROWID       | TABLE1   |     16 |      1 |     18 |
|* 17 |      INDEX RANGE SCAN                 | TABLE1_2 |     16 |      1 |     18 |
...
|  47 |           TABLE ACCESS BY INDEX ROWID | TABLE2   |     12 |      1 |     12 |
|* 48 |            INDEX UNIQUE SCAN          | TABLE2_0 |     12 |      1 |     12 |
-------------------------------------------------------------------------------------

Bad DB Good Plan Full Scan Tables

-------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
...
|  16 |     TABLE ACCESS BY INDEX ROWID       | TABLE1   |     16 |      1 |     18 |
|* 17 |      INDEX RANGE SCAN                 | TABLE1_2 |     16 |      2 |     18 |
...
|  47 |           TABLE ACCESS BY INDEX ROWID | TABLE2   |     12 |      1 |     11 |
|* 48 |            INDEX UNIQUE SCAN          | TABLE2_0 |     12 |      1 |     11 |
-------------------------------------------------------------------------------------

No obvious pattern showed up in these to me. It seems like the optimizer had the same estimates for these two tables on both databases for both plans. But then I got the second key idea: look at the other tables. It is typical when a plan switches to a full scan for there to be something wrong with that table’s stats, but nothing pointed to that in this situation. So, I got the idea of stepping back and looking at the other tables, and that led me to the actual issue. Two tables that were not the ones with the full scans had different degrees of discrepancy between actual and estimated rows on the bad and good databases.

Good DB Bad Plan Other Tables

-------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
...
|  12 |        TABLE ACCESS BY INDEX ROWID    | TABLE3   |      1 |      3 |      6 |
|* 13 |         INDEX RANGE SCAN              | TABLE3_1 |      1 |      3 |      6 |
|  14 |      TABLE ACCESS BY INDEX ROWID      | TABLE4   |      6 |     55 |     11 |
|* 15 |       INDEX RANGE SCAN                | TABLE4_1 |      6 |     55 |     11 |
...
-------------------------------------------------------------------------------------

Bad DB Bad Plan Other Tables

-------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
...
|  12 |        TABLE ACCESS BY INDEX ROWID    | TABLE3   |      1 |     22 |      6 |
|* 13 |         INDEX RANGE SCAN              | TABLE3_1 |      1 |     22 |      6 |
|  14 |      TABLE ACCESS BY INDEX ROWID      | TABLE4   |      6 |    153 |     11 |
|* 15 |       INDEX RANGE SCAN                | TABLE4_1 |      6 |    153 |     11 |
...
-------------------------------------------------------------------------------------

Good DB Good Plan Other Tables

----------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Starts | E-Rows | A-Rows |
----------------------------------------------------------------------------------
...
|  12 |        TABLE ACCESS BY INDEX ROWID | TABLE3   |      1 |      3 |      6 |
|* 13 |         INDEX RANGE SCAN           | TABLE3_1 |      1 |      3 |      6 |
|  14 |      TABLE ACCESS BY INDEX ROWID   | TABLE4   |      6 |     55 |     11 |
|* 15 |       INDEX RANGE SCAN             | TABLE4_1 |      6 |     55 |     11 |

----------------------------------------------------------------------------------

Bad DB Good Plan Other Tables

----------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Starts | E-Rows | A-Rows |
----------------------------------------------------------------------------------
...
|  12 |        TABLE ACCESS BY INDEX ROWID | TABLE3   |      1 |     22 |      6 |
|* 13 |         INDEX RANGE SCAN           | TABLE3_1 |      1 |     22 |      6 |
|  14 |      TABLE ACCESS BY INDEX ROWID   | TABLE4   |      6 |    153 |     11 |
|* 15 |       INDEX RANGE SCAN             | TABLE4_1 |      6 |    153 |     11 |
...
----------------------------------------------------------------------------------

Looking at these two tables, the bad database had significantly higher row estimates than the good database, while the actual rows were the same on both. Errors in estimated rows multiply through the plan as joins are considered. So, at this point I thought the estimates on these tables (TABLE3 and TABLE4) might have led to the switch to full scans on TABLE1 and TABLE2. I went back to my optimizer statistics scripts, took TABLE1 and 2 out of tablelist.sql, put TABLE3 and 4 in, and looked for differences. I found the difference in the columnstats.sql output. The NUM_BUCKETS value for the first column of TABLE3 and TABLE4 was 1 on the bad database and 254 or 255 on the good one. So, column statistics had been gathered differently on these two tables on the bad database at some point. Maybe in the process of testing or developing on this system statistics were gathered in a non-standard way. The database in question uses Oracle’s default statistics job. (This is 11.2.0.3 on HP-UX Itanium, by the way.) So, I thought I would try gathering statistics on the two tables with dbms_stats.gather_table_stats with default options because that would be similar to what the default stats job would do. I did something like this:

execute dbms_stats.gather_table_stats('MYSCHEMA','TABLE3');

execute dbms_stats.gather_table_stats('MYSCHEMA','TABLE4');

After gathering stats on the two tables I verified that the first column of each had a 254- or 255-bucket histogram. Then I just tested the problem query on both, and they ran the same fast plan. Then the development team tested through the application and verified that it was back to normal. I flushed the shared pool before turning it over to them in case the bad plan was cached.
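
For reference, the sort of dictionary check I have in mind — showing the bucket counts before and after the fix — looks something like this, a sketch against DBA_TAB_COL_STATISTICS using the same placeholder owner and table names as above:

select table_name,
       column_name,
       num_distinct,
       num_buckets,
       histogram,
       last_analyzed
from   dba_tab_col_statistics
where  owner = 'MYSCHEMA'
and    table_name in ('TABLE3','TABLE4')
order by table_name, column_name;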

This problem was a little tricky because the stats were wrong on two tables that were not the ones that switched to full scans. A lot of times the situation is simpler, such as a table missing statistics altogether. But I thought it would be helpful to work through a description of the process that I took to get to that point and the tools that I used. The key point is the one I put in bold above: look at the estimated and actual rows for the same plan on both databases and look for differences. Once I did that it was just a matter of finding the tables with the different estimates on the two databases. That led me to look at their statistics and to regather them in a better way. It took a little time to figure this out, but it was nice to find the problem and make the smallest change necessary to resolve the issue instead of just randomly trying things. Sometimes people will just gather stats on all the tables in the query. That probably would have worked here if they had gathered them in the right way. Or you could just use a SQL Profile and forget trying to figure out why the new plan popped up. But it was worthwhile to understand why the plan changed in this case. For one thing, I was concerned whether the same plan change would happen in production, so I wanted to know why it changed on the test environment. Knowing that the two tables had statistics gathered on them in a way that we would not use in production, I have more confidence that prod is fine. There really is value in digging into why a query’s plan has changed, but it does take some work. Often the bad plan is the result of a bad row estimate in some part of the plan, and it makes sense to track down which table has the bad row estimate and dig into why the estimate was off. In many cases bad row estimates relate to how and when statistics were gathered on the problem table, as was the case with our two tables in this situation.

Categories: DBA Blogs

Use srvctl stop home When Stopping Many Database Instances

Michael Dinh - Tue, 2020-01-28 12:13

=============================================================================
### Stops all Oracle clusterware resources that run from the Oracle home.
=============================================================================

Usage: srvctl stop home -oraclehome <oracle_home> -statefile <state_file> -node <node_name> [-stopoption <stop_options>] [-force]
    -oraclehome <oracle_home>      Oracle home path
    -statefile <state_file>        Specify a file path for the 'srvctl stop home' command to store the state of the resources
    -node <node_name>              Node name
    -stopoption <stop_options>     Stop options for the database. Examples of shutdown options are NORMAL, TRANSACTIONAL, IMMEDIATE, or ABORT.
    -force                         Force stop
    -help                          Print usage
[oracle@ol7-19-rac1 ~]$

=============================================================================
### Check ALL DB status running from same ORACLE_HOME for NODE
=============================================================================

[oracle@ol7-19-rac1 ~]$ srvctl status database -d cdbrac -v
Instance cdbrac1 is running on node ol7-19-rac1. Instance status: Open.
Instance cdbrac2 is running on node ol7-19-rac2. Instance status: Open.

[oracle@ol7-19-rac1 ~]$ srvctl status home -o $ORACLE_HOME -s $ORACLE_HOME/statushome.txt -node ol7-19-rac1
Database cdbrac is running on node ol7-19-rac1

[oracle@ol7-19-rac1 ~]$ cat $ORACLE_HOME/statushome.txt
db-cdbrac
[oracle@ol7-19-rac1 ~]$

=============================================================================
### STOP ALL DB running from same ORACLE_HOME for NODE
=============================================================================

[oracle@ol7-19-rac1 ~]$ srvctl stop home -o $ORACLE_HOME -s $ORACLE_HOME/stophome.txt -node ol7-19-rac1

[oracle@ol7-19-rac1 ~]$ cat $ORACLE_HOME/stophome.txt
db-cdbrac

[oracle@ol7-19-rac1 ~]$ srvctl status database -d cdbrac -v
Instance cdbrac1 is not running on node ol7-19-rac1
Instance cdbrac2 is running on node ol7-19-rac2. Instance status: Open.
[oracle@ol7-19-rac1 ~]$

=============================================================================
### START ALL DB running from same ORACLE_HOME for NODE
=============================================================================

[oracle@ol7-19-rac1 ~]$ srvctl start home -o $ORACLE_HOME -s $ORACLE_HOME/stophome.txt -node ol7-19-rac1

[oracle@ol7-19-rac1 ~]$ srvctl status database -d cdbrac -v
Instance cdbrac1 is running on node ol7-19-rac1. Instance status: Open.
Instance cdbrac2 is running on node ol7-19-rac2. Instance status: Open.
[oracle@ol7-19-rac1 ~]$

Top Ten Shoe Brands in The World

VitalSoftTech - Tue, 2020-01-28 10:12

Top shoe brands in the world provide a vibrant and ever-growing industry. Footwear is no longer about comfort and mobility. It has now become more central to a person’s style statement. It is more a representative of one’s class than something you wear to cover your feet. The international footwear industry is growing exponentially every […]

The post Top Ten Shoe Brands in The World appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Helps Utilities Identify and Fix Asset Issues Faster

Oracle Press Releases - Tue, 2020-01-28 07:30
Press Release
Oracle Helps Utilities Identify and Fix Asset Issues Faster Advanced analytics and out-of-the-box visualizations paint a clear picture of asset health so utilities can avoid disasters and improve uptime and longevity with less cost

DISTRIBUTECH, San Antonio, TX.—Jan 28, 2020

Every year, millions of avoidable asset failures go unnoticed until the damage is done. While the proliferation of smart sensors and devices are enabling assets to “talk” about their health, utilities have struggled to decode this barrage of new data to hear and understand them. With advanced analytics and pre-built data visualizations in Oracle Work and Asset Cloud Service (WACS), utilities can quickly reveal the story their assets are telling to drive faster, more accurate decisions at a fraction of the cost.

“With a new natural gas-fired plant coming online in 2023, it was critical that we had a modern asset management system in place to track our diverse assets,” said Steve Saunders, director of information technology, Seminole Electric. “Oracle WACS will enable Seminole to quickly visualize all of our assets so that we can focus on delivering the most reliable service to our members.”

Breaking down data silos, WACS brings together best-in-class capabilities for asset and operational device management, GIS and mobile field service in one platform. Unlike other solutions that only offer the star schema for reporting, WACS today offers 118 pre-defined visualizations and 57 base calculations out-of-the-box, with additional visualization options planned. With data displayed on easy to consume dashboards, utilities gain the ability to bypass the need for costly data manipulation and can get key intelligence into the hands of business stakeholders quickly. And with all the data coming from the same reporting structure, utilities can compare metrics and benchmark their performance against set goals and KPIs.

“Asset failures are consistently linked to costly and devastating issues for utilities,” said Dan Byrnes, SVP of product development, Oracle Utilities. “It’s critical that utilities do less guessing and more listening when it comes to the health of their assets. Being able to not only manage the massive amounts of data coming in but also visualize it in a way that elicits action is the holy grail in keeping assets performing effectively and safely. WACS enables utility stakeholders to make sense of their data right out-of-the-box—no business analyst degree needed.”

The pre-built visualizations allow users to slice data such as failure count or cost by elements like the asset type or manufacturer. Users can also choose from an extensive library of pre-built calculations such as Mean Time Between Failure or Asset Availability to quickly build their own visualizations sliced any way they want. With data also flowing freely into the system from smart, connected assets and devices, utilities can identify and easily visualize infrastructure weaknesses throughout the lifecycle of the asset. This includes asset acquisition, set-up, and performance—such as predictive maintenance, repair, and inspection activities—through to the purchasing and inventory of parts.

WACS empowers electric, gas, and water utilities of all sizes to cost-effectively measure and continually improve performance by providing centralized support of fixed and linear assets and a growing portfolio of intelligent devices, to help:

  • Prevent failure: by providing ways to identify risks of asset failure faster and easier; eliminating the day-to-day tasks of collecting data and maintaining separate systems of record, and; having the full breadth of asset performance data at the ready to make business and investment decisions.
  • Enhance intelligence: 118 prebuilt visualizations, nine data subject areas, and 57 base calculations, with more planned to be added regularly, make it easy for utilities to visualize and “hear” what their assets are saying. Utilities can also integrate data coming from their other investments, such as SCADA systems, to give a truly holistic view of their operations and asset health.
  • Take action: effectively without losing time and resources to investigate faults; create work orders based on the criticality of the asset and schedule the field visits and crews accordingly, and; manage and monitor work progress via dashboards.
  • Optimize performance: of an asset maintenance program by benchmarking performance. This enables utilities to accurately plan their investments based on asset conditions and to implement ISO 55000 plans quickly to generate results faster.
 

In a recent study, sixty-four percent of utilities noted that cloud computing is critical to their company’s future success, in large part due to the need to better manage an ever-growing pool of data coming from smart meters, IoT sensors, etc. Utilities realize that it is becoming too great a task and risk to manage this influx without a significant investment in tools that can capture and analyze data quickly. To learn more about how Oracle WACS is helping utilities manage this deluge of data, visit here.

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Waiting for direct write temp – but what file?

Mathias Magnusson - Tue, 2020-01-28 07:00

This is essentially a note to myself for the next time I forget. It seems that about once every other year I have to figure this out again. It is not too easy to google for, so it usually takes a bit too long.

So you have identified the event “direct path read temp” or “Direct path write temp” as your main culprit in a SQL. Great, now let’s just see what temp file it is.

That is how I’ve started a few times, and I ended up not finding it. Let’s take a look at why it can be confusing at first.

Your select runs long and you take a look at v$session_wait for your SID. If you see “Direct path read temp” as the event being the one your session waits on most of the time, you may want to know what file it is to understand more about why.

It is easy enough, the documentation states:

  • P1 = File number
  • P2 = First dba
  • P3 = block cnt

P1-P3 are columns found in views such as v$session_wait, giving the parameters that describe the details of different wait events. The names are generic because different events have different parameters.

If you try to look up the file number in v$datafile or v$tempfile you may find that there is no such file. In my case I was looking for file number 1038, with the database having just a handful of datafiles and even fewer temp files. Even looking for all files in the CDB, I only found files ranging from 1 to 650.

It turns out that both temp files and data files are numbered from 1, meaning the database can have a temp file 5 and a datafile 5, but wait events need to be able to distinguish between them while having just one parameter.

After a lot of searching for things like “file number over 1000”, “missing file number oracle” and so forth, I usually strike gold with some odd search term and find my way back to this post, where the first comment is by Jonathan Lewis and, as usual, clearly explains the issue in a few words.

For a temp file you have to subtract the initialization parameter db_files from the file number found in the wait event. After that you have a file number you can look up in v$tempfile.
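
In other words, something along these lines — a sketch only; the 1038 is the P1 value from my case, and v$tempfile and v$parameter are the standard views:

-- P1 from the wait event minus db_files gives the real temp file number
select t.file#, t.name
from   v$tempfile t,
       v$parameter p
where  p.name = 'db_files'
and    t.file# = 1038 - to_number(p.value);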

This post is just here to make the answer easier to find than the existing ones seem to be when you do not know the reason for this. Maybe it will mostly be used by me the next time I forget why temp files do not seem to exist with the numbers reported by the events.

Utilities Forge Beyond SCADA with Oracle Advanced Distribution Management

Oracle Press Releases - Tue, 2020-01-28 07:00
Press Release
Utilities Forge Beyond SCADA with Oracle Advanced Distribution Management New SCADA applications give utilities a single platform to monitor, control and optimize grid assets and edge devices

DISTRIBUTECH, San Antonio, TX.—Jan 28, 2020

The proliferation of distributed energy resources (DERs) and new field data sources has made managing the energy grid significantly more complex. Helping utilities turn this challenge into an opportunity, Oracle Utilities Network Management System (NMS) now features advanced supervisory control and data acquisition (SCADA) capabilities. With the offering, grid operators gain real-time intelligence and control over all types of field devices and DERs without having to rely on distribution SCADA additions or upgrades.

Built on industry standards and integration best practices, the new SCADA applications are the result of an exclusive collaboration between Oracle Utilities and LiveData Utilities, a silver level member of Oracle PartnerNetwork (OPN). Based on LiveData’s Operational Technology Message Bus (OTMB) server platform, the jointly developed technology is delivered and supported within Oracle Utilities NMS, giving utilities an integrated, real-time grid management platform with a single user interface and point of contact. The offering can easily integrate data from existing SCADA platforms, field and IoT devices, DERs and grid-edge sensor points across a wide array of communications protocols.

“Whether a utility is dealing with high DER penetration or simply needing to expand its network control to field devices and sensors, Oracle NMS SCADA applications provide the ability to scale to give grid operators more control, greater flexibility and faster response times,” said Brad Williams, vice president of industry strategy at Oracle Utilities. “The offering addresses traditional real-time SCADA requirements and much more, from power line sensors to rooftop solar inverters, all with real-time speed. When every second counts in extreme events, this is a game-changer for utilities.”

To see the new solution in action, visit DistribuTECH booth #2809.

Data intelligence across any distribution grid asset

The SCADA capabilities in NMS represents a technological leap beyond legacy solutions. As a fully embedded, scalable component of the Oracle NMS platform, the OTMB communications adapter enables real-time communications to substations, distribution network assets and DERs, as well as grid devices and energy resources well beyond the SCADA network. As such, grid operators get instant visibility across their network, even without an existing distribution SCADA or distributed energy resource management system (DERMS) in place.

“This combined offering provides utilities the ability to monitor and control equipment and devices while reducing integration complexity for real-time operational technology and systems,” said Brad Harkavy, president, LiveData Utilities. “With this solution, utilities have a comprehensive toolkit to integrate best of breed OT solutions and to bridge the gap between the OT and IT worlds, creating a true convergence platform.”

The SCADA applications can also be applied to Oracle Utilities Distributed Energy Resource Management System (DERMS) solution. Used in tandem, utilities can monitor DERs in real-time and proactively optimize their distribution networks around the behavior of both utility and customer-owned resources such as rooftop PV, behind-the-meter energy storage and electric vehicles—all of which may or may not be connected to a utility’s legacy SCADA network.

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
Allison Salke
LiveData Utilities
+1.617.945.7898
press@livedatautilities.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About LiveData Utilities

LiveData Utilities (LDU) is the developer of electric Operational Technology Message Bus (OTMB) architecture. The company’s real-time OT solutions and embedded SCADA technology reduce system integration time, resources and the long-term cost of integration maintenance. In addition, LDU products and services future-proof a utility’s architecture by creating a standard OT Messages Bus interface that enables the integration of tomorrow’s systems and devices. 13 of the top 20 Investor owned Utilities (IOUs) use LDU solutions. Additional information is available at www.LiveDataUtilities.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Allison Salke

  • +1.617.945.7898

Telecom Fiji Revolutionizes Digital Experience with Oracle

Oracle Press Releases - Tue, 2020-01-28 07:00
Press Release
Telecom Fiji Revolutionizes Digital Experience with Oracle Telecom Fiji Limited teams up with Oracle Communications to modernize customer engagement and speed delivery of new products and services to customers

Redwood Shores, Calif. and Fiji—Jan 28, 2020

As the telecommunications industry rapidly advances toward a 5G world, digital disruptors are challenging communication service providers’ (CSPs) traditional business models and customer relationships. To address this challenge, Telecom Fiji Limited (Telecom) is looking to Oracle to better serve their customers, orchestrate memorable events, and develop new revenue streams.

Owned 100 percent by Fijians, Telecom is the leading South Pacific island provider of Internet and communications services to consumers, businesses and the Fiji government. The company is committed to its mission of delivering innovative services and improved customer experiences so the people of Fiji can benefit from the best of the world’s technology in their remote island location. Acting on these goals, the century-old service provider is embarking on an end-to-end digital transformation with Oracle from lead acquisition to revenue realization.

Telecom’s transformation journey involves full digital customer lifecycle processes from registering for services to zero-touch provisioning to digitally coordinated service delivery, all with real-time digital notifications. Customers benefit from convenient bundling options like combining voice and data, and any service issues that arise can be quickly resolved with self-care portals, chatbots, and live digital interaction with agents. Better visibility and orchestration across their field workers will also help Telecom reduce costs and improve customer service with reduced truck rolls and increased fleet and technician availability.

“We chose Oracle Communications because of Oracle’s international track record of reliable and innovative technology solutions,” said Charles Goundar, chief executive officer, Telecom. “Oracle’s entire ecosystem has been built by adopting best practices over time, which gives us standardized processes that work seamlessly with each other. With a platform based on best international practices, we will provide our customers with the very best in service delivery. That is what Fijians deserve.”

Telecom will deploy Oracle Communications Billing and Revenue Management, Oracle Communications Order and Service Management, Oracle Communications Network Charging and Control, Oracle Field Service Cloud, Oracle Live Experience Cloud, and Oracle Marketing Cloud as part of the Oracle Digital Experience for Communications suite, to enable a transformative digital customer experience coupled with communications-grade modern monetization and service fulfillment. Digital Experience for Communications has been proven to scale in real time across even the most challenging industries and business models. The solution is delivered on Oracle Private Cloud Appliance and Oracle Cloud Infrastructure, and will be implemented and managed by Oracle Communications Consulting, the trusted partner of hundreds of service providers around the world.

“As service providers transform their end-to-end processes, they’re seeking more connected customer experiences and ways to engage via new digital channels,” said Jason Rutherford, senior vice president and general manager of Oracle Communications. “Innovative service providers like Telecom are reimagining communications by adopting real-time, scalable solutions to bring new services to market and simplify and automate their business.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Amy Dalkoff
Hill+Knowlton Strategies
+1.312.255.3078
amy.dalkoff@hkstrategies.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.

Join Oracle Communications at Mobile World Congress booth in #2H40 to demo the Digital Experience for Communications suite.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Amy Dalkoff

  • +1.312.255.3078

SQL Server Tips: Path of the default trace file is null

Yann Neuhaus - Tue, 2020-01-28 03:01

In addition to my previous blog post on this subject, “SQL Server Tips: Default trace enabled but no file is active…”, here is a new case where the path of the default trace file was empty.

The first step was to verify if the default trace is enabled with the command:

SELECT * FROM sys.configurations WHERE name = 'default trace enabled'

It is enabled, so I then check the currently running traces with the view sys.traces:

SELECT * FROM sys.traces


As you can see, this time I have a trace but with a NULL path for the trace file…

To correct this issue, the only way is to stop and reactivate the trace in the configuration:

EXEC sp_configure 'show advanced options',1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'default trace enabled',0;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'default trace enabled',1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'show advanced options',0;
GO
RECONFIGURE WITH OVERRIDE;
GO

Et voila, I have a trace file now…
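
To double-check, re-running the sys.traces query should now show a non-null path for the default trace — a quick verification sketch, assuming the default trace is the row with is_default = 1:

SELECT id, status, path, is_default
FROM sys.traces
WHERE is_default = 1;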

The article SQL Server Tips: Path of the default trace file is null first appeared on Blog dbi services.

How to create signed url on google cloud storage?

Surachart Opun - Mon, 2020-01-27 08:54
A signed URL is a URL that provides limited permission and time to make a request. It is useful for someone who does not have a Google Account. I caught up on the Google Cloud documentation and worked out how to do it. Assume I would like to share a file on Google Cloud Storage with a friend who does not have a Google Account. Example: gs://mysurachartbucket/test.txt
[student@centos~]$ gsutil mb gs://mysurachartbucket
Creating gs://mysurachartbucket/...
[student@centos~]$ cat test.txt
TEST
[student@centos~]$  gsutil cp test.txt gs://mysurachartbucket/
Copying file://test.txt [Content-Type=text/plain]...
- [1 files][    5.0 B/    5.0 B]
Operation completed over 1 objects/5.0 B.
[student@centos~]$ gsutil ls gs://mysurachartbucket/test.txt
gs://mysurachartbucket/test.txt

First of all, I need a key file from a service account. So, I create the service account and the key file:

[student@centos~]$ gcloud iam service-accounts list
NAME                                    EMAIL                                                DISABLED
Compute Engine default service account  ********-compute@developer.gserviceaccount.com  False
[student@centos~]$ gcloud iam service-accounts create surachart
Created service account [surachart].
[student@centos~]$ gcloud iam service-accounts list
NAME                                    EMAIL                                                DISABLED
Compute Engine default service account  ********-compute@developer.gserviceaccount.com  False
                                        surachart@myproject.iam.gserviceaccount.com         False
[student@centos~]$ gcloud iam service-accounts keys create ~/surachart.json   --iam-account surachart@myproject.iam.gserviceaccount.com
created key [4d6b1bd*********08f966dd31] of type [json] as [/home/student/surachart.json] for [surachart@myproject.iam.gserviceaccount.com]

Then, the service account should be able to read the file in the bucket:
[student@centos~]$ gsutil acl ch  -u surachart@myproject.iam.gserviceaccount.com:R gs://mysurachartbucket/test.txt
Updated ACL on gs://mysurachartbucket/test.txt

Finally, create the signed URL by using the gsutil signurl command:
[student@centos~]$ gsutil signurl -d 20m surachart.json gs://mysurachartbucket/test.txt
CommandException: The signurl command requires the pyopenssl library (try pip install pyopenssl or easy_install pyopenssl)
#### As the error says, pyopenssl needs to be installed.
[student@centos~]$ sudo pip install pyopenssl
[student@centos~]$ gsutil signurl -d 20m surachart.json gs://mysurachartbucket/test.txt
URL     HTTP Method Expiration Signed URL
gs://mysurachartbucket/test.txt GET 2020-01-27 21:34:08 https://storage.googleapis.com/mysurachartbucket/test.txt?x-goog-signature=99dbc749d2891eb1d9d22a5ccd03a81d4f0366380ff3bb0c34faf246d20677290778c6033a81fce43363709b244a882308b1c8590eaed409e1c8a0d4aca76cfec8537b1231e6b1f57************c6abaaacd128ac85f798edfb41bfa48d688897882be28cd1838520144ff197a5e84f499da914c2f8b309c32343011974a8f888163cba2a33c491fd858906bce2ad3cb5c5249c1e79127d200dccea553deafe7e1eb43a8b1527cb20e935c66129b0cad1683f01b6474a4c2940b92dd6daaa65da48fba7cbe94ed5881d46f268908735b2ad12ef2b1f7b0e79a2dd4a527cc611ea35718db96db&x-goog-algorithm=GOOG4-RSA-SHA256&x-goog-credential=surachart%40myproject.iam.gserviceaccount.com%2F20200127%2Fus%2Fstorage%2Fgoog4_request&x-goog-date=20200127T140408Z&x-goog-expires=1800&x-goog-signedheaders=host

This signed URL will expire in 20 minutes. Then I can send it to my friend.

Reference: 
https://cloud.google.com/storage/docs/access-control/signed-urls
https://cloud.google.com/storage/docs/gsutil/commands/signurl
Categories: DBA Blogs

ANSI flashback

Jonathan Lewis - Mon, 2020-01-27 03:45

I am seeing “traditional” Oracle SQL syntax being replaced by “ANSI”-style far more frequently than I used to – so I thought I’d just flag up another reminder that you shouldn’t be too surprised if you see odd little glitches showing up in ANSI style that don’t show up when you translate to traditional; so if your SQL throws an unexpected error (and if it’s only a minor effort to modify the code for testing purposes) it might be a good idea to see if the problem goes away when you switch styles. Today’s little glitch is one that showed up on the Oracle-l listserver 7 years ago running 11.2.0.3 but the anomaly still exists in 19c.

As so often happens it’s a problem that appears in one of the less commonly used Oracle features – in this case flashback queries. We’ll start by creating a table, then pausing for thought (Note: this code is little more than a cosmetic rewrite of the original posting on Oracle-l):


rem
rem     Script:         ansi_flashback_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 
as
select  * 
from    all_objects
where   rownum <= 10000 -- > comment to avoid wordpress format issue
;

create table t2
as
select  *
from    t1
where   rownum <= 10 -- > comment to avoid wordpress format issue
;
 
prompt  =======================
prompt  Sleeping for 10 seconds
prompt  =======================

execute dbms_lock.sleep(10)

column current_scn new_value m_scn format a15

select  to_char(current_scn,'99999999999999') current_scn 
from    v$database
/

 
select 
        v1.object_name
from 
        t1 as of scn &m_scn v1
join 
        t2 as of scn &m_scn v2
on 
        v2.object_id = v1.object_id
/

I’ve created a couple of tables then introduced a 10 second sleep before checking the database SCN. The sleep is there because I want to be able to query the tables “as of SCN” and if I don’t pause for a little while (typically about 5 seconds) the code will probably raise Oracle error ORA-01466: unable to read data – table definition has changed.

The query I want to use references both tables as of the same SCN, using “ANSI” syntax to do the join. The query behaves perfectly reasonably when run from SQL*Plus; the problem starts to appear when I try to embed the query as a cursor in a PL/SQL procedure. First I’ll copy the SQL exactly as it is (with the substitution variable) into a procedure declaration. The variable will be replaced in both cases by an actual value before the procedure is created, as the subsequent check of user_source will show:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is 
                select  v1.object_name
                from 
                        t1 as of scn &m_scn v1
                join 
                        t2 as of scn &m_scn v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
select  text 
from    user_source
where   type = 'PROCEDURE'
and     name = 'P1'
order by 
        line
/

execute p1 ('0')

The procedure compiles successfully and the query against user_source shows it stored as follows (note, particularly, an actual value has been stored for the SCN):


procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from
                        t1 as of scn  12670394063090 v1
                join
                        t2 as of scn  12670394063090 v2
                on
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;

Next we recreate the procedure but replace the substitution variable with the name of the incoming formal parameter:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        t1 as of scn myscn v1
                join 
                        t2 as of scn myscn v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
show errors

You’ll notice that instead of doing a test execution of the procedure I’ve called “show errors”. This is because the procedure won’t compile and reports “Warning: Procedure created with compilation errors” with the following output from the call to show errors:


Errors for PROCEDURE P1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/9      PLS-00341: declaration of cursor 'C1' is incomplete or malformed
4/3      PL/SQL: ORA-00984: column not allowed here
4/3      PL/SQL: SQL Statement ignored
13/8     PL/SQL: Item ignored
17/2     PL/SQL: SQL Statement ignored
17/16    PLS-00320: the declaration of the type of this expression is
         incomplete or malformed

18/2     PL/SQL: Statement ignored
18/23    PLS-00320: the declaration of the type of this expression is
         incomplete or malformed

So we check to see if the same warning and list of errors appear if I switch to “traditional” Oracle syntax:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        t1 as of scn myscn v1,
                        t2 as of scn myscn v2
                where 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
execute p1 (&m_scn)

The answer is no. This version of the query is accepted by the PL/SQL compiler, and the call to execute it with the SCN supplied in the substitution variable produces the expected results.

Is there anything we can do to stick with ANSI style syntax? Almost invariably the answer will be yes. Here’s a little workaround in this case:

create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        (select * from t1 as of scn myscn) v1
                join 
                        (select * from t2 as of scn myscn) v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
execute p1 (&m_scn)

We simply embed each “as of scn” clause inside an inline view and then join the views. If you enable the CBO (10053) trace before executing this version of the procedure, you’ll find that the final “unparsed” SQL produced by the optimizer has, of course, been translated back into the traditional syntax.

Warning: it’s fairly likely that this workaround will do what you want, but it’s possible that in a few cases it may result in a different execution plan from the one you were expecting (or would get from traditional syntax).

 

Video : Decoupling to Improve Performance

Tim Hall - Mon, 2020-01-27 02:17

In today’s video we demonstrate how to cheat your way to looking like you’ve improved performance using decoupling.

This was based on the following article.

This came up in conversation a few days ago, so I thought it was worth resurrecting this demo. It doesn’t really matter what tech stack you use, the idea is still the same.

The star of today’s video is Logan Rosenstein, formerly of OTN, and now working for Zignal Labs, and author of Building Towers by Rolling Dice.

Cheers

Tim…


Blowing Out Barriers

Floyd Teter - Sat, 2020-01-25 12:31
I was chatting with a new friend last weekend.  Turns out we're both in the enterprise apps business: I work for Oracle on HCM Cloud apps while he works at a small start-up building and selling a warehouse management system.

While we were chatting about the biz, we got into SaaS and Cloud.  My friend made a statement I really want to share:  "Cloud blew out all our barriers to entering this marketplace; we wouldn't be in business without Cloud."  When I pressed him on this, he made the following points:

  • Our entire development platform is cloud-based; no on-prem servers whatsoever
  • We track our bugs and manage our development cycles with JIRA
  • All our accounting, HR, and sales apps are SaaS with providers focused on SMB customers
  • We don't even have internal email - we have a Slack channel
  • Our product is SaaS-based and runs on two IaaS partner platforms.  Our SMB customers would not be able to afford the infrastructure required to run on prem.
  • Most of our user documentation is visual and chart-based.  Our off-shore team uses LucidChart to build that stuff. 
After hearing all this, I expressed that my mental image of his company was a small group of people in a local office sitting around a conference table with laptops and cell phones.  He corrected me:  5 people in North America, developers in Vietnam and documentation team in India - no brick and mortar offices anywhere.

I walked away from the conversation thinking: "Isn't this cool?  These guys have leveraged the cloud to build a product startup about to break into 9-digit revenue (in U.S. dollars) with zero infrastructure.  These guys get it."

This has always been the promise of the Cloud: do business without large capital investments.  It was just neat to experience yet another example of this promise fulfilled IRL.

Tell me again why you're running on-prem enterprise apps?

Oracle VM Server: OVMAPI_2005E and OVMEVT_003503D_000

Dietrich Schroff - Sat, 2020-01-25 07:16
In my OVM environment I was trying to delete a server from a pool and this did not work. The error I got was:
OVMAPI_2005E "[ServerDbImpl] xx:yy:zz:... (oraclevm)" contains a component "xx:yy:zz:..." in error. Error event: server.cluster.failure., Summary: Server Cluster Failure, Description: Failed to leave cluster OVMEVT_003503D_000 Server reboot is required.. I rebooted the server: still the same error.
I rebooted the OVM Manager: still the same error.

Then I realized that there was a red cross at my server:

The solution was: Select events from the drop down for this server:

There I had to "Acknowledge" the events and after that I was able to remove the server without any further error...

Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002)

Online Apps DBA - Sat, 2020-01-25 06:59

Fixed: Restore Of Standby Control File On Standby DB In ExaCS Failing With Error (ORA-600, RMAN-03002) Are you looking for a fix to troubleshoot the “PSDRPC returns significant error 600 & RMAN-03002: failure of restore command” error while performing a restore of Standby Control File using RMAN on the Standby Database on Exadata Cloud Service […]

The post Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

WITH Subquery

Jonathan Lewis - Thu, 2020-01-23 14:37

Here’s another anomaly that appears when you mix and match Oracle features. In this case it’s “With” subqueries (common table expressions / CTEs) and Active Dataguard (ADG) Standby databases. The problem appeared on the Oracle-l listserver and luckily for the OP another member of the list had seen it before and could point to a relevant MOS document id which explained the issue and supplied a workaround.

The OP had their standby database opened read-only for reporting and found the following oddity in the extended SQL trace file for one of their reports:


WAIT #140196872952648: nam='db file scattered read' ela= 1588 file#=4097 block#=579715946 blocks=128 obj#=-39778567 tim=17263910670242
WAIT #140196872952648: nam='db file scattered read' ela= 1495 file#=4097 block#=579715947 blocks=128 obj#=-39778567 tim=17263910672065
WAIT #140196872952648: nam='db file scattered read' ela= 1671 file#=4097 block#=579715948 blocks=128 obj#=-39778567 tim=17263910674042
WAIT #140196872952648: nam='db file scattered read' ela= 1094 file#=4097 block#=579715949 blocks=128 obj#=-39778567 tim=17263910675443

Before pointing out the oddity (if you haven’t spotted it already) I’ll just explain a few of the numbers that are a little unusual.

  • File# = 4097: the user has parameter db_files = 4096, so this is the first Temp file.
  • Block# = 579,715,946: the database is 120TB, and the temporary tablespace is a “bigfile” tablespace so it’s okay for the file to hold more than 579M blocks.
  • Obj# < 0: Negative object numbers are a characteristic of materialized CTEs: if you look at the execution plan a materialized CTE will be reported as a table with a name like SYS_TEMP_FDA106F9_E259E68.  If you take the first hexadecimal number and treat it as a 32-bit signed integer you get the value that would be reported as the obj# in the trace file.  (Converting to decimal and subtracting power(2,32) is one way of doing the arithmetic; see the sketch just after this list.)
  • tim= nnnnnnnn:  this is the timestamp (usually in microseconds), and we can see intervals of roughly 1,400 to 2,000 microseconds between these lines.
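
As a side note, here’s a minimal SQL sketch of that arithmetic, using the hex string FDA106F9 taken from the temp-table name quoted above:

select
        to_number('FDA106F9','XXXXXXXX') - power(2,32)  obj#
from
        dual
/

This returns -39778567, which is exactly the obj# reported in the trace lines above.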

So here’s the oddity: in this set of 4 consecutive waits we’re waiting for multiblock reads of 128 blocks – but each read starts one block after the previous read. It’s as if Oracle is reading 128 blocks and forgetting everything after the first one. And the timestamps are significant because they tell us that this isn’t a case of Oracle spending so much time between reads that the other blocks fall off the end of  the buffer cache before the query reaches them.

I think I’ve seen a pattern like this once before but it would have been quite a long time ago and I can’t find any notes I might have made about it (and it turns out that my previous experience was not relevant to this case). Fortunately another member of Oracle-l had also seen the pattern and supplied the solution through a reference to a MOS document that led to: Doc ID 2251339.1 With Subquery Factorization Temp Table Does Not Cache in Standby in 12.1.0.2.

It’s not a bug – Oracle is supposed to do this if you manage to materialize a CTE in a read-only standby database. I don’t understand exactly why there’s a problem, but thanks to some feature of how consistent reads operate and how block SCNs are generated when you populate the blocks of the global temporary table (GTT) that is your materialized CTE, it’s possible for Oracle to produce wrong results if it re-visits blocks that have been read into the cache from the GTT. So when you do a multiblock read during a tablescan of the GTT, Oracle can use the first block it has read (presumably because it’s immediately pinned) but can’t use the remaining 127 – and so you get the odd pattern of consecutive blocks appearing at the start of consecutive multiblock reads.

This raises a couple of interesting (and nasty) questions.

  • First – does every 128 block read get loaded into the middle of the buffer cache, pushing another 128 blocks out of the buffer cache, or does Oracle automatically read the blocks to the “cold” end of the LRU, minimising the impact on the rest of the cache? We hope it’s the latter.
  • Second – if I use a small fetch size while running my query, might I find that I have to re-read the same block (with its 127 neighbours) many times because Oracle releases any pinned blocks at the end of each fetch and has to re-acquire the blocks on the next fetch?

If anyone wants to test the second question by running a query from SQL*Plus with extended trace enabled the following simple query should answer the question:

alter session set events '10046 trace name context forever, level 8';
set arraysize 2

with v1 as (select /*+ materialize */ * from all_objects)
select object_name from v1;

Workarounds

There is a workaround to the issue – you can add the hint /*+ inline */ to the query to ensure that the CTE is not materialized. There is a bit of a catch to this, though (on top of the fact that you might then need to have two slightly different versions of the code if you want to run the query on production and standby) – if Oracle places the subquery text inline the optimizer may manage to merge it into the rest of the query and come up with a bad execution plan. Again you can probably work around this threat by extending the hint to read: /*+ inline no_merge */. Even then the optimizer could decide it has better statistics about the “real” table columns that it might have lost when it materialized the subquery, so it could still produce a different execution plan from the materialized plan.

As an alternative (and somewhat more brutal) workaround you could set the hidden parameter “_with_subquery” to inline either at the session or system level, or in the startup parameter file.
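
For reference, here’s a minimal sketch of the two workarounds described above; the hint placement follows the text, but the exact syntax for the hidden parameter is an assumption that you should verify on your own version before relying on it:

rem     Workaround 1: hint the subquery so it is not materialized (and not merged)

with v1 as (
        select /*+ inline no_merge */ * from all_objects
)
select object_name from v1
/

rem     Workaround 2 (more brutal): force inlining of WITH subqueries for the session

alter session set "_with_subquery" = inline;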

 

Time-Series Prediction Beyond Test Data

Andrejus Baranovski - Thu, 2020-01-23 12:13
I was working on an assignment to build a large-scale time-series prediction solution. I ended up using a combination of approaches in a single solution — Prophet, ARIMA and an LSTM Neural Network (running on top of Keras/TensorFlow). With Prophet (Serving Prophet Model with Flask — Predicting Future) and ARIMA it is straightforward to calculate a prediction for future dates: both provide a function to return a prediction for a given future horizon. The same is not obvious with LSTM; if you are new to it, it will require a significant amount of time to research how to forecast true future dates (most of the examples only show how to predict against a test dataset).

I found one good example, though, which I followed and which helped me solve my task — A Quick Example of Time-Series Prediction Using Long Short-Term Memory (LSTM) Networks. In this post, I will show how to predict monthly shampoo sales data, mainly based on the code from the above example.

Read more - Time-Series Prediction Beyond Test Data.


PostgreSQL Meetup at CERN, Geneva, January 17, 2020

Yann Neuhaus - Thu, 2020-01-23 10:37
Registration

Last Friday (17/01/20) I had the opportunity to go to CERN in Geneva to attend a PostgreSQL meetup.
I’m really happy to write a blog post about it, for a lot of reasons. First of all, it was about PostgreSQL (essential these days), then because of contributors such as Oleg Bartunov, and finally because of the organizers: Franck Pachot (old and soon-to-be-new colleague at dbi-services) and Laetitia Avrot, whom I met at the SOUG Day in Lausanne.


After the registration process, where a lot of people were waiting to get their badge, I followed the instructions given to reach Building 503. I was rapidly lost in this labyrinth and had to ask several times how to find the conference room.

Unfortunately I was late and missed the “Welcome & opening speech” at 2pm. Worse, I also missed the first minutes of Gülçin Yıldırım’s session.
I discreetly took a seat close to the door in the conference room. I was immediately impressed, as more than 100 people were already there.
There was clearly a huge interest in this event.

So the first talk was about the evolution of Fault Tolerance in PostgreSQL.
Gülçin mainly focused on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions.
Robustness, replication methods, failover & switchover were the main topics.

The second talk was from Pavlo Golub, a PostgreSQL expert and developer at Cybertec (a software & database partner in Austria).
After some words about the existing scheduling tools in the Postgres community, he introduced a new Open Source tool from Cybertec, called pg_timetable.
He went very deeply in details & discussed some advanced topics like transaction support and cross-platform tasks.
At the end we had a successful demo.

Break

After this session we had a break, as it was 4pm. A great king’s cake party (Galette des Rois), accompanied by excellent ciders, was offered.
We could even choose between the one from Normandy and the one from Brittany.
I tasted both and appreciated them.
By the way, I found the bean twice. I have to admit that this was a really good idea, as the atmosphere was relaxed and friendly.

At around 4:45pm, Anastasia Lubennikova started her session, a comparison between various backup/recovery solutions for PostgreSQL.
The goal was to help us choose the most appropriate tool. She covered several advanced topics in that area, such as incremental backups, archive management, backup validation and retention policies.

The next session, from Vik Fearing, was a short one – Advent of Code Using PostgreSQL.
During his talk, Vik showed some techniques to solve general-purpose programming challenges using just a single query with lateral joins.

Romuald Thion, a PostgreSQL teacher at the University of Lyon (France), followed: his talk was about some lessons learned
since he progressively introduced PostgreSQL into his courses in 2018.
Since then, 400 students from the second to the fifth year have used PostgreSQL in databases, web, security or system administration courses.

Finally, and as a kind of apotheosis for the last session, Oleg Bartunov, a PostgreSQL major contributor and developer, came on stage.
“All You Need Is Postgres” was the title of his session. This could soon become a hymn.
Oleg talked a lot about his long Postgres professional experience since Postgres95, about all the projects he participated in, such as GiST, GIN and SP-GiST, and also full text search and the NoSQL features (hstore and jsonb). He also talked about astronomy and supernovae.
This was really fascinating and we could have listened to him for hours.

Conclusion

A drinks & food party concluded this meetup. It was a nice opportunity to meet all the speakers, to talk with them and also with some customers and old colleagues (Pierre Boizot).
Learn and share, this is part of the dbi-services spirit and matches our values!
One last thing: I would like to address special thanks to Franck Pachot & Laetitia Avrot for the good organization and, above all, the cool atmosphere during this meetup.
Hoping for the next one…

The article PostgreSQL Meetup at CERN, Geneva, January 17, 2020 first appeared on Blog dbi services.

New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI)

Online Apps DBA - Thu, 2020-01-23 07:19

New feature: Restoring Deleted Compartments In Oracle Cloud (OCI) Previously, once the Compartments were deleted in OCI, they couldn’t be restored. With the recent update, now you can restore the deleted Compartments. Check out K21Academy’s post at https://k21academy.com/oci53 that covers: ▪Steps For Restoring The Deleted Compartments ▪What Changes Take Place In The Compartments After Restoring? […]

The post New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI

Online Apps DBA - Thu, 2020-01-23 03:06

Fixed: WebLogic Console For E-Business (EBS) R12 on Oracle Cloud (OCI) Socket Blocked Did you encounter the “The Server is not able to service this request: blocked Socket, weblogic.security.net.FilterException:” error while accessing the Weblogic console from the Oracle EBS environment created from the Oracle EBS Cloud Manager? If YES, then the blog post at https://k21academy.com/ebscloud36 […]

The post [Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding PostgreSQL extensions without being super user?

Yann Neuhaus - Wed, 2020-01-22 09:33

Usually, when you need to install a PostgreSQL extension you do this as superuser (or at least I am doing it like this). The downside of that is, of course, that a superuser must be available whenever a new extension is required, or that all the extensions are installed by default (e.g. in template1). Recently the question popped up internally whether you can install extensions without being superuser. The answer is: yes and no. It depends on the objects that get created by the extension and it depends on the permissions of the user that wants to install the extension. As always, let's do some demos, as this is the easiest way to understand how things work.

Currently there are no extensions installed in my database except for the standard PL/pgSQL extension:

postgres=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

For the purpose of this post let's create a new user without any special permissions other than login:

postgres=# create user a with login password 'a';
CREATE ROLE
postgres=# \du a
           List of roles
 Role name | Attributes | Member of 
-----------+------------+-----------
 a         |            | {}

This user, even if not granted anything, by default has permissions to create objects in the public schema (you should definitely avoid that, check here; a sketch of how to revoke this follows the next listing):

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select current_database();
 current_database 
------------------
 postgres
(1 row)
postgres=> create table tab1 ( a int );
CREATE TABLE
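
As an aside, here is a minimal sketch of how to revoke that default privilege on the public schema (to be run as a superuser):

postgres=# revoke create on schema public from public;
REVOKE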

What this user is not able to do is to use “create extension” to install a new extension into the database:

postgres=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
postgres=> 

Why is that? If we take a look at an extension’s SQL file (ltree, in this case) the first statement is this:

CREATE FUNCTION ltree_in(cstring)
RETURNS ltree
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;

“CREATE FUNCTION” does work as we are able to create objects in the public schema. The issue is this:

postgres=> CREATE FUNCTION ltree_in(cstring)
postgres-> RETURNS ltree
postgres-> AS 'MODULE_PATHNAME'
postgres-> LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
ERROR:  permission denied for language c
postgres=> 

We do not have access to the language. Let's try to grant the required privilege for using the language:

postgres=# grant USAGE on LANGUAGE c to a;
ERROR:  language "c" is not trusted
DETAIL:  GRANT and REVOKE are not allowed on untrusted languages, because only superusers can use untrusted languages.

That does not work either, as C is untrusted. The same is true for "language internal", as in intagg--1.1.sql:

postgres=> CREATE FUNCTION int_agg_state (internal, int4)
postgres-> RETURNS internal
postgres-> AS 'array_agg_transfn'
postgres-> PARALLEL SAFE
postgres-> LANGUAGE INTERNAL;
ERROR:  permission denied for language internal

As all the extensions shipped with standard community PostgreSQL reference either "c" or "internal" somehow, we do not have any chance to get an extension installed as user "a". Let's do another test and create a new database with user "a" as its owner:

postgres=> \c postgres postgres
You are now connected to database "postgres" as user "postgres".
postgres=# create database dba with owner a;
CREATE DATABASE
postgres=# \c dba a
You are now connected to database "dba" as user "a".
dba=> 

Can we install extensions now?

dba=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
dba=> 

Another error message, but it still does not work. We can, however, install PL/Perl as this is a trusted language:

dba=> create extension plperl;
CREATE EXTENSION
dba=> 

Actually PL/Perl is the only extension that can be installed in this scenario, which can be confirmed by a simple PL/pgSQL code block:

dba=> drop extension plperl;
DROP EXTENSION
dba=> do $$
dba$> declare
dba$>   ext record;
dba$>   exception_text text;
dba$> begin
dba$>   for ext in
dba$>       select name
dba$>         from pg_available_extensions
dba$>        order by name
dba$>   loop
dba$>     begin
dba$>        execute 'create extension '||ext.name;
dba$>        raise notice 'SUCCESS for %', ext.name;
dba$>     exception
dba$>        when others
dba$>          then get stacked diagnostics exception_text = MESSAGE_TEXT;
dba$>          raise notice '% failed with %', ext.name, exception_text;
dba$>     end;
dba$>   end loop;
dba$> end $$;
NOTICE:  adminpack failed with permission denied to create extension "adminpack"
NOTICE:  amcheck failed with permission denied to create extension "amcheck"
NOTICE:  autoinc failed with permission denied to create extension "autoinc"
NOTICE:  bloom failed with permission denied to create extension "bloom"
NOTICE:  btree_gin failed with permission denied to create extension "btree_gin"
NOTICE:  btree_gist failed with permission denied to create extension "btree_gist"
NOTICE:  citext failed with permission denied to create extension "citext"
NOTICE:  cube failed with permission denied to create extension "cube"
NOTICE:  dblink failed with permission denied to create extension "dblink"
NOTICE:  dict_int failed with permission denied to create extension "dict_int"
NOTICE:  dict_xsyn failed with permission denied to create extension "dict_xsyn"
NOTICE:  earthdistance failed with required extension "cube" is not installed
NOTICE:  file_fdw failed with permission denied to create extension "file_fdw"
NOTICE:  fuzzystrmatch failed with permission denied to create extension "fuzzystrmatch"
NOTICE:  hstore failed with permission denied to create extension "hstore"
NOTICE:  hstore_plperl failed with required extension "hstore" is not installed
NOTICE:  hstore_plperlu failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython2u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython3u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpythonu failed with required extension "hstore" is not installed
NOTICE:  insert_username failed with permission denied to create extension "insert_username"
NOTICE:  intagg failed with permission denied to create extension "intagg"
NOTICE:  intarray failed with permission denied to create extension "intarray"
NOTICE:  isn failed with permission denied to create extension "isn"
NOTICE:  jsonb_plperl failed with required extension "plperl" is not installed
NOTICE:  jsonb_plperlu failed with required extension "plperlu" is not installed
NOTICE:  jsonb_plpython2u failed with required extension "plpython2u" is not installed
NOTICE:  jsonb_plpython3u failed with required extension "plpython3u" is not installed
NOTICE:  jsonb_plpythonu failed with required extension "plpythonu" is not installed
NOTICE:  lo failed with permission denied to create extension "lo"
NOTICE:  ltree failed with permission denied to create extension "ltree"
NOTICE:  ltree_plpython2u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpython3u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpythonu failed with required extension "ltree" is not installed
NOTICE:  moddatetime failed with permission denied to create extension "moddatetime"
NOTICE:  pageinspect failed with permission denied to create extension "pageinspect"
NOTICE:  pg_buffercache failed with permission denied to create extension "pg_buffercache"
NOTICE:  pg_freespacemap failed with permission denied to create extension "pg_freespacemap"
NOTICE:  pg_prewarm failed with permission denied to create extension "pg_prewarm"
NOTICE:  pg_stat_statements failed with permission denied to create extension "pg_stat_statements"
NOTICE:  pg_trgm failed with permission denied to create extension "pg_trgm"
NOTICE:  pg_visibility failed with permission denied to create extension "pg_visibility"
NOTICE:  pgcrypto failed with permission denied to create extension "pgcrypto"
NOTICE:  pgrowlocks failed with permission denied to create extension "pgrowlocks"
NOTICE:  pgstattuple failed with permission denied to create extension "pgstattuple"
NOTICE:  SUCCESS for plperl
NOTICE:  plperlu failed with permission denied to create extension "plperlu"
NOTICE:  plpgsql failed with extension "plpgsql" already exists
NOTICE:  plpython3u failed with permission denied to create extension "plpython3u"
NOTICE:  postgres_fdw failed with permission denied to create extension "postgres_fdw"
NOTICE:  refint failed with permission denied to create extension "refint"
NOTICE:  seg failed with permission denied to create extension "seg"
NOTICE:  sslinfo failed with permission denied to create extension "sslinfo"
NOTICE:  tablefunc failed with permission denied to create extension "tablefunc"
NOTICE:  tcn failed with permission denied to create extension "tcn"
NOTICE:  tsm_system_rows failed with permission denied to create extension "tsm_system_rows"
NOTICE:  tsm_system_time failed with permission denied to create extension "tsm_system_time"
NOTICE:  unaccent failed with permission denied to create extension "unaccent"
NOTICE:  xml2 failed with permission denied to create extension "xml2"
DO

If you want to install an extension that only creates objects you are allowed to create anyway, that would succeed. Probably the best way to handle extensions is to install all the required ones by default and then grant permissions to the users who need them.
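
A minimal sketch of that approach could look like this (run as superuser; the schema name "extensions", the pg_trgm extension and user "a" are just examples for illustration):

postgres=# create schema extensions;
CREATE SCHEMA
postgres=# create extension pg_trgm with schema extensions;
CREATE EXTENSION
postgres=# grant usage on schema extensions to a;
GRANT
postgres=# alter role a set search_path = "$user",public,extensions;
ALTER ROLE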

The article Adding PostgreSQL extensions without being super user? first appeared on Blog dbi services.
