Feed aggregator

Oracle Sign up: more problems

Dietrich Schroff - Fri, 2019-09-13 14:03
I thought I was successful, but:

I received a mail with the following content:

"We have re-authorized a new, specific amount on the credit/debit card used during the sign up process."

and

"To verify the account you have created, please confirm the specific amount re-authorized."


My problem: there is no "re-authorized amount" on my bank account, and I do not know what "re-authorized" means.
Does it mean this amount is charged to my credit card (then I should see it)?
Or is the process buggy and I was for some reason not charged?
Or is re-authorization something else entirely?



HVR | Real-Time CDC ::Oracle Autonomous DW::

Rittman Mead Consulting - Fri, 2019-09-13 09:49
Introduction

High-quality Business Intelligence is key to decision-making for any successful organisation, and for an increasing number of businesses this means being able to access real-time data. At Rittman Mead we are seeing a big upturn in interest in technologies that will help our customers integrate real-time data into their BI systems. In a world where streaming is revolutionising the way we consume data, log-based Change Data Capture (CDC) comes into the picture as a way of providing access to real-time transactional data for reporting and analysis. Most databases support CDC nowadays; in Oracle, for example, the redo logs are a potential source of CDC data. Integration tools that support CDC are coming to the fore, and one of these is HVR.

HVR is a real-time data integration solution that replicates data to and from many different data management technologies, both on-premise and cloud-based, including Oracle, SAP and Amazon. The HVR GUI makes it easy to connect together multiple databases, replicate entire schemas, initialise the source and target systems and then keep them in sync.

HVR have recently added Oracle’s Autonomous Data Warehouse (ADW) to their list of supported technologies and so in this blog we are stepping through the process of configuring an HVR Channel to replicate data from an Oracle database to an instance of Oracle ADW.


Setup

Before setting up replication you have to install HVR itself. This is simple enough: a fairly manual CLI job with a couple of files to create and save in the correct directories. Firewalls also need to allow all HVR connections. HVR needs a database schema in which to store the repository configuration data, so we created a schema in the source Oracle database. It also needs some additional access rights on the Oracle source database.


Process

1.

The first step is to register the newly created Hub in the HVR GUI. The GUI can be run on any machine that is able to connect to the server on which the HVR hub is installed. We tested two GUI instances, one running on a Windows machine and one on a Mac. Both were easy to install and configure.

[Screenshot: Oracle Autonomous DW]

The database connection details entered here are for the HVR hub database, where metadata about the hub configuration is stored.

2.

Next we need to define our source and target. In both cases the connection between HVR and the data uses standard Oracle database connectivity. The source connection is to a database on the same server as the HVR hub, and the target connection uses a TNS connection pointing at the remote ADW instance.
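For illustration, an ADW entry in the Oracle-supplied tnsnames.ora typically looks like the sketch below; every value here (alias, region host, service name, certificate DN) is a placeholder, not a value from our environment:

```
adw_demo_high =
  (description =
    (address = (protocol = tcps)(port = 1522)(host = adb.eu-frankfurt-1.oraclecloud.com))
    (connect_data = (service_name = abcdefg_adwdemo_high.adwc.oraclecloud.com))
    (security = (ssl_server_cert_dn = "CN=adwc.oraclecloud.com,O=Oracle Corporation,L=Redwood City,ST=California,C=US"))
  )
```

The `_high`, `_medium` and `_low` aliases supplied in the ADW wallet correspond to different resource/concurrency profiles; the target location simply references one of these aliases.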

Defining the source database involves right clicking on Location Configuration and selecting New Location:

[Screenshot: Oracle Autonomous DW]

Configuring the target involves the same steps:

[Screenshot: Oracle Autonomous DW]

You can see from the screenshot that we are using one of the Oracle-supplied tnsnames entries to connect to ADW and also that we are using a separate Oracle client install to connect to ADW. Some actions within HVR use the Oracle Call Interface and require a more recent version of the Oracle client than provided by our 12c database server install.

Next up is creating the “channel”. A channel in HVR groups together the source and target locations and allows the relationship between the two to be defined and maintained. Configuring a new channel involves naming it, defining source and target locations, and then identifying the tables in the source that contain the data to be replicated.

3.

The channel name is defined by right clicking on Channel Definitions and selecting New Channel.

[Screenshot: Oracle Autonomous DW]

We then open the new channel and right click on Location Groups and select New Group to configure the group to contain source locations:

[Screenshot: Oracle Autonomous DW]

The source location is the location we defined in step 2 above. We then right click on the newly created group and select New Action > Capture to define the role of the group in the channel:

[Screenshot: Oracle Autonomous DW]

The Capture action defines that data will be read from the locations in this group.

A second Location Group is needed for the target. This time we gave the group the Integrate action, so that data will be written to the locations in this group.

4.

The final step in defining the channel is to identify the tables we want to replicate. This can be done using the Table Explore menu option when you right-click on Tables:

[Screenshot: Oracle Autonomous DW]

5.

With the channel defined, we can start synchronising the data between the two systems. We are starting with an empty database schema in our ADW target, so we use the HVR Refresh action to first create the target tables in ADW and populate them with the current contents of the source tables. As the Refresh action proceeds we can monitor progress:

[Screenshot: Oracle Autonomous DW]

6.

Now, with the two systems in sync, we can start the process of real-time data integration using the HVR Initialise action. This creates two new jobs in the HVR Scheduler which then need to be started:

[Screenshot: Oracle Autonomous DW]

One more thing to do, of course: test that the channel is working and that replication is happening in real time. We applied a series of inserts, updates and deletes to the source system and monitored the log files for the two scheduled jobs to see the activity captured from the redo logs on the source:

[Screenshot: Oracle Autonomous DW]

and then applied as new transactions on the target:

[Screenshot: Oracle Autonomous DW]

The HVR Compare action allows us to confirm that the source and target are still in sync.

[Screenshot: Oracle Autonomous DW]
Conclusion

Clearly the scenario we are testing here is a simple one. HVR can do much more, supporting one-to-many, many-to-many and also bi-directional replication configurations. Nonetheless, we were impressed with how easy it was to install and configure HVR, and with the simplicity of executing actions and monitoring the channel through the GUI. We dipped into the command line interface when executing some of the longer-running jobs, and this was straightforward too.

Categories: BI & Warehousing

Oracle Cloud: First login

Dietrich Schroff - Thu, 2019-09-12 13:53
After signing up for Oracle Cloud, I tried my first login:

https://cloud.oracle.com/en_US/sign-in


but I only got:
I think the problem is that there is a manual review step on Oracle's side which I have not passed yet:
So let's wait for a day or two...

Oracle Cloud: Sign up failed... [3] & solved

Dietrich Schroff - Tue, 2019-09-10 13:44
Finally (see my attempts here and here) I was able to sign up for Oracle Cloud.
What did the trick?

I got help from Oracle support:
So I used my Gmail address, and this worked:

and then:

Let's see how this cloud works compared to Azure and AWS.

Red Hat Forum Zürich, 2019, some impressions

Yann Neuhaus - Tue, 2019-09-10 07:34

The Red Hat Forum 2019 in Zürich is currently ongoing, and people have just finished lunch before the more technical sessions start.

As expected, a lot is about OpenShift 4 and automation with Ansible. As dbi is a Red Hat Advanced Business Partner, we took the opportunity to be present with a booth, to get in touch with our existing customers and to meet new people:

All the partners got their logo on a huge wall at the entrance to the event:

As the event is getting more and more popular, Red Hat moved to a great and huge location, the StageOne in Zürich Oerlikon, so all of the 850 participants found their space.

There is even space for some fun stuff:

Important as well: the catering was excellent:

The merger with IBM was an important topic and Red Hat again stated several times: Red Hat will stay Red Hat. Let’s see what happens here, not all people trust this statement. All in all it is a great atmosphere here in Oerlikon, great people to discuss with, interesting topics, great organization and a lot of “hybrid cloud”. Technology is moving fast and Red Hat is trying to stay at the front. From a partner perspective the Forum is a great chance to meet the right people within Red Hat, no matter what topic you want to discuss: Technology, marketing, training, whatever. I am pretty sure we will attend the next forum as well.

Cet article Red Hat Forum Zürich, 2019, some impressions est apparu en premier sur Blog dbi services.

Introducing Accelerated Database Recovery with SQL Server 2019

Yann Neuhaus - Tue, 2019-09-10 06:01

SQL Server 2019 RC1 was released a few weeks ago, and it is time to start blogging about my favorite core engine features that will ship with the next version of SQL Server. Things should not be completely different in the RTM, so let's introduce accelerated database recovery (aka ADR), which is mainly designed to solve an annoying issue that most SQL Server DBAs have probably faced at least once: long-running transactions that impact the overall recovery time. As a reminder, with current versions of SQL Server, database recovery time is tied to the largest transaction at the moment of the crash. This is even more true in highly critical environments, where it may have a huge impact on service or application availability, and ADR is a feature that may certainly help.

Image from Microsoft documentation

In order to allow a very fast rollback and recovery process, the SQL Server team completely redesigned the database engine recovery process, and the interesting point is that they introduced row-versioning to achieve it. Row-versioning, however, has existed since SQL Server 2005 through the RCSI and SI isolation levels, and in my opinion it is good news that such capabilities are (finally) being extended to address long recovery times.

Anyway, I performed some tests to get an idea of the benefit of ADR and its impact on the workload. First, I performed a recovery test without ADR: after initiating a long-running transaction, I simply crashed my SQL Server instance. I used an AdventureWorks database with the dbo.bigTransactionHistory table, which is big enough (I think) to get a relevant result.

The activation of ADR is per database, meaning that row-versioning is also managed locally per database. This allows better workload isolation compared to using the global tempdb version store as in previous SQL Server versions.

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = OFF; 

ALTER DATABASE AdventureWorks_dbi SET
	COMPATIBILITY_LEVEL = 150;
GO

 

The dbo.bigTransactionHistory table has only one clustered primary key …

EXEC sp_helpindex 'dbo.bigTransactionHistory';
GO

 

… with 158’272’243 rows and about 2GB of data

EXEC sp_spaceused 'dbo.bigTransactionHistory';
GO

 

I simulated a long-running transaction with the following update query, which touches every row of the dbo.bigTransactionHistory table, to have a relevant impact on the recovery time.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

The related transaction wrote a lot of records into the transaction log, as shown below:

SELECT 
	DB_NAME(database_id) AS [db_name],
	total_log_size_in_bytes / 1024 / 1024 AS size_MB,
	used_log_space_in_percent AS [used_%]
FROM sys.dm_db_log_space_usage;
GO

 

The sys.dm_tran_* and sys.dm_exec_* DMVs may be helpful to dig into the transaction details, including the transaction start time and the log used in the transaction log:

SELECT 
   GETDATE() AS [Current Time],
   [des].[login_name] AS [Login Name],
   DB_NAME ([dtdt].database_id) AS [Database Name],
   [dtdt].[database_transaction_begin_time] AS [Transaction Begin Time],
   [dtdt].[database_transaction_log_bytes_used] / 1024 / 1024 AS [Log Used MB],
   [dtdt].[database_transaction_log_bytes_reserved] / 1024 / 1024 AS [Log Reserved MB],
   SUBSTRING([dest].text, [der].statement_start_offset/2 + 1,
             (CASE WHEN [der].statement_end_offset = -1
                   THEN LEN(CONVERT(nvarchar(max), [dest].text)) * 2
                   ELSE [der].statement_end_offset
              END - [der].statement_start_offset)/2) AS [Query Text]
FROM 
   sys.dm_tran_database_transactions [dtdt]
   INNER JOIN sys.dm_tran_session_transactions [dtst] ON  [dtst].[transaction_id] = [dtdt].[transaction_id]
   INNER JOIN sys.dm_exec_sessions [des] ON  [des].[session_id] = [dtst].[session_id]
   INNER JOIN sys.dm_exec_connections [dec] ON   [dec].[session_id] = [dtst].[session_id]
   LEFT OUTER JOIN sys.dm_exec_requests [der] ON [der].[session_id] = [dtst].[session_id]
   OUTER APPLY sys.dm_exec_sql_text ([der].[sql_handle]) AS [dest]
GO

 

Restarting my SQL Server instance kicked off the AdventureWorks_dbi database recovery process. It took about 6 minutes in my case:

EXEC sp_readerrorlog 0, 1, N'AdventureWorks_dbi'

 

Digging further into the SQL Server error log, I noticed that phase 2 (redo) and phase 3 (undo) of the recovery process took most of the time (as expected).

However, I then performed the same test with ADR enabled for the AdventureWorks_dbi database …

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = ON;

 

… and dug again into the SQL Server error log:

Well, the output above is quite different, but clear and unequivocal: there is a tremendous improvement in the recovery time here. The SQL Server error log indicates the redo phase took 0 ms and the undo phase 119 ms. I also tested different variations of long transactions and of log generated in the transaction log (4.5GB, 9.1GB and 21GB), with and without ADR. With ADR, database recovery remained fast irrespective of the transaction log size, as shown below:

But there is no free lunch when enabling ADR, because it is a row-versioning-based process which may have an impact on the workload. I was curious to compare the performance of my update queries between scenarios with no row-versioning (the default), row-versioning with RCSI only, ADR only, and finally both RCSI and ADR enabled. I performed all my tests on a quad-core virtual machine with an Intel Core i7-6600U CPU @ 2.6GHz and 8GB of RAM. SQL Server memory is capped at 6GB. The underlying storage for the SQL Server data files is a Samsung 850 EVO 1TB SSD.

Here is the first test I performed. It is the same update as before, touching every row of the dbo.bigTransactionHistory table:

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

And here are the results for the different scenarios:

Please don’t focus too strongly on the values here, because they will depend on your context, but the results answer the following questions: will the activation of ADR have an impact on the workload and, if so, is it of the same order of magnitude as RCSI / SI? The results are self-explanatory.

Then I continued my tests by increasing the impact of the long-running transaction with additional updates on the same data, in order to stress the version store a little.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

Here are the new results:

This time ADR seems to have a bigger impact than RCSI in my case. Regardless of the exact values of this test, the key point is that we have to be aware that enabling ADR will have an impact on the workload.

After performing this bunch of tests, it’s time to get the big picture of the ADR design, with several components per database including a persisted version store (PVS), Logical Revert, an sLog and a cleaner process. In this blog post I would like to focus on the PVS component, which acts as a persistent version store for the database concerned. In other words, with ADR, tempdb will no longer be used to store row versions. The interesting point is that RCSI / SI row-versioning continues to be handled through the PVS when ADR is enabled, according to my tests.

A new column named is_accelerated_database_recovery_on has been added to the sys.databases system view. In my case, both RCSI and ADR are enabled in the AdventureWorks_dbi database.

SELECT 
	name AS [database_name],
	is_read_committed_snapshot_on,
	is_accelerated_database_recovery_on
FROM sys.databases
WHERE database_id = DB_ID()

 

The sys.dm_tran_version_store_space_usage DMV displays the total space in tempdb used by the version store for each database, whereas the new sys.dm_tran_persistent_version_store_stats DMV provides information about the new PVS created by the ADR activation.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO

SELECT 
	database_id,
	reserved_page_count / 128 AS reserved_MB
FROM sys.dm_tran_version_store_space_usage;
GO

 

After running my update query, I noticed that the PVS in the AdventureWorks_dbi database was used rather than the version store in tempdb.

So, getting rid of the version store in tempdb seems to be a good idea, and probably more scalable per database, but according to my tests, and without drawing any conclusions yet, it may lead to performance considerations … let’s see what happens in the future …

In addition, from a storage perspective, because SQL Server no longer uses tempdb as the version store, my curiosity led me to see what happens behind the scenes and how the PVS interacts with the data pages where row-versioning comes into play. Let’s do some experiments.

Let’s create a dbo.bigTransactionHistory_row_version table from the dbo.bigTransactionHistory table, with less data:

USE AdventureWorks_dbi;
GO

DROP TABLE IF EXISTS [dbo].[bigTransactionHistory_row_version];

SELECT TOP 1 *
INTO [dbo].[bigTransactionHistory_row_version]
FROM [dbo].[bigTransactionHistory]

 

Now, let’s have a look at the data page that belongs to my dbo.bigTransactionHistory_row_version table, with page ID 499960 in my case:

DBCC TRACEON (3604, -1);
DBCC PAGE (AdventureWorks_dbi, 1, 499960, 3);

 

Versioning info exists in the header, but the version pointer is obviously set to Null because there is no additional row version to maintain in this case; I only inserted one row.

Let’s update the only row that exists in the table as follows:

BEGIN TRAN;
UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1

 

The version pointer has been updated (though I am not sure the information is consistent here, or at least the values displayed are weird). Another interesting point: there is more information than the initial 14 bytes we might expect to keep track of the pointers. There are also 21 extra bytes at the end of the row, as shown above. On the other hand, the sys.dm_db_index_physical_stats() DMF has been updated to reflect the PVS information with new columns inrow_*, total_inrow_* and offrow_*, which may help in understanding some of the PVS internals.

SELECT 
	index_id,
	index_level,
	page_count,
	record_count,
	version_record_count,
	inrow_version_record_count,
	inrow_diff_version_record_count,
	total_inrow_version_payload_size_in_bytes,
	offrow_regular_version_record_count,
	offrow_long_term_version_record_count
FROM sys.dm_db_index_physical_stats(
	DB_ID(), OBJECT_ID('dbo.bigTransactionHistory_row_version'), 
	NULL, 
	NULL, 
	'DETAILED'
)

 

Indeed, referring to the above output and correlating it with what I found inside the data page, I would assume the 21 extra bytes stored in the row reflect a diff (something I still need to confirm) against the previous row version (see the inrow_diff_version_record_count and total_inrow_version_payload_size_in_bytes columns).

Furthermore, if I perform the update operation again on the same data, the storage strategy seems to switch to an off-row mode, if I refer again to the sys.dm_db_index_physical_stats() DMF output:

Let’s go back to the DBCC PAGE output to confirm this assumption:

Indeed, the extra payload has disappeared, and only the 14-byte pointer remains, updated accordingly.

Finally, if I perform multiple updates of the same row, SQL Server should keep the off-row storage and create within it a chain of version pointers and their corresponding values.

BEGIN TRAN;

UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1
GO 100000

 

My assumption is verified by looking at the previous DMVs. The persistent version store size has increased from ~16MB to ~32MB, and we still have one version record in off-row mode, meaning there is still one version pointer that references the off-row structure for my record.

Finally, let’s introduce the cleaner component. As with the tempdb version store, cleanup of old row versions is achieved by an asynchronous process that cleans page versions that are no longer needed. It wakes up periodically, but we can also force it by executing the sp_persistent_version_cleanup stored procedure.

Referring to one of my first tests, the PVS size is about 8GB.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO
-- Running PVS cleanup process
EXEC sp_persistent_version_cleanup

 

According to my tests, the cleanup task took around 6 minutes for the entire PVS, but it was not a blocking process at all. As an ultimate test, I executed in parallel an update query that touched every row of the same table, and it was not blocked by the cleaner, as shown below:

This is a process I need to investigate further. Other posts on the remaining ADR components are coming as well.

See you!

Cet article Introducing Accelerated Database Recovery with SQL Server 2019 est apparu en premier sur Blog dbi services.

POUG Conference 2019

Yann Neuhaus - Mon, 2019-09-09 03:47

POUG (Pint with Oracle Users Group) held its annual conference on 6-7 September in Wroclaw, at the New Horizons Cinema.

My abstract about “MySQL 8.0 Community: Ready for GDPR?” was accepted, so I had the opportunity to be there.

Sessions

My talk was planned for the first day. The new MySQL 8.0 version introduces several improvements around security, and these are the main points I discussed:
– Encryption of redo/undo and binary/relay log files, which complements the existing datafile encryption
– Some password features such as:
* Password Reuse Policy, to prevent a user from always reusing the same passwords
* Password Verification Policy, to require the current password before changing it
* The validate_password component (which replaces the old validate_password plugin), to define a secure password policy through some system variables and 3 different levels
– The new caching_sha2_password plugin, which lets you manage authentication in a faster and more secure way
– SQL Roles, to simplify user access rights management
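As a rough sketch of the role and password-policy features listed above (the user, schema and role names here are made up for illustration, and PASSWORD REQUIRE CURRENT assumes MySQL 8.0.13 or later):

```
-- SQL Roles: define once, grant to many users
CREATE ROLE 'app_read';
GRANT SELECT ON app_db.* TO 'app_read';

-- caching_sha2_password is the default authentication plugin in 8.0
CREATE USER 'alice'@'%' IDENTIFIED WITH caching_sha2_password BY 'S3cure!Pass1';
GRANT 'app_read' TO 'alice'@'%';
SET DEFAULT ROLE 'app_read' TO 'alice'@'%';

-- Password reuse and verification policies, per user
ALTER USER 'alice'@'%'
  PASSWORD HISTORY 5              -- cannot reuse the last 5 passwords
  PASSWORD REUSE INTERVAL 365 DAY -- nor any password used in the last year
  PASSWORD REQUIRE CURRENT;       -- must supply current password to change it
```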

Here are some interesting sessions that I attended.

Keep them out of the database!

How do we avoid unwanted connections having access to our database? Flora Barrièle and Martin Berger explained some possibilities.
The following methods have limitations:
– Filtering through a firewall, because we have to involve the network team
– Using a dedicated listener for each instance, because it is difficult to manage with a large number of databases and environments
To solve these issues we can instead use:
– Connection Manager (a sort of listener with, in addition, a set of rules to define the source, service, activity and destination)
– Access Control Lists (ACLs, a new functionality of Oracle 12.2 which is used to protect PDBs and associated services)
– Logon triggers
– Audit and reports
In conclusion, different solutions exist. First of all we have to know our ecosystem and our environments before deciding what to put in place. Then we should make it as simple as possible, test it, and check what is best for our specific situation.

The MacGyver approach

Lothar Flatz explained an approach for analyzing what's wrong with a query, and how to fix it, when we don't have much time.
The first step is to optimize, and for this we have to know how the optimizer works. Then we can enforce new plans (inserting hints, changing statement text, …) and look at the outline.
Sometimes it's not easy. Lothar's session ended with this quote: “Performance optimization is not magic: it's based on knowledge and facts”.

From transportable tablespaces to pluggable databases

Franck Pachot showed different ways to transport data across different Oracle versions:
– A simple logical move through export/import -> slow
– A logical move including direct-path with Data Pump export/import -> flexible, but slow
– A physical transport with RMAN duplicate -> fast, but not cross-version
– Transportable Tablespaces, which provide a mix of logical move (for metadata) and physical transport (for application/user data) -> fast and flexible (cross-version)
– A physical transport through PDB clone -> fast, efficient, ideal in a multi-tenant environment
– Full Transportable Export/Import to move user tablespaces and other objects such as roles, users, … -> flexible, ideal for moving from 11gR2 to 12c and from non-CDB to multi-tenant, with no need to run scripts on the dictionary
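As a sketch of the last option, a Full Transportable Export/Import can be driven with Data Pump roughly as follows; the connect strings, directory object, file names and datafile path are placeholders, not values from the session:

```
# Full Transportable Export on the source (11.2.0.3+; version=12 targets a 12c import)
expdp system@source_db full=y transportable=always version=12 \
      directory=DP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log

# Copy ftex.dmp and the user tablespace datafiles to the target, then:
impdp system@target_pdb full=y directory=DP_DIR dumpfile=ftex.dmp \
      transport_datafiles='/u01/oradata/target/users01.dbf'
```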

Data Guard new features

The Oracle MAA (Maximum Availability Architecture) describes 4 HA reference architectures in order to align Oracle capabilities with customer service level requirements. Oracle Data Guard can match the Silver, Gold and Platinum reference architectures.
Pieter Van Puymbroeck (Oracle Product Manager for Data Guard) talked about the following new 19c features:
– Flashback operations are propagated automatically to the standby (requirements: configure standby for flashback database and in MOUNT state first, set DB_FLASHBACK_RETENTION_TARGET)
– Restore points are automatically propagated from the primary to the standby
– On the Active Data Guard standby, the database buffer cache state is preserved during a role change
– Multi-Instance Redo Apply (parallel redo log apply in RAC environments)
– Observe-Only mode to test fast-start failover without having any impact on the production database
– New commands such as “show configuration lag;” to check all members, and to export/import the Broker configuration
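For instance, the new broker commands mentioned in the last bullet can be tried from DGMGRL roughly as follows (the configuration file name is illustrative):

```
DGMGRL> show configuration lag;
DGMGRL> export configuration to 'dg_broker_config.xml';
DGMGRL> import configuration from 'dg_broker_config.xml';
```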

Discussion Panel

In the form of a discussion moderated by Kamil Stawiarski, with funny but serious exchanges with the audience, some Oracle Product Managers and other Oracle specialists talked about one of today's most topical subjects: cloud vs. on-prem. Automation, Exadata Cloud at Customer, Oracle documentation and log files, and much more…

Networking moments

Lots of networking moments during this conference: a game in the city center, a speakers' dinner, lunch at the conference, and the party at the Grey Music Club.

As usual it was a real pleasure to share knowledge and meet old friends and new faces.
Thanks to Luiza, Kamil and the ORA-600 Database Whisperers for their warm welcome and for the perfect organization of the event.

A suggestion? Don’t miss it next year!

Cet article POUG Conference 2019 est apparu en premier sur Blog dbi services.

Oracle Cloud: Sign up failed... [2]

Dietrich Schroff - Fri, 2019-09-06 14:36
After my failed registration for Oracle Cloud, I very quickly got an email from Oracle support with the following requirements:
So I tried once again with a Firefox "private" window - but this failed again.
My next idea was to use a completely freshly installed browser, so I tried with a fresh Google Chrome.
But the error still remained:
Let's hope Oracle support has something else that will get me onto Oracle Cloud.

UPDATE:


There is a tiny "click here" link just above the blue button. I have to use this link with the verification code provided by Oracle support.
But then the error is:
I checked this with a VISA and a MASTERCARD. Neither of them worked...

UPDATE 2: see here how the problem was solved.

Oracle OpenWorld and Code One 2019

Tim Hall - Fri, 2019-09-06 02:40

It’s nearly time for the madness to start again. This will be my 14th trip to San Francisco for OpenWorld, and however many it is since JavaOne and Code One got wrapped up into this…

  • Flights booked : ✔
  • Hotel booked : ✔
  • ESTA approved : ✔
  • Irrational fear of flying and general anxiety : ✔
  • 80 lbs weight loss : ❌
  • Talk complete : ❌
  • Denial : ✔

At the moment the scheduled stuff looks like this.

Friday :

  • 03:00 UK time : Start the trip over to SF. I know I said I would never do this again, and I know what the consequences will be…
  • Evening SF time : Groundbreaker Ambassador Dinner

Saturday : Day : ACE Director Briefing

Sunday :

  • Day : Groundbreaker Ambassador Briefing
  • Evening : Oracle ACE Dinner

Tuesday :

Session ID: DEV1314
The Seven Deadly Sins of SQL
Date: 17th Sept 2019
Time: 11:30 – 12:15

Wednesday :

Session ID: DEV6013
Embracing Constant Technical Innovation in Our Daily Life
Date: 18th Sept 2019
Time: 16:00 – 16:45
Panel: Gustavo Gonzalez, Sven Bernhardt, Debra Lilley, Francisco Munoz Alvarez, Me

Thursday : Fly home.

Friday : Arrive home, have a post-conference breakdown and promise myself I’ll never do it again…

In addition to those I have to schedule in the following:

  • A shift on the Groundbreakers Hub, but I’m not sure what day or what demo yet. I’ll probably hang around there a lot anyway.
  • Meet a photographer to get some photos done. I’ve told them they’ve got to be tasteful and “only above the waist”.
  • Spend some time annoying everyone on the demo grounds. I know Kris and Jeff are desperate to see me. It’s the highlight of their year!
  • Stalk Wim Coekaerts, whilst maintaining an air of ambivalence, so as not to give the game away. Can anyone else hear Bette Midler singing “Wind Beneath My Wings”? No? Just me?

There’s a whole bunch of other stuff too, but I’ve not got through all my emails yet. Just looking at this is giving me the fear. So much for my year off conferences…

See you there!

Cheers

Tim…

Oracle OpenWorld and Code One 2019 was first posted on September 6, 2019 at 8:40 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Tableau | Dashboard Design ::Revoke A50 Petition Data::

Rittman Mead Consulting - Mon, 2019-09-02 03:00

Dashboards are most powerful through visual simplicity. They’re designed to automatically keep track of a specific set of metrics and keep human beings updated. Visual overload is like a binary demon in analytics that many developers seem possessed by; but less is more.

For example, many qualified drivers know very little about their dashboard beyond the speed, revs, temperature and fuel gauges. When an additional dash warning light comes on, even if it is just the tyre pressure icon, let alone the engine diagnostics light, most people will just take their car to the garage. The most obvious metrics in a car concern its operation; if you didn't know your speed while driving you'd feel pretty blind. The additional, less obvious metrics (i.e. dash warning lights) are more likely to be picked up by the second type of person who will spend the most time with that car: its mechanic. It would be pointless to overload a regular driver with all the data the car can possibly output in one go; that would just intimidate them. That's not what you want a car to do to the driver, and that's certainly not what any organisation would want their operatives to feel like while their “car” is moving.

In light of recent political events, the exact same principle can metaphorically be applied to the big red Brexit bus. Making sense of it all might be a stretch too far for this article. Still, with appropriate use of Tableau dashboard design it is possible to answer seemingly critical questions on the topic with publicly available data.



There's An Ongoing Question That Needs Answering
Where did 6 million+ signatures really come from?


Back in the UK, the Brexit fiasco is definitely still ongoing. Just before the recent A50 extensions took place, a petition to revoke Article 50 and remain in the EU attracted more than 6 million signatures, becoming the biggest and fastest-growing petition in history and sparking right-wing criticism over the origin of thousands of signatures, with claims that most came from overseas, discrediting its legitimacy. The government responded by rejecting the petition.

Thankfully the data is publicly available (https://petition.parliament.uk/petitions/241584.json) for us to use as an example of how a dashboard can be designed to settle such a question (potentially in real time too as more signatures come in).

Tableau can handle JSON data quite well and, to nobody’s surprise, we quickly discover that over 95% of signatures are coming from the UK.
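Since the petition feed is plain JSON, that country split can also be sanity-checked outside Tableau. Below is a minimal, hedged sketch in Python: it assumes the UK Parliament petition layout, where per-country totals live under `data.attributes.signatures_by_country` with `name`, `code` and `signature_count` fields; the sample data here is synthetic, not the real counts.

```python
def country_breakdown(petition):
    """Return [(country, count, pct), ...] sorted by count, descending.

    Assumes the UK Parliament petition JSON layout, where per-country
    totals live under data.attributes.signatures_by_country.
    """
    rows = petition["data"]["attributes"]["signatures_by_country"]
    total = sum(r["signature_count"] for r in rows)
    return sorted(
        ((r["name"], r["signature_count"], 100.0 * r["signature_count"] / total)
         for r in rows),
        key=lambda t: t[1], reverse=True)

# Tiny synthetic sample in the same shape as the live feed
sample = {"data": {"attributes": {"signatures_by_country": [
    {"name": "United Kingdom", "code": "GB", "signature_count": 5800000},
    {"name": "France", "code": "FR", "signature_count": 45000},
    {"name": "Spain", "code": "ES", "signature_count": 30000},
]}}}

for name, count, pct in country_breakdown(sample):
    print(f"{name:15s} {count:>9,d}  {pct:5.1f}%")
```

Pointing `country_breakdown` at the live JSON from the petition URL (fetched with any HTTP client) gives the same >95% UK figure the dashboard shows.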

Now that we know what we're dealing with, let's focus the map on Britain and provide the additional countries' data in a format that is easier to digest visually. As cool as it is to hover over the world map, there are simpler ways to take this in.

Because in this case we know more than 95% of signatures originate from the UK, the heatmap above is far more useful, showing us the signature count for each constituency at a glance. The hotter the shading, the higher the count.


Scales Might Need Calibration
Bar Chart All The Way


Humans of all levels read a bar chart well, and it's perfect for what we need to know: how many signatures are coming from abroad altogether, and from which countries, in descending order.

With a margin so tiny, it's trickier to get a visual that makes sense. A pie chart, for example, would hardly display the smaller slice containing all of the non-UK signatures. Even with a bar chart we struggle to see anything outside of the UK on a linear scale; but it is perfect with a logarithmic scale, which is definitely a must in this scenario.

And voila! The logarithmic scale allows the remaining counts to appear alongside the UK, even though France, the country with the most signatures after the UK, has a count below 50k. This means we can keep an eye on the outliers in more detail quite effortlessly. Not much looks out of place right now considering the number of expats Britain produces to the countries on the list. Now we know that, as long as none of the other countries turn red, we have nothing to worry about!


Innovate When Needed

The logarithmic scale in Tableau isn't as useful for these percentages, so hacking the visualised values in order to amplify the data sections of interest is a perfectly valid way of thinking outside the box. In this example, half the graph is dedicated to 90-100% and the other half to 0-90%. The blue chunk is the percentage of signatures coming from the UK, while every other country's colour chunk is still tiny. Since the totals from other countries are about the same as each mainland constituency, it's more useful to see them as one chunk. Lastly, adding the heat colour coding keeps the visual integrity.

Interactivity

Now that we have the count, percentage and location breakdown into 3 simple graphs we feel much wiser. So it's time to make them interact with each other.

The constituency heatmap doesn't need to interact with the bar charts. The correlation between the hottest bars and the heatmap is obvious from the get go, but if we were to filter the bars using the map, the percentages would be so tiny you wouldn't see much on the % graph. The same applies to the country bar chart, meaning that only the percentage chart can usefully be used as a filter. Selecting the yellow chunk will show the count of signatures for every country within it only.

Another way in which interactivity can be introduced is through adding further visualisations to the tooltip. The petition data contains the MP responsible for each constituency, so we can effectively put a count of signatures to each name. It's nice to be able to see what their parliamentary voting record has been throughout this Brexit deadlock, which was obtained publicly from the House of Commons portal https://commonsvotes.digiminster.com and blended in; as more votes come in, the list will automatically grow.

Keep It Simple

As you can see, 3 is a magic number here. The trio of visuals working together makes a dashing delivery of intel to the brain. With very little effort, we can see how many signatures come from the UK compared to rest of the world, how many thousands are coming from each country, how many from each constituency, who the MP you should be writing to is and how they voted in the indicative votes. Furthermore, this dashboard can keep track of all of that in real time, flagging any incoming surge of signatures from abroad, continuously counting the additional signatures until August 2019 and providing a transparent record of parliamentary votes in a format that is very easy to visually digest.

Categories: BI & Warehousing

Oracle Cloud: Sign up failed...

Dietrich Schroff - Sun, 2019-09-01 08:38
Yesterday I tried to sign up for Oracle Cloud:

So let's start the registration process:


The mobile number verification is done via SMS, and after entering the 7-digit PIN you are allowed to enter a password:

As payment information only credit cards are accepted:
  • VISA
  • Mastercard
  • Amex


Even though my credit card was accepted:



"Your credit card has been successfully validated. Please proceed to complete the Sign up."
I got the following error:

"We're unable to process your transaction. Please contact Oracle Customer Service."
The link "Oracle Customer Service" did not work, so I used the chat support. But no agent was available in the chat and only "Send E-Mail" worked. Let's see what kind of response I will be given...

EDIT: Some further attempts...

EDIT 2: see here how the problem was solved.  

Ubuntu Linux: Change from Legacy boot to UEFI boot after installation

Dietrich Schroff - Sat, 2019-08-31 10:01
This weekend I did an installation of Linux on a laptop where Windows 10 was already installed.
Because the laptop did not recognize my Linux boot USB stick, I changed from UEFI to legacy mode and the installation went through without any problem.

At the end GRUB was in place but the Windows installation was not listed. This is because the Windows installation does not support legacy booting (it was installed in UEFI mode).

The problem: if I switched back to UEFI, the Linux installation did not start anymore.

My solution:
  • Change to UEFI and boot with a live linux
  • Install boot-repair into the live linux
    (https://help.ubuntu.com/community/Boot-Repair)
    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install -y boot-repair
  • Then run boot repair
    boot-repair
  • Follow the instructions on the Boot-Repair homepage (s. above)
  • Enter the commands from the following popups:


And after removing the live CD I got a GRUB boot menu where Windows was in place and working (and the Ubuntu Linux worked, too ;-))

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia

Online Apps DBA - Sat, 2019-08-31 05:29

New Region Added: Sydney, Australia. In 2019, up to August, Oracle added 7 new regions in its Gen 2 Cloud (OCI), with a lot more in the pipeline. This means you now have 11 regions in total: 4 with 3 availability domains and 7 with a single availability domain. If you want to get the full picture related […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK

Yann Neuhaus - Sat, 2019-08-31 03:00

Recently, we had a project to modernize a little bit a pretty old Documentum installation. As part of this project, there was a refresh of the Application Server hosting a WebTop 6.8. In this blog, I will be talking about an issue that we faced with the encryption and decryption of passwords in the refreshed environment. This new environment was using WebLogic 12.1.3 with the latest PSU in conjunction with JDK 1.8u192. Since WebTop 6.8 P08, JDK 1.8u111 is supported, so a newer version of JDK 8 should mostly work without much trouble.

To properly deploy a WebTop application, you will need to encrypt some passwords like the Preferences or Preset passwords. Doing so in the new environment unfortunately failed:

[weblogic@wls_01 ~]$ work_dir=/tmp/work
[weblogic@wls_01 ~]$ cd ${work_dir}/
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ jar -xf webtop_6.8_P27.war WEB-INF/classes WEB-INF/lib
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ kc="${work_dir}/WEB-INF/classes/com/documentum/web/formext/session/KeystoreCredentials.properties"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,use_dfc_config_dir=[^$]*,use_dfc_config_dir=false," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,keystore.file.location=[^$]*,keystore.file.location=${work_dir}," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep -E "^use_dfc_config_dir|^keystore.file.location" ${kc}
use_dfc_config_dir=false
keystore.file.location=/tmp/work
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ enc_classpath="${work_dir}/WEB-INF/classes:${work_dir}/WEB-INF/lib/*"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ java -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 11:02:23 AM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.decryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:64)
[weblogic@wls_01 work]$

 

As you can see above, the encryption of password is failing with some error. The issue is that starting with the JDK 1.8u171, Oracle introduced some new restrictions. From the Oracle release note (JDK-8189997):

New Features
security-libs/javax.crypto
Enhanced KeyStore Mechanisms
A new security property named jceks.key.serialFilter has been introduced. If this filter is configured, the JCEKS KeyStore uses it during the deserialization of the encrypted Key object stored inside a SecretKeyEntry. If it is not configured or if the filter result is UNDECIDED (for example, none of the patterns match), then the filter configured by jdk.serialFilter is consulted.

If the system property jceks.key.serialFilter is also supplied, it supersedes the security property value defined here.

The filter pattern uses the same format as jdk.serialFilter. The default pattern allows java.lang.Enum, java.security.KeyRep, java.security.KeyRep$Type, and javax.crypto.spec.SecretKeySpec but rejects all the others.

Customers storing a SecretKey that does not serialize to the above types must modify the filter to make the key extractable.

 

On recent versions of Documentum Administrator, for example, there is no issue because it complies, but WebTop 6.8 doesn't, and therefore to be able to encrypt/decrypt the passwords you will have to modify the filter. There are several solutions to our current problem:

  • Downgrade the JDK: this isn't a good solution since it might introduce security vulnerabilities and it will also prevent you from upgrading it in the future so…
  • Extend the ‘jceks.key.serialFilter‘ definition inside the ‘$JAVA_HOME/jre/lib/security/java.security‘ file: that's a possibility, but it means that any process using this Java will use the updated filter list. Whether that's fine is up to you
  • Override the ‘jceks.key.serialFilter‘ definition using a JVM startup parameter on a per-process basis: better control over which processes are allowed to use updated filters and which aren't

 

So the simplest, and most probably the best, way to solve this issue is to simply add a command line parameter specifying that you want to allow some additional classes. By default, the ‘java.security‘ file provides a list of classes that are allowed, ending with ‘!*‘, which means that everything else is forbidden.

[weblogic@wls_01 work]$ grep -A2 "^jceks.key.serialFilter" $JAVA_HOME/jre/lib/security/java.security
jceks.key.serialFilter = java.lang.Enum;java.security.KeyRep;\
  java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*

[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep "^security.provider" $JAVA_HOME/jre/lib/security/java.security
security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=com.rsa.jsse.JsseProvider
security.provider.3=sun.security.provider.Sun
security.provider.4=sun.security.rsa.SunRsaSign
security.provider.5=sun.security.ec.SunEC
security.provider.6=com.sun.net.ssl.internal.ssl.Provider
security.provider.7=com.sun.crypto.provider.SunJCE
security.provider.8=sun.security.jgss.SunProvider
security.provider.9=com.sun.security.sasl.Provider
security.provider.10=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.11=sun.security.smartcardio.SunPCSC
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using an empty parameter allows everything (not the best idea)
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using the default value from java.security causes the issue
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:05:08 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:06:14 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.jcm.f.di, array length: -1, nRefs: 3, depth: 2, bytes: 141, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.jcm.f.* + com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$

 

So as you can see above, to encrypt passwords for WebTop 6.8 using JDK 8u171+, you will need to add both ‘com.rsa.cryptoj.o.nc‘ and ‘com.rsa.jcm.f.*‘ to the allowed list. There is a wildcard for the JCM one because several classes from this package are required.

The above was for the encryption of the password. That’s fine but obviously, when you will deploy WebTop, it will need to decrypt these passwords at some point… So you will also need to put the same JVM parameter for the process of your Application Server (for the Managed Server’s process in WebLogic):

-Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*'

 

You can change the order of the classes in the list; it just needs to be before the ‘!*‘ section because everything after that is ignored.

 

Cet article Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK est apparu en premier sur Blog dbi services.

Getting started with Hyper-V on Windows 10

The Oracle Instructor - Fri, 2019-08-30 03:27

Microsoft Windows 10 comes with its own virtualization software called Hyper-V – though not in the Windows 10 Home edition.

Check if you fulfill the requirements by opening a CMD shell and typing in systeminfo:

The below part of the output from systeminfo should look like this:

If you see No there instead, you need to enable virtualization in your BIOS settings.

Next you go to Programs and Features and click on Turn Windows features on or off:

You need Administrator rights for that. Then tick the checkbox for Hyper-V:

That requires a restart at the end:

Afterwards you can use the Hyper-V Manager:

Hyper-V can do similar things to VMware or VirtualBox. It doesn’t play well together with VirtualBox in my experience, though: VirtualBox VMs refused to start with errors like “VT-x is not available” after I installed Hyper-V. I also found it a bit trickier to handle than VirtualBox, but that’s maybe just because of me being less familiar with it.

The reason I use it now is that one of our customers who wants to do an Exasol Administration training cannot use VirtualBox – but Hyper-V is okay for them. And now it looks like that’s also an option. My testing so far shows that our educational cluster installation and management labs also work with Hyper-V.

Categories: DBA Blogs

Cloning of RDS Instance to Another Account

Pakistan's First Oracle Blog - Wed, 2019-08-28 21:01
Frequently, we need to refresh our development RDS-based Oracle database from production, which is in another account in AWS. So we take a snapshot from production, share it with the other account and then restore it in the target from the snapshot.

I will post the full process in a later post, but for now I'm just sharing an issue we encountered today. While trying to share a snapshot with another account, I got the following error:


Sharing snapshots encrypted with the default service key for RDS is currently not supported.


Now, this snapshot was using the default RDS keys, and that is not supported. So in order to share it, we need to have customer managed keys, copy this snapshot with these new keys, and only then can we share it. You don't have to do anything at the target, as these customer managed keys become part of that snapshot. You can create customer managed keys in the KMS console and maybe assign them to the IAM user you are using.
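As a sketch of how that copy-and-share flow can be scripted, here is a hedged boto3 example. The snapshot and key identifiers, the naming convention and the region are all placeholders; the two API calls (`copy_db_snapshot` with a `KmsKeyId`, then `modify_db_snapshot_attribute` on the `restore` attribute) are the standard RDS operations for this.

```python
def shared_snapshot_id(source_id, suffix="cmk-copy"):
    """Name for the re-encrypted copy of the snapshot (hypothetical convention)."""
    return f"{source_id}-{suffix}"

def copy_and_share(source_id, cmk_key_id, target_account_id, region):
    """Copy an RDS snapshot under a customer managed KMS key, then share it.

    The copy step is required because snapshots encrypted with the
    default aws/rds key cannot be shared across accounts.
    """
    import boto3  # imported here so the helper above stays dependency-free
    rds = boto3.client("rds", region_name=region)
    target_id = shared_snapshot_id(source_id)
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=source_id,
        TargetDBSnapshotIdentifier=target_id,
        KmsKeyId=cmk_key_id)            # the customer managed key
    # Wait until the copy is available before changing its attributes
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=target_id)
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier=target_id,
        AttributeName="restore",        # grants restore permission
        ValuesToAdd=[target_account_id])
    return target_id
```

In the target account you would then copy the shared snapshot again (with a key of your own) and restore from that copy.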


I hope it helps.
Categories: DBA Blogs

Kafka | IoT Ecosystem ::Cluster; Performance Metrics; Sensorboards & OBD-II::

Rittman Mead Consulting - Wed, 2019-08-28 04:30

Infrastructure is the place to start and the keyword here is scalability. Whether it needs to run on premise, on cloud or both, Kafka makes it possible to scale at low complexity cost when more brokers are either required or made redundant. It is also equally easy to deploy nodes and nest them in different networks and geographical locations. As for IoT devices, whether it’s a taxi company, a haulage fleet, a racing team or just a personal car, Kafka can make use of the existing vehicle OBDII port using the same process; whether it’s a recording studio or a server room packed with sensitive electronic equipment where climate control is critical, sensorboards can be quickly deployed and stream almost immediately into the same Kafka ecosystem. Essentially, pretty much anything that can generate data and run Python will be able to join this ecosystem.


In large data centres it is fundamental to keep a close eye on misbehaving nodes: possibly overheating, constantly failing jobs or causing unexpected issues. Fires can occur too. This is quite a challenge with thousands and thousands of nodes. Kafka, though, allows all of the node stats to individually stream in real time and get picked up by any database or machine, using Kafka Connect or kafka-python for consumption.

To demonstrate this on a smaller scale with a RaspberryPi 3 B+ cluster and test a humble variety of different conditions, a cluster of 7 nodes, Pleiades, was set up. Then, to make it easier to identify them, each computer was named after the respective stars of the Pleiades constellation.

  • 4 nodes {Alcyone; Atlas; Pleione; Maia} in a stack with cooling fans and heatsinks

  • 1 node in metal case with heatsink {Merope}

  • 1 node in plastic case {Taygeta}

  • 1 node in touchscreen plastic case {Electra}
::Yes. It's a portable Retropie, Kafka broker & perfect for Grafana dashboards too::

Every single node has been equipped with the same python Kafka-producer script, from which the stream is updated every second in real-time under 1 topic, Pleiades. Measures taken include CPU-Percentage-%, CPU-Temperature, Total-Free-Memory, Available-System-Memory, CPU-Current-Hz.
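The per-node producer script itself isn't shown in the post, so the sketch below is an illustrative reconstruction, not the original: it assumes `psutil` and `kafka-python` are installed, that `alcyone:9092` is a reachable broker, and that CPU temperature can be read from the Pi's sysfs. Field names are mine, not the original script's.

```python
import json
import socket
import time

def read_cpu_temp_c(path="/sys/class/thermal/thermal_zone0/temp"):
    """Raspberry Pi exposes CPU temperature in millidegrees Celsius."""
    try:
        with open(path) as f:
            return int(f.read().strip()) / 1000.0
    except OSError:
        return None  # not on a Pi, or the sensor is missing

def metrics_message(host, cpu_pct, cpu_temp, free_mem, avail_mem, cpu_hz):
    """Serialise one reading with the measures named in the article."""
    return json.dumps({
        "host": host,
        "ts": time.time(),
        "cpu_percent": cpu_pct,
        "cpu_temp_c": cpu_temp,
        "total_free_memory": free_mem,
        "available_system_memory": avail_mem,
        "cpu_current_hz": cpu_hz,
    }).encode("utf-8")

def run_producer(bootstrap="alcyone:9092", topic="Pleiades"):
    """Stream one reading per second into the Pleiades topic."""
    import psutil
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    host = socket.gethostname()
    while True:
        vm = psutil.virtual_memory()
        freq = psutil.cpu_freq()
        producer.send(topic, metrics_message(
            host,
            psutil.cpu_percent(interval=None),
            read_cpu_temp_c(),
            vm.free, vm.available,
            freq.current if freq else None))
        time.sleep(1)
```

Running the same script on every node, keyed by hostname, is what lets a single topic carry the whole cluster's stats.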


Kafka then connects to InfluxDB on Pleione, which can be queried from the terminal through a desktop or Android SSH client. Nothing to worry about in terms of duplication, load balancing or gaps in the data. Worst case scenario, InfluxDB crashes and the data will still be retrievable, using KSQL to rebuild the gap in the DB depending on the retention policy set.
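The Kafka-to-InfluxDB hop can be sketched the same way. This hedged example assumes `kafka-python` and the `influxdb` client library, a database on Pleione, and readings shaped like the producer's JSON; it builds InfluxDB line-protocol records by hand, with illustrative measurement and field names.

```python
import json

def to_line_protocol(measurement, reading):
    """Build one InfluxDB line-protocol record from a JSON reading.

    Tags the point by host and drops null fields; Influx expects the
    timestamp in nanoseconds.
    """
    fields = ",".join(
        f"{k}={v}" for k, v in reading.items()
        if k not in ("host", "ts") and v is not None)
    ns = int(reading["ts"] * 1e9)
    return f"{measurement},host={reading['host']} {fields} {ns}"

def run_consumer(bootstrap="alcyone:9092", topic="Pleiades"):
    """Consume readings and write them to InfluxDB on Pleione."""
    from influxdb import InfluxDBClient
    from kafka import KafkaConsumer
    db = InfluxDBClient(host="pleione", database="pleiades")
    for record in KafkaConsumer(topic, bootstrap_servers=bootstrap):
        line = to_line_protocol("node_stats", json.loads(record.value))
        db.write_points([line], protocol="line")
```

Kafka Connect's InfluxDB sink would do the same job without custom code; the hand-rolled consumer just makes the data flow explicit.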


We can query InfluxDB directly from the command line. The Measure (InfluxDB table) for Pleiades is looking good and holding plenty of data for us to see in Grafana next.


A live feed is then delivered with Grafana dashboards. It's worth noting how mobile friendly these dashboards really are.


At a glance, we know the critical factors such as how much available memory there is and how much processing power is being used, for the whole cluster as well as each individual node, in real time and anywhere in the world (with an internet connection).

It has been observed that the nodes in the stack remain fairly cool and stable between 37 °C and 43 °C, whereas the nodes in plastic cases sit around 63 °C. Merope is in the metal casing with a heatsink, so it makes sense to see it right in the middle there at 52 °C. Spikes in temperature and CPU usage are directly linked to running processes, and these spikes are followed by software crashes. Moving some of the processes from the plastic enclosures over to the stack nodes stopped Grafana from choking; this was a recurring issue when connecting to the dashboards from an external network. Kafka made it possible to track the problem in real time and come up with a solution much more quickly and effortlessly, then immediately track whether that solution was the correct approach. In the end, the SD cards between Electra and Pleione were quickly swapped, effectively moving Pleione to the fan-cooled stack where it was much happier living.

If too many spikes begin to occur, we should expect nodes to soon need maintenance, repair or replacement. KSQL makes it possible to tap into the Kafka streams and join them to DW-stored data to forecast these events with increased precision and notification time. It's machine-learning heaven as a platform. KSQL also makes it possible to join 2 streams together and thus create a brand new stream, so to add external environment metrics and see how they may affect our cluster metrics, a sensorboard on a Raspberry Pi Zero W was set up, producing data into our Kafka ecosystem too.


To keep track of the room conditions where the cluster sits, an EnviroPhat sensor board is being used. It measures temperature, pressure, colour and motion. There are many available sensorboards for SBCs like RaspberryPi that can just as easily be added to this Kafka ecosystem. Again, important to emphasize both data streams and dashboards can be accessed from anywhere with an internet connection.


OBDII data from vehicles can be added to the ecosystem just as well. There are a few ways this can be achieved. The most practical, cable-free option is with a Bluetooth ELM327 device. This is a low-cost adaptor that can be purchased and installed on pretty much any vehicle built after 1995. The adaptor plugs into the OBDII socket in the vehicle and connects via Bluetooth to a Pi Zero W, which then connects to a mobile phone’s 4G set up as a wi-fi hotspot. Once the data is flowing, as far as needing a Kafka topic goes, the create command is pretty straightforward.
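A hedged sketch of what such an OBD producer could look like, assuming the `python-obd` and `kafka-python` libraries and that the ELM327 adaptor is already bound to a serial port the library can auto-detect; the PID list and field names here are illustrative, not the actual obd-producer script.

```python
import json
import time

# Illustrative subset of PIDs to poll each cycle
PIDS = ["RPM", "SPEED", "COOLANT_TEMP", "INTAKE_TEMP", "THROTTLE_POS"]

def obd_message(readings):
    """Serialise one polling cycle; missing values arrive as None."""
    return json.dumps({"ts": time.time(), **readings}).encode("utf-8")

def run_obd_producer(bootstrap="alcyone:9092", topic="OBD"):
    """Poll the ELM327 adaptor via python-obd and stream into Kafka."""
    import obd
    from kafka import KafkaProducer
    connection = obd.OBD()  # auto-detects the serial port
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    while True:
        readings = {}
        for pid in PIDS:
            resp = connection.query(obd.commands[pid])
            readings[pid.lower()] = (None if resp.is_null()
                                     else float(resp.value.magnitude))
        producer.send(topic, obd_message(readings))
        time.sleep(1)
```

From there the OBD topic flows through the same consumer/InfluxDB/Grafana path as the node stats.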


With the obd-producer python script running, another equally straightforward command opens up the console consumer for the topic OBD on Alcyone, and we can check whether the OBD data is flowing through Kafka. A quick check on my phone reveals we have flow.


To make things more interesting, the non-fan nodes in plastic and metal enclosures {Taygeta; Electra; Merope} were moved to a different geographical location and set up under a different network. This makes network outages and power cuts less likely to affect our dashboard services or our ability to access the IoT data. Adding cloud services to mirror this setup at this point would make it virtually bulletproof; zero points of failure is the aim of the game. When the car is on the move, Kafka is updating InfluxDB + Grafana in real time, and the intel can be tracked live as it happens from a laptop, desktop or phone, anywhere in the world.


In a fleet scenario, harsh braking could trigger a warning and have the on-duty tracking team take immediate action; if the accelerometer spikes as well, then that could suggest an accident may have just occurred or payload checks may be necessary. Fuel management systems could pick up on driving patterns and below average MPG performance, even sense when the driver is perhaps not having the best day. This is where the value of Kafka in IoT and the possibilities of using ML algorithms really becomes apparent because it makes all of this possible in real time without a huge overhead of complexity.


After plugging the OBDII Bluetooth adapter into the old e92-335i and driving it for 20 minutes, having it automatically stream data over the internet to the Kafka master, Alcyone, and automatically create and update an OBD InfluxDB measure on Pleione, it can quickly be observed in Grafana that it doesn't enjoy idling much; the coolant and intake air temperatures dropped right down as it started moving at a reasonable speed. This kind of correlation is easier to spot in time-series Grafana dashboards, whereas it would be far less intuitive with standard vehicle dashboards that provide only current values.


So now a real bare-metal infrastructure exists – a self-monitoring, low-power-consumption cluster, spread across multiple geographical locations, keeping track of enviro-sensor producers from multiple places/rooms, logging all vehicle data and learning to detect problems as far ahead as possible – and adding sensor data pickup points to this Kafka ecosystem is as simple as its inherent scalability suggests. With the right Kafka-Fu, pretty much everything is plug-&-play from this point onwards, meaning we can now go on to connecting, centralising and automating as many things in life as possible, turning them into IoT with Kafka as the core engine under the hood.

Categories: BI & Warehousing

OAC Row Limits and Scale Up or Down

Rittman Mead Consulting - Wed, 2019-08-28 04:26

I created an OAC instance the other day for some analysis in preparation for my OOW talk, and during the analytic journey I hit the row limit with the error Exceeded configured maximum number of allowed input records.


Since a few releases back, each OAC instance has fixed row limits depending on the number of OCPUs assigned, which can be checked in the related documentation, with the current ones shown in the table below.


If you plan using BI Publisher (included in OAC a few versions ago) check also the related limits.


Since in my analytical journey I reached the row limit, I wanted to scale up my instance, but surprise surprise, the Scale Up or Down option wasn't available.


After some research I understood that scaling up and down is available only if you originally chose more than one OCPU. This is in line with Oracle's suggestion to use 1 OCPU only for non-production instances, as stated in the instance creation GUI.


When originally choosing an OAC instance with 4 OCPUs, the Scale Up/Down option becomes available (you need to start the instance first).

When choosing the scale option, we can decide whether to increase/decrease the number of OCPUs.

Please note that the number of OCPUs we can increase/decrease by may be limited, depending on availability and current usage.

Concluding, if you want to be able to Scale Up/Down your OAC instances depending on your analytic/traffic requirements, always start your instance with a number of OCPUs greater than one!

Categories: BI & Warehousing

AW-argh

Jonathan Lewis - Tue, 2019-08-27 09:59

This is another of the blog notes that have been sitting around for several years – in this case since May 2014, based on a script I wrote a year earlier. It makes an important point about “inconsistency” of timing in the way that Oracle records statistics of work done. As a consequence of being first drafted in May 2014 the original examples showed AWR results from 10.2.0.5 and 11.2.0.4 – I’ve just run the same test on 19.3.0.0 to see if anything has changed.

 

[Originally drafted May 2014]: I had to post this as a reminder of how easy it is to forget things – especially when there are small but significant changes between versions. It’s based loosely on a conversation from Oracle-L, but I’m going to work the issue in the opposite order by running some code and showing you the ongoing performance statistics rather than the usual AWR approach of reading the performance stats and trying to guess what happened.

The demonstration needs two sessions to run; it’s based on one session running some CPU-intensive SQL inside an anonymous PL/SQL block, with a second session launching AWR snapshots at carefully timed moments. Here’s the code for the working session:

rem
rem     Script:         awr_timing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2013
rem

alter session set "_old_connect_by_enabled"=true;

create table kill_cpu(n, primary key(n))
organization index
as
select  rownum n
from    all_objects
where   rownum <= 26 -- > comment to avoid wordpress format issue
;

execute dbms_stats.gather_table_stats(user,'kill_cpu')

pause Take an AWR snapshot from another session and when it has completed press return

declare
        m_ct    number;
begin

        select  count(*) X
        into    m_ct
        from    kill_cpu
        connect by
                n > prior n
        start with
                n = 1
        ;

        dbms_lock.sleep(30);

end;
/

You may recognise an old piece of SQL that I’ve often used as a way of stressing a CPU and seeing how fast Oracle can run. The “alter session” at the top of the code is necessary to use the pre-10g method of running a “connect by” query so that the SQL does a huge number of buffer gets (and “buffer is pinned count” visits). On my current laptop the query takes about 45 seconds (all CPU) to complete. I’ve wrapped this query inside a pl/sql block that then sleeps for 30 seconds.
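
As an aside (my arithmetic, not part of the original script): the reason this query burns so much CPU is that the connect by n &gt; prior n clause enumerates every strictly increasing chain starting at n = 1 - one chain per subset of {2, …, 26} - so a quick sanity check of the expected row count is:

```sql
-- Rough sanity check (my arithmetic, not from the original post):
-- each strictly increasing chain starting at n = 1 corresponds to a
-- subset of {2,...,26}, so the query should return 2^(26-1) rows.
select power(2, 26 - 1) as expected_rows from dual;
-- 33554432, which lines up with the ~33.5M logical reads in the footnote
```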

From the second session you need to launch an AWR snapshot 4 times – once in the pause shown above, then (approximately) every 30 seconds thereafter. The second one should execute while the SQL statement is still running, the third one should execute while the sleep(30) is taking place, and the fourth one should execute after the pl/sql block has ended and the SQL*Plus prompt is visible.
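
A minimal sketch of what the second session runs at each of the four timing points (the standard dbms_workload_repository call - assuming, of course, that you are licensed to use AWR):

```sql
-- Run once at each of the four timing points described above.
execute dbms_workload_repository.create_snapshot;

-- Afterwards, check which snap_ids to feed to awrrpt.sql:
select snap_id, end_interval_time
from   dba_hist_snapshot
order by snap_id;
```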

Once you’ve got 4 snapshots you can generate 3 AWR reports. The question to ask then is: “what do the reports say about CPU usage?” Here are a few (paraphrased) numbers – starting with 10.2.0.5, comparing the “Top 5 timed events”, “Time Model”, and “Instance Activity” sections. There are three sets of figures: the first reported while the SQL was still running, the second reported after the SQL statement had completed while the dbms_lock.sleep() call was executing, the last reported after the PL/SQL block had completed. There are some little oddities in the numbers due to background “noise” – but the key points are still clearly visible:

While the SQL was executing
Top 5
-----
CPU Time                       26 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     26.9        100.0
DB CPU                                       26.2         97.6

Instance Activity
-----------------
CPU used by this session         0.65 seconds
recursive cpu usage              0.67 seconds

SQL ordered by CPU
------------------
31 seconds reported for both the SQL and PLSQL

During the sleep()
Top 5
-----
CPU Time                        19 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     19.0        100.0
DB CPU                                       18.6         98.1

Instance Activity
-----------------
CPU used by this session         0.66 seconds
recursive cpu usage             44.82 seconds

SQL ordered by CPU
------------------
14 seconds reported for both the SQL and PLSQL

After the PL/SQL block ended
Top 5
-----
CPU Time                         1 second

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.4         99.9
DB CPU                                        1.4         99.7

Instance Activity
-----------------
CPU used by this session        44.68 seconds
recursive cpu usage              0.50 seconds

SQL ordered by CPU
------------------
1 second reported for the PLSQL, but the SQL was not reported

Points to notice:

While the SQL was executing (and had been executing for about 26 seconds), the Time Model mechanism was recording the work done by the SQL, and the Top N information echoed the Time Model CPU figure. At the same time the “CPU used …” Instance Activity statistics had not recorded any CPU time for the session – and they won’t until the SQL statement completes. Despite this, the “SQL ordered by …” reports double-count in real time, showing the SQL and the PL/SQL cursors as consuming (with rounding errors, presumably) 31 seconds each.

After the SQL execution was over and the session was sleeping, the Time Model (hence the Top 5) had recorded a further 19 seconds of work. The Instance Activity, however, had now accumulated 44 seconds of CPU, but only as “recursive cpu usage” (recursive because our SQL was called from within a PL/SQL block), with no “CPU used by this session”. The “SQL ordered by …” figures have recorded the amount of CPU used in the interval (i.e. 14 seconds – which is a little off) against both the SQL and the PL/SQL.

After the PL/SQL block has completed, the Time Model and the Top 5 report both say that nothing much happened in the interval, but the Instance Activity suddenly reports 44.68 seconds of CPU used by this session – which (roughly speaking) is truish, as the PL/SQL block ended and assigned the accumulated recursive CPU usage to the session CPU figure. Finally, when we get down to the “SQL ordered by CPU”, the SQL was not reported – it did no work in the interval – while the PL/SQL block was still reported, but with only a generous 1 second of CPU, since all it did in the interval was finish the sleep call and tidy up the stack before ending.

Now the same sets of figures for 11.2.0.4 – there’s a lot of similarity, but one significant difference:

While the SQL was executing

Top 5
-----
CPU Time                        26.6 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     27.0        100.0
DB CPU                                       26.6         98.5

Instance Activity
-----------------
CPU used by this session         1.09 seconds
recursive cpu usage              1.07 seconds

SQL ordered by CPU
------------------
25.6 seconds reported for both the SQL and PLSQL

During the sleep()
Top 5
-----
CPU Time                        15.1 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     15.3         99.8
DB CPU                                       15.1         98.2

Instance Activity
-----------------
CPU used by this session        41.09 seconds
recursive cpu usage             41.03 seconds

SQL ordered by CPU
------------------
14.3 seconds reported for the SQL
13.9 seconds reported for the PLSQL

After the PL/SQL block ended
Top 5
-----
CPU Time                         1.4 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.5         99.6
DB CPU                                        1.4         95.4

Instance Activity
-----------------
CPU used by this session         1.02 seconds
recursive cpu usage              0.95 seconds

SQL ordered by CPU
------------------
0.5 seconds reported for the PLSQL, and no sign of the SQL

Spot the one difference in the pattern – during the sleep() the Instance Activity Statistic “CPU used by this session” is recording the full CPU time for the complete query, whereas the time for the query appeared only in the “recursive cpu” in the 10.2.0.5 report.

I frequently point out that for a proper understanding of the content of an AWR report you need to cross-check the different ways in which Oracle reports “the same” information. This is often to warn you about checking underlying figures before jumping to conclusions about “hit ratios”; sometimes it’s to remind you that while the Top 5 might say some average looks okay, the event histogram may say that what you’re looking at is mostly excellent with an occasional disaster thrown in. In this blog note I just want to remind you that if you only ever look at one set of figures about CPU usage there are a few special effects (particularly relating to long-running PL/SQL / Java / SQL) where you may have to work out a pattern of behaviour to explain unexpectedly large (or small) and contradictory figures. The key to the problem is recognising that different statistics may be updated at different stages in a complex process.
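
A quick way to watch that divergence live (my sketch, not from the original test) is to poll the working session’s statistics from the second session while the PL/SQL block is running, rather than waiting for the AWR snapshots:

```sql
-- My sketch: poll the working session's CPU statistics in real time.
-- Replace &sid with the SID of the session running the PL/SQL block;
-- "CPU used by this session" stays near zero until the block ends,
-- while "recursive cpu usage" climbs as soon as the SQL completes.
select sn.name, st.value
from   v$sesstat  st
join   v$statname sn on sn.statistic# = st.statistic#
where  st.sid = &sid
and    sn.name in ('CPU used by this session', 'recursive cpu usage');
```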

Footnote

I doubt if many people still run 11.2.0.4, so I also re-ran the test on 19.3.0.0 before publishing. The behaviour hasn’t changed since 11.2.0.4 although the query ran a little faster, perhaps due to changes in the mechanisms for this type of “connect by pump”.

11.2.0.4 stats

Name                                            Value
----                                            -----
session logical reads                      33,554,435
consistent gets                            33,554,435
consistent gets from cache                 33,554,435
consistent gets from cache (fastpath)      33,554,431
no work - consistent read gets             33,554,431
index scans kdiixs1                        33,554,433
buffer is not pinned count                 16,777,219


19.3.0.0 stats

Name                                            Value
----                                            -----
session logical reads                      16,843,299
consistent gets                            16,843,299
consistent gets from cache                 16,843,299
consistent gets pin                        16,843,298
consistent gets pin (fastpath)             16,843,298
no work - consistent read gets             16,790,166
index range scans                          33,554,433
buffer is not pinned count                 16,790,169

Some changes are trivial (like the change of name for “index scans kdiixs1”) some are interesting (like some gets not being labelled as “pin” and “pin (fastpath)”), some are baffling (like how you can manage 33M index range scans while doing only 16M buffer gets!)

Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners

Online Apps DBA - Tue, 2019-08-27 05:56

Are you a beginner in Oracle Cloud at Customers (C@C) and looking for an overview of Oracle C@C & its offerings? If YES, then the blog post at https://k21academy.com/oci47 is a perfect fit! The blog post discusses: ➥ What is Oracle C@C? ➥ How is Oracle C@C beneficial to you? ➥ Oracle C@C’s Offerings: Cloud at […]

The post Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
