Feed aggregator

Conditional index

Tom Kyte - 2 hours 30 min ago
Tom, Thanks for taking my question. I am trying to conditionally index rows in a table. In SQL Server 2008 there is a feature called filtered indexes that allows you to create an index with a where clause. So I have a table abc: <code>create...
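
For context, the usual Oracle workaround is a function-based index that maps unwanted rows to NULL, because rows whose index key is entirely NULL are not stored in a B-tree index. A minimal sketch with hypothetical column names and filter value:

create index abc_open_idx on abc (case when status = 'OPEN' then id end);

-- queries must repeat the same expression to pick up the index, e.g.:
-- select * from abc where (case when status = 'OPEN' then id end) = :id;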
Categories: DBA Blogs

Transfer data from one db to another db over db link using trigger

Tom Kyte - 2 hours 30 min ago
Hi, I am working on a project in which data marts are involved. We are creating triggers to transfer data from OLTP DB to data mart (Online extraction). Following is the code of a trigger for a table involving clob column. I have seen different solut...
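
For context, a minimal, hypothetical sketch of such a trigger (the table, column and database link names are invented, and the CLOB-specific handling the question asks about is not addressed here):

create or replace trigger trg_orders_to_mart
  after insert on orders
  for each row
begin
  insert into orders_mart@mart_link (order_id, customer_id, order_date)
  values (:new.order_id, :new.customer_id, :new.order_date);
end;
/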
Categories: DBA Blogs

Introducing Data Hub Cloud Service to Manage Apache Cassandra and More

OTN TechBlog - 4 hours 16 min ago

Today we are introducing the general availability of the Oracle Data Hub Cloud Service. With Data Hub, developers are now able to initialize and run Apache Cassandra clusters on demand without having to manage backups, patching and scaling themselves. Oracle Data Hub is a foundation for other databases, such as MongoDB, Postgres and more, coming in the future. Read the full press release from OpenWorld 2017.

The Data Hub Cloud Service provides the following key benefits:

  • Dynamic Scalability – users have access to an API and a web console to easily perform operations such as scale-up/scale-down or scale-out/scale-in in minutes, and to size their clusters according to their needs.
  • Full Control – as development teams migrate from an on-premises environment to the cloud, they continue to have full secure shell (SSH) access to the underlying virtual machines (VMs) hosting these database clusters, so that they can log in and perform management tasks in the same way they have been doing.

Developers may be looking for more than relational data management for their applications. MySQL and Oracle Database have been available on Oracle Cloud for quite some time. Today, application developers are looking for the flexibility to choose the database technology according to the data models they use within their application. This use-case-specific approach enables developers to choose the Oracle Database Cloud Service when appropriate and, in other cases, to choose other database technologies such as MySQL, MongoDB, Redis, Apache Cassandra, etc.

In such a polyglot development environment, enterprise IT faces the key challenge of supporting such open source database technologies within the organization while lowering the total cost of ownership (TCO) of managing them. This is precisely the problem that the Oracle Data Hub Cloud Service addresses.

How to Use Data Hub Cloud Service

Using the Data Hub Cloud Service to provision, administer or monitor an Apache Cassandra database cluster is simple. You can create an Apache Cassandra database cluster with as many nodes as you like in two simple steps:

  • Step 1
    • Choose between Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic regions
    • Choose between the latest (3.11) and stable (3.10) Apache Cassandra database versions
  • Step 2
    • Choose the cluster size, compute shape (processor cores) and the storage size. Don't worry about choosing the right value here. You can always dynamically resize when you need additional compute power or storage.
    • Provide the shell access information so that you have full control of your database clusters.

Flexibility to choose the Database Version

When you create the cluster, you have the flexibility to choose the Apache Cassandra version. Additionally, you can easily patch to the latest release as it becomes available for that version. Once you choose to apply a patch, the service applies it across your cluster in a rolling fashion to minimize downtime.

Dynamic Scaling

During provisioning, you have the flexibility to choose the cluster size, the compute shapes (compute cores and memory), and the storage sizes for all the nodes within the cluster. This flexibility allows you to choose the compute and storage shapes that best meet your workload and performance requirements.
If you want to add either additional nodes to your cluster (commonly referred to as scale-out) or additional storage to the nodes in the cluster, you can easily do so using the Data Hub Cloud Service API or console. So you don't have to worry about sizing your workload at provisioning time.

Full Control

You have full shell access to all the nodes within the cluster, so you have full control over the underlying database and its storage. You also have the flexibility to log in to these nodes and configure the database instances to meet your scalability and performance requirements.

Once you select Create, the service creates the compute instances, attaches the block volumes to the nodes and then lays out the Apache Cassandra binaries on each of the nodes in the cluster. On the Oracle Cloud Infrastructure Classic platform, the service also automatically enables the network access rules so that you can begin to use the CQL (Cassandra Query Language) tool to create your Cassandra database. On the Oracle Cloud Infrastructure platform, you have full control and flexibility to create this cluster within a specific subnet in the virtual cloud network (VCN).

Getting Started

This service is accessible via the Oracle My Services dashboard for users already on Universal Credits. And if you're not already using Oracle Cloud, you can start off with free Cloud credits to explore the services. We'd appreciate it if you would give this service a spin and share your feedback.

Additional Reference

Query Flat Files in S3 with Amazon Athena

Pakistan's First Oracle Blog - Tue, 2017-11-21 21:01
Amazon Athena enables you to access data present in flat files stored in S3 (Simple Storage Service) as if it were in a table in a database. Better yet, you don't have to set up any server or any other software to accomplish this.

That's another glowing example of being 'Serverless.'


So if a telecommunications company has hundreds of thousands or more call detail record (CDR) files in CSV, Apache Parquet or any other supported format, they can simply be uploaded to an S3 bucket, and then, using AWS Athena, that CDR data can be queried using well-known ANSI SQL.
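
As a hedged illustration (the bucket, table and column names below are invented), you define an external table over the S3 location and then query it with plain SQL:

CREATE EXTERNAL TABLE cdr (
  caller_number  string,
  called_number  string,
  call_start     timestamp,
  duration_secs  int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-telco-bucket/cdr/';

SELECT caller_number, SUM(duration_secs) AS total_secs
FROM cdr
GROUP BY caller_number
ORDER BY total_secs DESC
LIMIT 10;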

Ease of use, performance, and cost savings are a few of the benefits of the AWS Athena service. True to the Cloud promise, with Athena you are charged for what you actually do; i.e. you are only charged for your queries, at $5 per terabyte scanned. Beyond S3 there are no additional storage costs.

So if you have a huge amount of formatted data in files and all you want to do is query that data using familiar ANSI SQL, then AWS Athena is the way to go. Beware that Athena is not for enterprise reporting and business intelligence; for that purpose there is AWS Redshift. Athena is also not for running highly distributed processing frameworks such as Hadoop; for that purpose there is AWS EMR. Athena is most suitable for running interactive queries on your supported formatted data in S3.

Remember to keep reading the AWS Athena documentation as it will keep improving, lifting limitations, and changing like everything else in the cloud.
Categories: DBA Blogs

RMAN and archivelogs

Tom Kyte - Tue, 2017-11-21 18:26
Hi, I have read quite a bit on Oracles RMAN utility and know that for hot backups RMAN doesn't use old method of placing tablespaces in Archive log mode freezing datafile headers & writing changes to Redo/ Archive logs. Hence a company with a larg...
Categories: DBA Blogs

Difference between "consistent gets direct" and "physical reads direct"

Tom Kyte - Tue, 2017-11-21 18:26
Hi Tom/Team, Could you explain the difference between "consistent gets direct" and "physical reads direct"? Thanks & Regards
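
For reference, both figures are statistics exposed by the dynamic performance views, so you can watch them directly, for example:

select name, value
from   v$sysstat
where  name in ('consistent gets direct', 'physical reads direct');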
Categories: DBA Blogs

Linuxgiving! The Things We do With and For Oracle Linux

OTN TechBlog - Tue, 2017-11-21 17:00

By: Sergio Leunissen - VP, Operating Systems & Virtualization 

It is almost Thanksgiving, so you may be thinking about things that you're thankful for – good food, family and friends.  When it comes to making your (an enterprise software developer's) work life better, your list might include Docker, Kubernetes, VirtualBox and GitHub. I'll bet Oracle Linux wasn't on your list, but here's why it should be…

As enterprises move to the Cloud and DevOps increases in importance, application development also has to move faster. Here’s where Oracle Linux comes in. Not only is Oracle Linux free to download and use, but it also comes pre-configured with access to our Oracle Linux yum server with tons of extra packages to address your development cravings, including:

If you're still craving something sweet, you can add less complexity to your list: with Oracle Linux you'll have the advantage of running the exact same OS and version in development as you do in production (on-premises or in the cloud).

Related content

And we're constantly working on ways to spice up your experience with Linux, from things as simple as "make it boot faster," to always-available diagnostics for network filesystem mounts, to ways large systems can efficiently parallelize tasks. These posts, from members of the Oracle Linux Kernel Development team, will show you how we are doing this:

Accelerating Linux Boot Time

Pasha Tatashin describes optimizations to the kernel to speed up booting Linux, especially on large systems with many cores and large memory sizes.

Tracing NFS: Beyond tcpdump

Chuck Lever describes how we are investigating new ways to trace NFS client operations under heavy load and on high-performance network fabrics so that system administrators can better observe and troubleshoot this network file system.

ktask: A Generic Framework for Parallelizing CPU-Intensive Work

Daniel Jordan describes a framework that’s been submitted to the Linux community which makes better use of available system resources to perform large scale housekeeping tasks initiated by the kernel or through system calls.

On top of this, you can have your pumpkin, apple or whatever pie you like and eat it too – since Oracle Linux Premier Support is included with your Oracle Cloud Infrastructure subscription – yes, that includes Ksplice zero down-time updates and much more at no additional cost.

Most everyone's business runs on Linux now; it's at the core of today's cloud computing. There are still areas to improve, but if you look closely, Oracle Linux is the OS you'll want for app/dev in your enterprise.

Partner Webcast – Identity Management Update: IDM 12c Release

Oracle Identity Management, a well-recognized offering by Oracle, enables organizations to effectively manage the end-to-end lifecycle of user identities across all enterprise resources, both within...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Scaling Oracle using NVMe flash

Gerger Consulting - Tue, 2017-11-21 07:37
Attend the free webinar by storage expert Venky Nagapudi and learn how to improve the performance of your Oracle Database using new storage technologies such as NVMe flash. 

About the Webinar
Growth in users and data puts an ever-increasing strain on transactional and analytics platforms. With many options available to scale platforms, what are the considerations and what are others choosing? Vexata's VP of Product Management, Venky Nagapudi, covers how the latest storage-side technologies, like NVMe flash, can deliver vast improvements in performance while also driving down the cost and complexity of platforms. He will also cover key use cases where storage-side solutions delivered amazing results for Vexata's customers.
In this webinar, you will:
  • Hear real-world performance scaling use cases.
  • Review the pros & cons of common scaling options.
  • See specific results of choosing a storage-side solution.


About the Presenter


Venky Nagapudi has 20 years of experience in engineering and product management in the storage, networking and computer industries. Venky led product management at EMC and Applied Microsystems, held engineering leadership roles at Intel and Brocade, and holds 10 patents. He has an MBA from the Haas School of Business at UC Berkeley, an MSEE from North Carolina State University, and a BSEE from IIT Madras.

Sign up now.

Categories: Development

Partner Webcast – Recovery Appliance (ZDLRA) - Data protection for Oracle Database

Today’s solutions for protecting business data fail to meet the needs of mission critical enterprise databases. They lose up to a day of business data on every restore, place a heavy load on...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Import data from Flat File to two different Table using UTL_FILE.

Tom Kyte - Tue, 2017-11-21 00:06
Hi Please help this Question. Import data from Following Flat File to two different Table using UTL_FILE. a. EMP and b. DEPT Note --- 1. In Last Line NULL Employee Should not Entry into Table. 2. Deptno Should go to both the Table EMP a...
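
For context, the core UTL_FILE read loop typically looks like the sketch below; the directory object, file name and line layout are assumptions, and the parsing plus the two INSERTs into EMP and DEPT would go where the comment is:

declare
  f    utl_file.file_type;
  line varchar2(4000);
begin
  f := utl_file.fopen('DATA_DIR', 'empdept.txt', 'R');
  loop
    begin
      utl_file.get_line(f, line);
    exception
      when no_data_found then exit;  -- end of file reached
    end;
    -- parse "line" here and insert into DEPT and/or EMP,
    -- skipping any line with a NULL employee
    null;
  end loop;
  utl_file.fclose(f);
end;
/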
Categories: DBA Blogs

Repeating parent-nodes in hierarchical query

Tom Kyte - Tue, 2017-11-21 00:06
Hello AskTOM Team, with the schema as provided in the LiveSQL Link (which is simply the example EMP/DEPT schema), I ran the following query <code> select case when LEVEL = 1 then ENAME else rp...
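
For context, a minimal hierarchical query against the same EMP schema that carries the parent (manager) name on every row using PRIOR, the pattern the question builds on:

select level, ename, prior ename as manager
from   emp
start with mgr is null
connect by prior empno = mgr;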
Categories: DBA Blogs

Using Fast Offline Conversion to Enable Transparent Data Encryption in EBS

Steven Chan - Mon, 2017-11-20 13:07

We are pleased to announce a new capability that enables you to perform offline, in-place conversion of datafiles for use with Transparent Data Encryption (TDE). This Fast Offline Conversion feature is now available for use with Oracle E-Business Suite 12.1.3, and 12.2.2 and later 12.2.x releases.

What does this feature do?

Fast Offline Conversion converts existing clear data to TDE-encrypted tablespaces.

The encryption is transparent to the application, so code does not have to be rewritten and existing SQL statements will work unchanged. Any authorized database session can read the encrypted data: the encryption only applies to the database datafiles and backups.

This new process is now the recommended procedure for converting to TDE with minimal downtime and lowest complexity. It supersedes previous methods for converting to TDE.

How do I go about using this feature?

You enable Fast Offline Conversion by applying a patch to your EBS environment's 12.1.0.2 or 11.2.0.4 database. The patch, which is available on request from Oracle Support, enables offline, in-place TDE conversion of datafiles.
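
As a hedged illustration only (the tablespace name is a placeholder, the TDE keystore is assumed to be configured and open, and the supported EBS procedure is in the My Oracle Support document referenced below), the offline conversion of a single tablespace follows the 12.2-style syntax that the patch makes available:

alter tablespace my_data_ts offline normal;
alter tablespace my_data_ts encryption offline encrypt;
alter tablespace my_data_ts online;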

Where are the detailed instructions?

Full steps for enabling Fast Offline Conversion are provided in the following My Oracle Support knowledge document:

Related Articles

Categories: APPS Blogs

Enabling Fluid for Firefox on Linux for PeopleTools 8.54 and 8.55

Javier Delgado - Mon, 2017-11-20 08:09
At one of our customers we came across an issue whereby users connecting to PeopleSoft using Firefox on Ubuntu would be shown the classic home page instead of the Fluid landing page.

After some research, we found out that this happens in PeopleTools 8.54 and 8.55 due to a known issue. Document 2235517.1 in My Oracle Support indicates that the issue is resolved in PeopleTools 8.56.



So we started looking for workarounds, until we finally found one, which was to modify the following file under the web server directory:

%PS_CFG_HOME%\webserv\%domain%\applications\peoplesoft\PORTAL.war\WEB-INF\psftdocs\%site%\browscap

In this file, we included the following changes:

(...)
browserPlatform=MACIPAD;
if mac
browserPlatform=MAC;

# customer - author - BEGIN
if (?=.*Linux)
browserPlatform=WIN8;
# customer - author - END
}

{# Form Factors
(...)
if (iPhone)|(iPad)|(iPod)
browserPlatformClass=ios;
if mac
browserPlatformClass=mac;
browserDeviceTypeClass=pc

# customer - author - BEGIN
if (?=.*Linux)
browserPlatformClass=win;
browserDeviceTypeClass=pc
# customer - author - END
}
if android
browserPlatformClass=android;
(...)

Once this was done, and after rebooting the web server, the issue was solved.

Note: I would like to thank Nicolás Zocco for his invaluable contribution in finding this workaround.

firewalld rules for Veritas Infoscale 7.3 with Oracle

Yann Neuhaus - Mon, 2017-11-20 06:30

You might wonder, but yes, Veritas is still alive and there are customers that use it and are very happy with it. Recently we upgraded a large cluster from Veritas 5/RHEL5 to Veritas InfoScale 7.3/RHEL7 and I must say that the migration was straightforward and very smooth (when I have time I'll write another post specific to the migration). At one point during this project the requirement to enable the firewall on the Linux hosts came up, so we needed to figure out all the ports and then set up the firewall rules for them. This is how we did it…

The first step was to create a new zone because we did not want to modify any of the default zones:

root@:/home/oracle/ [] firewall-cmd --permanent --new-zone=OracleVeritas
root@:/home/oracle/ [] firewall-cmd --reload
success
root@:/home/oracle/ [] firewall-cmd --get-zones
OracleVeritas block dmz drop external home internal public trusted work

The ports required for Veritas InfoScale are documented here. This is the set of ports we defined:

##### SSH
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-service=ssh
##### Veritas ports
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=4145/udp            # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=4145/tcp            # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=5634/tcp            # xprtld
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=8199/tcp            # vras
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=8989/tcp            # vxreserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14141/tcp           # had
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14144/tcp           # notifier
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14144/udp           # notifier
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14149/tcp           # vcsauthserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14149/udp           # vcsauthserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14150/tcp           # CmdServer
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14155/tcp           # wac
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14155/udp           # wac
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14156/tcp           # steward
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14156/udp           # steward
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=443/tcp             # Vxspserv
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=49152-65535/tcp     # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=49152-65535/udp     # vxio
#### Oracle ports
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=1521/tcp            # listener
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=3872/tcp            # cloud control agent

Because we wanted the firewall only on the public network, but not on the interconnect, we changed the interfaces for the zone:

root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=bond0
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=eth0
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=eth2

One additional step to make this active is to add the zone to the interface configuration (this is done automatically if the interfaces are under control of network manager):

root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-eth0
root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-eth2
root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-bond0

Restart the firewall service:

root@:/home/oracle/ [] systemctl restart firewalld

… and it should be active:

root@:/home/postgres/ [] firewall-cmd --get-active-zones
OracleVeritas
  interfaces: eth0 eth2 bond0
public
  interfaces: eth1 eth3

root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --list-all
OracleVeritas (active)
  target: default
  icmp-block-inversion: no
  interfaces: bond0 eth0 eth2
  sources: 
  services: 
  ports: 4145/udp 4145/tcp 5634/tcp 8199/tcp 8989/tcp 14141/tcp 14144/tcp 14144/udp 14149/tcp 14149/udp 14150/tcp 14155/tcp 14155/udp 14156/tcp 14156/udp 443/tcp 49152-65535/tcp 49152-65535/udp 1521/tcp 3872/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Just for completeness: You can also directly check the configuration file for the zone:

root@:/home/oracle/ [] cat /etc/firewalld/zones/OracleVeritas.xml

Hope this helps …

 

The post firewalld rules for Veritas Infoscale 7.3 with Oracle appeared first on Blog dbi services.

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

David Kurtz - Mon, 2017-11-20 05:55
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible, the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors. Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join between RANGE_FROM_nn and the attribute on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables and so make use of the Bloom filter, means that statistics on the selector tables would inevitably be out of date. The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 
Compound Triggers on Tree Selector Tables
There are a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after row section captures the current selector number. The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROMnn and RANGE_TOnn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats was called within an autonomous transaction it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package that uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then submits a job to database job scheduler that will collect statistics on the partition in the conventional way using dbms_job. The job number is recorded on the selector log. The job will be picked up by the scheduler when the insert commits. However, there can be a short lag between scheduling the job, and it running. The first query in the nVision report can be parsed before the statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique as this would lead to locking between sessions. Locking during partition statistics maintenance will not occur because no two sessions populate the same selector number, and each selector is in a different partition. A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table (see the example below). 
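A hedged example of setting that preference with DBMS_STATS (the SYSADM schema is an assumption; the table name matches the partitioning example above):
begin
  dbms_stats.set_table_prefs(ownname => 'SYSADM'
                            ,tabname => 'PSTREESELECT10'
                            ,pname   => 'GRANULARITY'
                            ,pvalue  => 'PARTITION');
end;
/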
The combination of dynamic selectors, single-value joins, interval partitioning of the selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and that make effective use of engineered system features.
However, there are two problems that then have to be worked around. 
Library Cache Contention 
Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions and maintains the data dictionary acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created. Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas. The nVision reports still connect and run with the privileges of the Owner ID (usually SYSADM), but the objects are referenced from the current schema. 
I have used a hash function to distribute nVision processes between schemas. I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Thus different nVision reports use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thereby avoiding the library cache locking.
Limitation on the Maximum Number of Partitions
In Oracle, it is not possible to have more than 1048576 partitions in a table. That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package, which will drop unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number from reaching the maximum.  

What causes a materialized view to get invalidated

Tom Kyte - Mon, 2017-11-20 05:46
Hello, I have a materialized view whose definition looks like this: CREATE MATERIALIZED VIEW <owner>.<materialized view name> (<column list>) TABLESPACE <tablespace name> PCTUSED 0 PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE (...
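
For reference, a quick way to check whether a materialized view is currently stale or in need of recompilation is to query the data dictionary (the view name below is a placeholder):

select mview_name, staleness, compile_state, last_refresh_type, last_refresh_date
from   user_mviews
where  mview_name = 'MY_MV';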
Categories: DBA Blogs

Distributed Option in Oracle 7

Tom Kyte - Mon, 2017-11-20 05:46
Hi Tom! Is the Oracle Distributed Option required for accessing remote databases? Also, is there some restriction on database version, i.e. should the version be the same on all nodes?
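
For context, remote database access in Oracle is done over a database link; a minimal, hypothetical sketch (link name, credentials and TNS alias are placeholders):

create database link remote_db
  connect to scott identified by tiger
  using 'REMOTE_TNS_ALIAS';

select count(*) from emp@remote_db;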
Categories: DBA Blogs

Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part Two

Amis Blog - Sun, 2017-11-19 15:22

This article describes – in two installments – how events are used to communicate a change in a data record owned by the Customer microservice to consumers such as the Order microservice that has some details about the modified customer in its bound context. The first installment described the implementation of the Customer microservice – using MongoDB for its private data store and producing events to Event Hub cloud service to inform other microservices about updates in customer records. In the installment you are reading right now, the Order microservice is introduced – implemented in Node, running on Application Container Cloud, bound to Oracle Database in the cloud and consuming events from Event Hub. These events include the CustomerModified event published by the Customer microservice and used by the Order microservice to synchronize its bound context.

The provisioning and configuration of the Oracle Public Cloud services used in this article is described in detail in this article: Prepare and link bind Oracle Database Cloud, Application Container Cloud, Application Container Cache and Event Hub.

The sources for this article are available on GitHub: https://github.com/lucasjellema/order-data-demo-devoxx .

The setup described in this article was used as a demonstration during my presentation on “50 Shades of Data” during Devoxx Morocco (14-16 November, Casablanca, Morocco); the slidedeck for this session is available here:

https://www.slideshare.net/lucasjellema/50-shades-of-data-how-when-and-why-bigrelationalnosqlelasticeventcqrs-devoxx-maroc-november-2017-including-detailed-demo-screenshots

The Order microservice

The Order microservice is implemented in Node and deployed on Oracle Application Container cloud. It has service bindings to Database Cloud (for its private data store with Orders and associated data bound context) and Event Hub (for consuming events such as the CustomerModified event).

The Node runtime provided by Application Container Cloud includes an Oracle Database client and the Oracle DB driver for Node. This means that connecting to and interacting with an Oracle Database is done very easily.

The Orders microservice supports the REST call GET /order-api/orders which returns a JSON document with all orders:

(screenshot)

The implementation of this functionality is straightforward Node, Express and Oracle Database driver for Node:

CODE FOR RETRIEVING ORDERS

var express = require('express')
  , http = require('http');

var bodyParser = require('body-parser') // npm install body-parser
var ordersAPI = require( "./orders-api.js" );

var app = express();
var server = http.createServer(app);

var PORT = process.env.PORT || 3000;
var APP_VERSION = '0.0.4.06';

var allowCrossDomain = function(req, res, next) {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
    res.header('Access-Control-Allow-Headers', 'Content-Type');
    res.header('Access-Control-Allow-Credentials', true); 
    next();
}

server.listen(PORT, function () {
  console.log('Server running, version '+APP_VERSION+', Express is listening... at '+PORT+" for Orders Data API");
});

app.use(bodyParser.json()); // for parsing application/json
app.use(allowCrossDomain);

ordersAPI.registerListeners(app);

And the OrdersAPI:

var oracledb = require('oracledb');

var ordersAPI = module.exports;
var apiURL = "/order-api";

ordersAPI.registerListeners =
  function (app) {
    app.get(apiURL + '/orders', function (req, res) {
      handleGetOrders(req, res);
    });
  }//registerListeners

handleGetOrders = function (req, res) {
  getOrdersFromDBTable(req, res);
}

transformOrders = function (orders) {
  return orders.map(function (o) {
    var order = {};
    order.id = o[0];
    order.customer_id = o[1];
    order.customer_name = o[2];
    order.status = o[3];
    order.shipping_destination = o[4];
    return order;
  })
}


getOrdersFromDBTable = function (req, res) {
  handleDatabaseOperation(req, res, function (request, response, connection) {
    var selectStatement = "select id, customer_id, customer_name, status , shipping_destination from dvx_orders order by last_updated_timestamp";
    connection.execute(selectStatement, {}
      , function (err, result) {
        if (err) {
          console.error("Error executing query: " + err.message);
          response.writeHead(500, { 'Content-Type': 'application/json' });
          return response.end(JSON.stringify({ error: err.message }));
        } else {
          try {
            var orders = result.rows;
            orders = transformOrders(orders);
            response.writeHead(200, { 'Content-Type': 'application/json' });
            response.end(JSON.stringify(orders));
          } catch (e) {
            console.error("Exception in callback from execute " + e)
          }
        }
      });
  })
}//getOrdersFromDBTable


function handleDatabaseOperation(request, response, callback) {
  var connectString = process.env.DBAAS_DEFAULT_CONNECT_DESCRIPTOR;
  oracledb.getConnection(
    {
      user:  process.env.DBAAS_USER_NAME,
      password: process.env.DBAAS_USER_PASSWORD ,
      connectString: connectString
    },
    function (err, connection) {
      if (err) {
        console.log('Error in acquiring connection ...');
        console.log('Error message ' + err.message);
        return;
      }
      // do with the connection whatever was supposed to be done
      console.log('Connection acquired ; go execute - call callback ');
      callback(request, response, connection);
    });
}//handleDatabaseOperation


function doRelease(connection) {
  connection.release(
    function (err) {
      if (err) {
        console.error(err.message);
      }
    });
}

function doClose(connection, resultSet) {
  resultSet.close(
    function (err) {
      if (err) { console.error(err.message); }
      doRelease(connection);
    });
}

Creating new orders is supported through POST operation on the REST API exposed by the Order microservice:

(screenshot)


The implementation in the Node application is fairly straightforward – see below:

CODE FOR CREATING ORDERS – added to the OrdersAPI module:

var  eventBusPublisher = require("./EventPublisher.js");
var  topicName = "a516817-devoxx-topic";  // assumption: the same Event Hub topic used by the EventBusListener module

ordersAPI.registerListeners =
  function (app) {
    app.get(apiURL + '/orders', function (req, res) {
      handleGetOrders(req, res);
    });
    app.post(apiURL + '/*', function (req, res) {
      handlePost(req, res);
    });
  }//registerListeners

handlePost =
  function (req, res) {
    if (req.url.indexOf('/rest/') > -1) {
      ordersAPI.handleGet(req, res);
    } else {
      var orderId = uuidv4();
      var order = req.body;
      order.id = orderId;
      order.status = "PENDING";
      insertOrderIntoDatabase(order, req, res,
        function (request, response, order, rslt) {

          eventBusPublisher.publishEvent("NewOrderEvent", {
            "eventType": "NewOrder"
            ,"order": order
            , "module": "order.microservice"
            , "timestamp": Date.now()
          }, topicName);

          var result = {
            "description": `Order has been creatd with id=${order.id}`
            , "details": "Published event = not yet created in Database " + JSON.stringify(order)
          }
          response.writeHead(200, { 'Content-Type': 'application/json' });
          response.end(JSON.stringify(result));

        });//insertOrderIntoDatabase
    }
  }//ordersAPI.handlePost



// produce unique identifier
function uuidv4() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}	


The function handlePost() makes a call to the module EventBusPublisher, to publish a NewOrder event on the Event Hub. The code for this module is shown below:

var kafka = require('kafka-node');

var kafkaConnectDescriptor = "129.xx.yy.zz";

var Producer = kafka.Producer
KeyedMessage = kafka.KeyedMessage;


var APP_VERSION = "0.8.3"
var APP_NAME = "EventBusPublisher"

var producer;
var client;

function initializeKafkaProducer(attempt) {
  try {
    client = new kafka.Client(kafkaConnectDescriptor);
    producer = new Producer(client);
    producer.on('ready', function () {
      console.log("Producer is ready in " + APP_NAME);
    });
    producer.on('error', function (err) {
      console.log("failed to create the client or the producer " + JSON.stringify(err));
    })
  }
  catch (e) {
    console.log("Exception in initializeKafkaProducer" + e);
    console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
    console.log("Try again in 5 seconds");
    setTimeout(initializeKafkaProducer, 5000, ++attempt);
  }
}//initializeKafkaProducer
initializeKafkaProducer(1);

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event, topic) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: topic, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(data));
  });
}

Updating the bound Context

Suppose the details for the customer with identifier 25 are going to be updated through the Customer microservice. That would mean that the customer record in the MongoDB database is updated. However, that is not the only place where information about the customer is recorded. Because we want our microservices to be independent, and because we can only work properly with the Order microservice if we have some information about the elements associated with an order – such as the customer name and some details about each of the products – we have defined the data bound context of the Order microservice to include the customer name. As you can see in the next screenshot, we have an order record in the data store of the Order microservice for customer 25, and it contains the name of the customer.

(screenshot)

That means that when the Customer microservice records a change in the name of the customer, we should somehow update the bound context of the Order microservice. And that is what we will do using the CustomerModified event, produced by the Customer microservice and consumed by the Order microservice.


The REST call to update the name of customer 25 – from Joachim to William:

(screenshot)

The customer record in MongoDB is updated

(screenshot)

and subsequently a CustomerModified event is produced to the devoxx-topic on Event Hub:

(screenshot)

This event is consumed by the Order microservice and subsequently it triggers an update of the DVX_ORDERS table in the Oracle Database cloud instance. The code responsible for consuming the event and updating the database is shown below:

CODE FOR CONSUMING THE EVENT – first the EventBusListener module

var kafka = require('kafka-node');
var async = require('async');   // needed by the SIGINT shutdown handler below (assumed available as a dependency)

var client;

var APP_VERSION = "0.1.2"
var APP_NAME = "EventBusListener"

var eventListenerAPI = module.exports;

var kafka = require('kafka-node')
var Consumer = kafka.Consumer

var subscribers = [];

eventListenerAPI.subscribeToEvents = function (callback) {
  subscribers.push(callback);
}

var topicName = "a516817-devoxx-topic";
var KAFKA_ZK_SERVER_PORT = 2181;
var EVENT_HUB_PUBLIC_IP = '129.xx.yy.zz';

var consumerOptions = {
    host: EVENT_HUB_PUBLIC_IP + ':' + KAFKA_ZK_SERVER_PORT,
    groupId: 'consume-order-events-for-devoxx-app',
    sessionTimeout: 15000,
    protocol: ['roundrobin'],
    fromOffset: 'earliest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [topicName];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);



function onMessage(message) {
    subscribers.forEach((subscriber) => {
        subscriber(message.value);
    })
}

function onError(error) {
    console.error(error);
    console.error(error.stack);
}

process.once('SIGINT', function () {
    async.each([consumerGroup], function (consumer, callback) {
        consumer.close(true, callback);
    });
});

And the code in module OrdersAPI that imports the module, registers the event listener and handles the event:

var eventBusListener = require("./EventListener.js");

eventBusListener.subscribeToEvents(
  (message) => {
    console.log("Received event from event hub");
    try {
    var event = JSON.parse(message);
    if (event.eventType=="CustomerModified") {
      console.log(`Details for a customer have been modified and the bounded context for order should be updated accordingly ${event.customer.id}`);
      updateCustomerDetailsInOrders( event.customer.id, event.customer)
    }
    } catch (err) {
      console.log("Parsing event failed "+err);
    }
  }
);

function updateCustomerDetailsInOrders( customerId, customer) {
  console.log(`All orders for customer ${customerId} will be updated to the new customer name ${customer.name}`);
  console.log('updateCustomerDetailsInOrders');
  handleDatabaseOperation("req", "res", function (request, response, connection) {
    var bindvars = [customer.name, customerId];
    var updateStatement = `update dvx_orders set customer_name = :customerName where customer_id = :customerId` ;
    connection.execute(updateStatement, bindvars, function (err, result) {
      if (err) {
        console.error('error in updateCustomerDetailsInOrders ' + err.message);
        doRelease(connection);
        // no callback in this flow; the error has already been logged and the connection released
      }
      else {
        connection.commit(function (error) {
          if (error) console.log(`After commit - error = ${error}`);
          doRelease(connection);
          // there is no callback:  callback(request, response, order, { "summary": "Update Status succeeded", "details": result });
        });
      }//else
    }); //callback for handleDatabaseOperation
  });//handleDatabaseOperation 
}// updateCustomerDetailsInOrders}


When we check the current set of orders, we will find that the order(s) for customer 25 now have William as the customer_name, instead of Joachim or Jochem.

We can check directly in the Oracle Database table DVX_ORDERS to find the customer name updated for both orders for customer 25:

(screenshot)

The post Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part Two appeared first on AMIS Oracle and Java Blog.

Is it an index, a table or what?

Yann Neuhaus - Sun, 2017-11-19 10:54

A recent tweet from Kevin Closson pointed out that in PostgreSQL it can be confusing whether something is an index or a table. Why is it like that? Let's have a look and start by re-building the example from Kevin:

To get into the same situation Kevin described, we need something like this:

postgres=# create table base4(custid int, custname varchar(50));
CREATE TABLE
postgres=# create index base4_idx on base4(custid);
CREATE INDEX

Assuming that we forgot we created such an index and come back later and try to create it again, we get exactly the same behavior:

postgres=# create index base4_idx on base4(custid);
ERROR:  relation "base4_idx" already exists
postgres=# drop table base4_idx;
ERROR:  "base4_idx" is not a table
HINT:  Use DROP INDEX to remove an index.
postgres=# 

The keyword here is "relation". In PostgreSQL, a "relation" does not necessarily mean a table. What you need to know is that PostgreSQL stores everything that looks like a table/relation (e.g. has columns) in the pg_class catalog table. When we check our relations there:

postgres=# select relname from pg_class where relname in ('base4','base4_idx');
  relname  
-----------
 base4
 base4_idx
(2 rows)

… we can see that both the table and the index are somehow treated as a relation. The difference is here:

postgres=# \! cat a.sql
select a.relname 
     , b.typname
  from pg_class a
     , pg_type b 
 where a.relname in ('base4','base4_idx')
   and a.reltype = b.oid;
postgres=# \i a.sql
 relname | typname 
---------+---------
 base4   | base4
(1 row)

Indexes do not have an entry in pg_type, but tables do. What is even more interesting is that the "base4" table is a type itself. This means that for every table you create, a composite type is created as well that describes the structure of the table. You can even link back to pg_class:

postgres=# select typname,typrelid from pg_type where typname = 'base4';
 typname | typrelid 
---------+----------
 base4   |    32901
(1 row)

postgres=# select relname from pg_class where oid = 32901;
 relname 
---------
 base4
(1 row)

When you want to know what type a relation is, the easiest way is to ask like this:

postgres=# select relname,relkind from pg_class where relname in ('base4','base4_idx');
  relname  | relkind 
-----------+---------
 base4     | r
 base4_idx | i
(2 rows)

… where:

  • r = ordinary table
  • i = index
  • S = sequence
  • t = TOAST table
  • m = materialized view
  • c = composite type
  • f = foreign table
  • p = partitioned table

Of course, there are also catalog tables for tables and for indexes, so you can double check there as well. Knowing all this, the message is pretty clear:

postgres=# create index base4_idx on base4(custid);
ERROR:  relation "base4_idx" already exists
postgres=# drop relation base4_idx;
ERROR:  syntax error at or near "relation"
LINE 1: drop relation base4_idx;
             ^
postgres=# drop table base4_idx;
ERROR:  "base4_idx" is not a table
HINT:  Use DROP INDEX to remove an index.
postgres=# 

PostgreSQL is finally telling you that "base4_idx" is an index and not a table, which is fine. Of course, you could argue that PostgreSQL should work that out on its own, but it is also true that when you want to drop something, you should be sure about what you really want to drop.

 

The post Is it an index, a table or what? appeared first on Blog dbi services.
