Feed aggregator

Please help us understand how Nested Table cardinality estimation works in Oracle 12c

Tom Kyte - Wed, 2019-02-20 18:26
Hi Team, we request your help with one issue that we are facing. We pass a nested table as a variable to a SELECT statement. An example SQL is: SELECT CAST ( MULTISET ( SELECT DEPTNBR FROM DEPT ...
Categories: DBA Blogs

Presenting a function result that is a user-defined table type via SELECT <func> FROM DUAL in SQL Developer

Tom Kyte - Wed, 2019-02-20 18:26
Hi TOM, I've created a function that grants access to tables and views in a given schema to a given user. As a result, the function returns a user-defined table type that contains the prepared statement and the exception message, if one is thrown. 1. Creating types: ...
Categories: DBA Blogs

With clause in distributed transactions

Tom Kyte - Wed, 2019-02-20 18:26
Hi Tom! Given the restriction on GTTs ("Distributed transactions are not supported for temporary tables"), does that mean that inline views in a query, i.e. those defined in a WITH clause with the MATERIALIZE hint, will not work properly...
Categories: DBA Blogs

Partner Webcast – From Mobile & Chatbots to the new Digital Assistant Cloud

In the last few years we have seen massive growth in mobile usage of instant messaging and chat applications. With Oracle Intelligent Bots, an integrated feature of Oracle Mobile Cloud...

Categories: DBA Blogs

Intercepting ADF Table Column Show/Hide Event with Custom Change Manager Class

Andrejus Baranovski - Wed, 2019-02-20 14:12
Ever wondered how to intercept the ADF table column show/hide event from the ADF Panel Collection component? Yes, you could use ADF MDS functionality to store user preferences for table visible columns. But what if you want to implement it yourself without using MDS? Actually, this is possible through a custom persistence manager class. I will show you how.

If you don't know what I'm talking about, check the screenshot below: this popup comes out of the box with ADF Panel Collection and helps manage the table's visible columns. Pretty useful, especially for large tables.


Obviously, we would like to store the user's preference, so that the next time the user comes back to the form, he sees the previously stored setup for the table columns. One way to achieve this is to use the out-of-the-box ADF MDS functionality. But what if you don't want to use it? Still possible: we can catch all changes done through the Manage Columns popup in a custom Change Manager class. Extend SessionChangeManager and override only a single method, addComponentChange. This is the place where we intercept changes and could log them to the DB, for example (later, on form load, we could read the table setup and apply it before the fragment is rendered):
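
A minimal sketch of such a class (the class name, package, and console logging are illustrative assumptions; the post's actual code was shown only as a screenshot):

    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import org.apache.myfaces.trinidad.change.ComponentChange;
    import org.apache.myfaces.trinidad.change.SessionChangeManager;

    // Hypothetical custom Change Manager: intercepts component changes
    // (e.g. column visibility toggled in the Manage Columns popup) before
    // delegating to the default session-based persistence.
    public class CustomChangeManager extends SessionChangeManager {

        @Override
        public void addComponentChange(FacesContext context,
                                       UIComponent component,
                                       ComponentChange change) {
            // Intercept the change here; in a real application this is
            // where you could persist it to a DB table keyed by user/form.
            System.out.println("Change on " + component.getId() + ": " + change);
            // Keep the default session persistence behaviour.
            super.addComponentChange(context, component, change);
        }
    }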


Register custom Change Manager class in web.xml:
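
The registration is a context parameter along these lines (CHANGE_PERSISTENCE is the standard Trinidad/ADF parameter for plugging in a change manager; the class name matches the hypothetical sketch above):

    <context-param>
        <param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
        <param-value>com.example.view.CustomChangeManager</param-value>
    </context-param>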


The Manage Columns popup is out-of-the-box functionality offered by the ADF Panel Collection component.


The addComponentChange method will be invoked automatically; when changing table column visibility, you should see output similar to the logging in the sketch above.


Download sample application code from my GitHub repository.

Quarterly EBS Upgrade Recommendations: February 2019 Edition

Steven Chan - Wed, 2019-02-20 09:41

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for Oracle E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this MOS Note:

Upgrade Recommendations for February 2019

Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to December 30, 2030.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.8 (Oct 2018)
  • EBS 12.1: 12.1.3 RPC5 (Aug 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb 2016)
  • EBS 12.1: StartCD 13 (Aug 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 10 (Sep 2017); FND (Apr 2017); Web ADI RPC (Jan 2018); Report Manager Bundle 5 (Nov 2018); EBS 12.2.7 OAF Update 1 (Jan 2019); EBS 12.2.6 OAF Update 16 (Nov 2018); EBS 12.2.5 OAF Update 20 (Jul 2018); EBS 12.2.4 OAF Update 20 (Nov 2018); ETCC (Feb 2019); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv32 (Dec 2018); upgrade to JDK 7
  • EBS 12.1: Web ADI Bundle 5 (Jan 2018); Report Manager Bundle 5 (Jan 2018); FND (Apr 2017); OAF Bundle 5 (Jun 2016); JTT Update 4 (Oct 2016); Daylight Savings Time DSTv32 (Dec 2018); upgrade to JDK 7
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

Apply the latest security updates

  • EBS 12.2: Jan 2019 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager to Feb 2020; migrate from SSL or TLS 1.0 to TLS 1.2; sign JAR files
  • EBS 12.1: Jan 2019 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager to Feb 2020; migrate from SSL or TLS 1.0 to TLS 1.2; sign JAR files
  • EBS 12.0: Oct 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components

  • EBS 12.2: Switch to Java Web Start; use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; upgrade to IE 11; upgrade to Firefox ESR 60; upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016); upgrade Windows XP, Vista, and Win 10v1507 to later versions (e.g. Windows 10v1607)
  • EBS 12.1: Same as EBS 12.2
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

Upgrade to the latest database

  • EBS 12.2: Database 11.2.0.4 or 12.1.0.2
  • EBS 12.1: Database 11.2.0.4 or 12.1.0.2
  • EBS 12.0: Database 11.2.0.4
  • EBS 11.5.10: Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11gR2 or Oracle Access Manager 12c; upgrade to Oracle Internet Directory 11gR1, Oracle Internet Directory 12c, or Oracle Unified Directory 11gR2
  • EBS 12.1: Migrate from Oracle SSO to Oracle Access Manager 11gR2 or Oracle Access Manager 12c; upgrade to Oracle Internet Directory 11gR1 or Oracle Internet Directory 12c
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

If you're using Oracle Discoverer

  • EBS 12.2: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA). Discoverer 11.1.1.7 has been in Sustaining Support since June 2017.
  • EBS 12.1: Same as EBS 12.2
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A
Categories: APPS Blogs

Cloning is one of the most common tasks (apart from Patching & Troubleshooting).

Online Apps DBA - Wed, 2019-02-20 08:00

There are 3 main steps in cloning an Oracle EBS environment. Visit https://k21academy.com/appsdba25 and learn with our blog, Oracle EBS (R12): Database Cloning from RMAN Backup, to practice the 3-phase cloning process: 1. Prepare the source database system 2. Copy the source database to the target system 3. Configure the target database system […]

The post Cloning is one of the most common tasks (apart from Patching & Troubleshooting) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

What's new in 19c - Part II (Automatic Storage Management - ASM)

Syed Jaffar - Wed, 2019-02-20 07:32
There are not too many features to talk about in 19c ASM. Below are my hand-picked 19c ASM features for this blog post.


Automatic block corruption recovery 

With Oracle 19c, the CONTENT.CHECK disk group attribute in Exadata and cloud environments is set to true by default. During a data copy operation, if the Oracle ASM relocation process detects block corruption, it performs automatic block corruption recovery by replacing the corrupted blocks with an uncorrupted mirror copy, if one is available.


Parity Protected Files Support

The level of data mirroring is controlled through the ASM disk group REDUNDANCY attribute. When two- or three-way ASM mirroring is configured for a disk group that stores write-once files, like archived logs and backup sets, a great deal of space is wasted. To reduce the storage overhead for such file types, ASM now introduces a PARITY value for the REDUNDANCY file type property. The PARITY value specifies single parity for redundancy. Set the REDUNDANCY setting to PARITY to enable this feature.

The redundancy of a file can be modified after its creation. When the property is changed from HIGH, NORMAL, or UNPROTECTED to PARITY, only files created after the change are affected; existing files are not.

A few enhancements have also been made in Oracle ACFS, Oracle ADVM, and ACFS replication. Refer to the 19c ASM new features documentation for more details.


** Leaf nodes are desupported as part of the Oracle Flex Cluster architecture from 19c.


[Blog] Oracle WebLogic Administration: Supported Maximum Availability Architectures (MAA)

Online Apps DBA - Wed, 2019-02-20 07:22

Supported Maximum Availability Architectures (MAA) are multi-datacenter solutions that provide continuous availability and protect an Oracle WebLogic Server system against downtime across multiple data centers… Want to know more about MAA? Visit: https://k21academy.com/weblogic24 and consider our new blog covering: ✔ Active-Active Application Tier with Active-Passive Database Tier and its key aspects ✔ Active-Passive Application […]

The post [Blog] Oracle WebLogic Administration: Supported Maximum Availability Architectures (MAA) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Business-Critical Cloud Adoption Growing yet Security Gaps Persist, Report Says

Oracle Press Releases - Wed, 2019-02-20 07:00
Press Release
Business-Critical Cloud Adoption Growing yet Security Gaps Persist, Report Says

Oracle and KPMG study finds that confusion over cloud security responsibilities, lack of visibility and shadow IT complicate corporate security

REDWOOD SHORES, Calif. and NEW YORK—Feb 20, 2019

Companies continue to move business critical workloads and their most sensitive data to the cloud, yet security challenges remain, according to the second annual Oracle and KPMG Cloud Threat Report 2019 released today. The report found that 72 percent of respondents feel the public cloud is more secure than what they can deliver in their own data center and are moving data to the cloud, but visibility gaps remain that can make it hard for businesses to understand where and how their critical data is handled in the cloud.

The survey also found a projected 3.5 times increase in the number of organizations with more than half of their data in the cloud from 2018 to 2020, and 71 percent of organizations indicated that a majority of this cloud data is sensitive, up from 50 percent last year. However, the vast majority (92 percent) noted they are concerned about employees following cloud policies designed to protect this data.

The report found that the mission-critical nature of cloud services has made cloud security a strategic imperative. Cloud services are no longer nice-to-have tertiary elements of IT—they serve core functions essential to all aspects of business operations. The 2019 report identified several key areas where the use of cloud service can present security challenges for many organizations.

  • Confusion about the shared responsibility security model has resulted in cybersecurity incidents. Eighty-two percent of cloud users have experienced security events due to confusion over the shared responsibility model. While 91 percent have formal methodologies for cloud usage, 71 percent are confident these policies are being violated by employees, leading to instances of malware and data compromise.
  • CISOs are too often on the cloud security sidelines. Ninety percent of CISOs surveyed are confused about their role in securing a Software as a Service (SaaS) versus the cloud service provider environment.
  • Visibility remains the top security challenge. The top security challenge identified in the survey is detecting and reacting to security incidents in the cloud, with 38 percent of respondents naming it as their top challenge today. Thirty percent cited the inability of existing network security controls to provide visibility into cloud-resident server workloads as a security challenge.
  • Rogue cloud application use and lack of security controls put data at risk. Ninety-three percent of respondents indicated they are still dealing with “shadow IT”—in which employees use unsanctioned personal devices and storage or file share software for corporate data. Half of organizations cited lack of security controls and misconfigurations as common reasons for fraud and data exposures. Twenty-six percent of organizations cited unauthorized use of cloud services as their biggest cybersecurity challenge today.

“The world’s most important workloads are moving to the cloud, heightening the need for a coordinated, integrated and layered security strategy,” said Kyle York, vice president of product strategy, Oracle Cloud Infrastructure. “Starting with a cloud platform built for security and applying AI to safeguard data while also removing the burden of administrative tasks and patching removes complexity and helps organizations safeguard their most critical asset—their data.”

“As organizations continue to transition their cyber security thinking from strictly risk management to more of a focus on business innovation and growth, it is important that enterprise leaders align their business and cyber security strategies,” said Tony Buffomante, U.S. Leader of KPMG LLP’s Cyber Security Services. “With cloud services becoming an integral part of business operations, there is an intensified need to improve the security of the cloud and to integrate cloud security into the organization’s broader strategic risk mitigation plans.”

Additional Key Findings
  • Automation may improve chronic patching problems: Fifty-one percent surveyed report patching has delayed IT projects and 89 percent of organizations want to employ an automatic patching strategy.
  • Machine learning may help decrease threats: Fifty-three percent are using machine learning to decrease overall cyber security threats, while 48 percent are using a Multi-factor Authentication (MFA) solution to automatically trigger a second factor of authentication upon detecting anomalous user behavior.
  • Supply chain risk: Business-critical services must be contained as supply chain compromise has led to the introduction of malware in 49 percent of cases, followed by unauthorized access of data in 46 percent of cases.
  • Security events continue to increase while shared responsibility confusion expands: Only 1 in 10 organizations can analyze more than 75 percent of their security event data and 82 percent of cloud users have experienced security events due to confusion over cloud shared responsibility models.
  • Cloud adoption has expanded the core-to-edge threat model: An increasingly mobile workforce accessing both on premise and cloud-delivered applications and data dramatically complicates how cybersecurity professionals must think about their risk and exposure. In 2018, the number one area of investment was training, but this year, training slipped to number two and was replaced by edge-based security controls (e.g., WAF, CASB, Botnet/DDoS Mitigation controls).

To find out more about the Oracle and KPMG Cloud Threat Report 2019, visit Oracle at the RSA Conference, March 4-8 in San Francisco. (Booth #1559 – Moscone South).

About the Report

The Oracle and KPMG Cloud Threat Report 2019 examines emerging cyber security challenges and risks that businesses are facing as they embrace cloud services at an accelerating pace. The report provides leaders around the globe and across industries with important insights and recommendations for how they can help ensure that cyber security is a critical business enabler. The data in the report is based on a survey of 450 cyber security and IT professionals from private and public-sector organizations in North America (United States and Canada), Western Europe (United Kingdom), and Asia (Australia, Singapore).

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Michael Rudnick
KPMG LLP
+1.201.307.7398
mrudnick@kpmg.com
Christine Curtin
KPMG LLP
+1.201.307.8663
ccurtin@kpmg.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About KPMG LLP

KPMG LLP, the audit, tax and advisory firm (www.kpmg.com/us), is the independent U.S. member firm of KPMG International Cooperative ("KPMG International"). KPMG International’s independent member firms have 197,000 professionals working in 154 countries. KPMG International has been named a Leader in the Forrester Research Inc. report, The Forrester Wave™ Information Security Consulting Services Q3 2017. Learn more at www.kpmg.com/us. Some or all of the services described herein may not be permissible for KPMG audit clients and their affiliates.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Exposes “DrainerBot” Mobile Ad Fraud Operation

Oracle Press Releases - Wed, 2019-02-20 06:00
Press Release
Oracle Exposes “DrainerBot” Mobile Ad Fraud Operation

Millions of Consumer Devices May Be Infected; Apps Can Drain 10GB Data/Month; Joint Work of Teams from Oracle’s Moat and Dyn Acquisitions Led to Discovery

Redwood City, CA—Feb 20, 2019

Oracle today announced the discovery of and mitigation steps for “DrainerBot,” a major mobile ad fraud operation distributed through millions of downloads of infected consumer apps. Infected apps can consume more than 10GB of data per month downloading hidden and unseen video ads, potentially costing each device owner a hundred dollars per year or more in data overage charges.

DrainerBot was uncovered through the joint efforts of Oracle technology teams from its Moat and Dyn acquisitions. Now part of the Oracle Data Cloud, Moat offers viewability, invalid traffic (IVT), and brand safety solutions, while Dyn enables DNS and security capabilities as part of Oracle Cloud Infrastructure.

The DrainerBot code appears to have been distributed via an infected SDK integrated into hundreds of popular consumer Android apps and games[1] like "Perfect365," "VertexClub," “Draw Clash of Clans,” “Touch ‘n’ Beat – Cinema,” and “Solitaire: 4 Seasons (Full).” Apps with active DrainerBot infections appear to have been downloaded by consumers more than 10 million times, according to public download counts.

Information About DrainerBot
  • DrainerBot is an app-based fraud operation that uses infected code on Android devices to deliver fraudulent, invisible video ads to the device.
  • The infected app reports back to the ad network that each video advertisement has appeared on a legitimate publisher site, but the sites are spoofed, not real.
  • The fraudulent video ads do not appear onscreen in the apps (which generally lack web browsers or video players) and are never seen by users.
  • Infected apps consume significant bandwidth and battery, with tests and public reports indicating an app can consume more than 10 GB/month of data or quickly drain a charged battery, even if the infected app is not in use or in sleep mode.
  • The SDK being used in the affected apps appears to have been distributed by Tapcore, a company in the Netherlands.
  • Tapcore claims to help software developers monetize stolen or pirated installs of their apps by delivering ads through unauthorized installs, although fraudulent ad activity also takes place after valid app installs.
  • On its website, Tapcore claims to be serving more than 150 million ad requests daily and says its SDK has been incorporated into more than 3,000 apps.
Supporting Quotes

“Mobile app fraud is a fast-growing threat that touches every stakeholder in the supply chain, from advertisers and their agencies to app developers, ad networks, publishers, and, increasingly, consumers themselves,” said Mike Zaneis, CEO of the Trustworthy Accountability Group (TAG). “These types of fraud operations cross all four of TAG’s programmatic pillars, including fraud, piracy, malware, and transparency, and preventing such operations will require unprecedented cross-industry collaboration. As the ad industry’s leading information-sharing body, we are delighted to work with Oracle to educate and inform TAG’s membership about this emerging threat.”

“DrainerBot is one of the first major ad fraud operations to cause clear and direct financial harm to consumers,” said Eric Roza, SVP and GM of Oracle Data Cloud. “DrainerBot-infected apps can cost users hundreds of dollars in unnecessary data charges while wasting their batteries and slowing their devices. We look forward to working with companies across the digital advertising ecosystem to identify, expose, and prevent this and other emerging types of ad fraud.”

“Mobile devices are a prime target with a number of potential infection vectors, which are growing increasingly complicated, interconnected, and global in nature,” said Kyle York, VP of product strategy, Oracle Cloud Infrastructure. “The discovery of the DrainerBot operation highlights the benefit of taking a multi-pronged approach to identifying digital ad fraud by combining multiple cloud technologies. Bottom line is both individuals and organizations need to pay close attention to what applications are running on their devices and who wrote them."

Resources

Detailed information and mitigation resources for DrainerBot can be found at info.moat.com/drainerbot, including:

  • Information and advice for consumers on identifying potentially-infected apps on their devices, as well as general device security tips;
  • Access to a list of app IDs that have shown DrainerBot activity; (Note: Not all apps listed may currently be infected)
  • Access to the DrainerBot SDK, as well as related documentation;
  • Access to sample infected APKs for use by antivirus and security providers to identify and mitigate the DrainerBot threat.
 

Oracle Data Cloud’s Moat Analytics helps top advertisers and publishers measure and drive attention across trillions of ad impressions and content views, so they can avoid invalid traffic (IVT), improve viewability, and better protect their media spend. Among those solutions, Pre-Bid by Moat helps marketers identify and utilize ad inventory that meets their high standards for IVT, third-party viewability, and brand safety.

Oracle Cloud Infrastructure edge services (formerly Dyn) offer managed Web Application Security, DNS, and Internet Intelligence services that help companies build and operate a secure, intelligent cloud edge, protecting them from a complex and evolving cyberthreat landscape.

[1] All of the apps identified have recently generated fraudulent DrainerBot impressions identified by Moat Analytics.

Contact Info
Shasta Smith
Oracle
+1.503.560.0756
shasta.smith@oracle.com
About Oracle Data Cloud

Oracle Data Cloud helps marketers use data to capture consumer attention and drive results. Used by 199 of the 200 largest advertisers, our Audience, Context and Measurement solutions extend across the top media platforms and a global footprint of more than 100 countries. We give marketers the data and tools needed for every stage of the marketing journey, from audience planning to pre-bid brand safety, contextual relevance, viewability confirmation, fraud protection, and ROI measurement. Oracle Data Cloud combines the leading technologies and talent from Oracle’s acquisitions of AddThis, BlueKai, Crosswise, Datalogix, Grapeshot, and Moat.

About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is an enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure can dramatically increase operational efficiency and lower total cost of ownership. For more information, visit https://cloud.oracle.com/iaas.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


DOAG day “database migration” in Mannheim on 19.02.2019

Yann Neuhaus - Wed, 2019-02-20 02:12

Yesterday I attended the DOAG conference in Mannheim about migrating Oracle databases.

The first presentation was about the challenges of migrating to multitenant databases. With Oracle 20 it will probably no longer be possible to create a non-CDB database or to upgrade from a non-CDB database, so in the next few years all databases will have to be migrated to the multitenant architecture. Problems with licensing and different character sets were covered, and some migration methods to PDBs were shown.
The second lecture covered causes of database migration failures that stem not from the database technology itself but from surrounding technologies and human error.
The third talk presented a new migration method using RMAN incremental backups in combination with transportable tablespaces, which is very interesting for the migration of big databases.
A migration method using the DUPLICATE command with the NOOPEN option was also presented.

Last but not least, an entertaining talk about migration projects was held; the lessons learned were presented as haiku (a Japanese poem form).

The article DOAG day “database migration” in Mannheim on 19.02.2019 appeared first on the dbi services blog.

Oracle Identity Cloud Service enabling legal and compliance requirements

Context of identity and cloud access controls: An intelligent Security Operations Center (SOC) needs threat intelligence and contextual awareness but, first of all, it needs to have the context of...

Categories: DBA Blogs

SQL Performance Tuning

Tom Kyte - Wed, 2019-02-20 00:06
Hi Team, I am kind of new to SQL performance tuning, so I need you to suggest some helpful handbooks (with test cases) that will suit me. I have googled for it but don't know which one I should prefer as a newbie. So please share your...
Categories: DBA Blogs

What does argument "shares" stand for in create_plan_directive()?

Tom Kyte - Wed, 2019-02-20 00:06
Hi, In the procedures create_cdb_plan_directive() and create_cdb_profile_directive() of the dbms_resource_manager package, it is clear what role the parameter called "shares" plays. Now, I fail to see what can be done with this parameter in create_plan_dir...
Categories: DBA Blogs

Dynamic database creation

Tom Kyte - Wed, 2019-02-20 00:06
Hello. I'm asking if there is any way to create an Oracle PDB database dynamically, I mean after the completion of some web registration; for example, if I want to provide a complete private Oracle database for my customer after he/she registers to ...
Categories: DBA Blogs

How to print function name in sqlplus along with creation time.

Tom Kyte - Wed, 2019-02-20 00:06
Hi, I need advice on the query below. Whenever we create any procedure/function from the SQL*Plus command prompt, a "Function created." message is shown in SQL*Plus. E.g. SQL> @C:\abc.fnc; Function created. 1: Can...
Categories: DBA Blogs

Podcast: JET-Propelled JavaScript

OTN TechBlog - Tue, 2019-02-19 23:00

JavaScript has been around since 1995. But a lot has changed in nearly a quarter-century. No longer limited to the browser, JavaScript has become a full-fledged programming language, finding increasing use in enterprise application development. In this program a panel of experts explores the evolution of JavaScript, discusses how it is used in modern development projects, and then takes a close look at the Oracle JavaScript Extension Toolkit, otherwise known as JET. Take a listen!

This program is Oracle Groundbreakers podcast #363. It was recorded on Thursday January 17, 2019.

The Panelists (listed alphabetically)

Joao Tiago Abreu
Software Engineer and Oracle JET Specialist, Crossjoin Solutions, Portugal

Andrejus Baranovskis
Oracle Groundbreaker Ambassador
Oracle ACE Director
CEO & Oracle Expert, Red Samurai Consulting

Luc Bors
Oracle Groundbreaker Ambassador
Oracle ACE Director
Partner & Technical Director, eProseed, Netherlands

John Brock
Senior Manager, Product Management, Development Tools, Oracle, Seattle, WA

Daniel Curtis
Oracle Front End Developer, Griffiths Waite, UK
Author of Practical Oracle JET: Developing Enterprise Applications in JavaScript (June 2019, Apress)

Coming Soon
  • DevOps, Streaming, Liquid Software, and Observability. Featuring panelists Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov
  • Polyglot Programming and GraalVM. Featuring panelists Rodrigo Botafogo, Roberto Cortez, Dr. Chris Seaton, Oleg Selajev.
  • Serverless and the Fn Project. A discussion of where Serverless fits in the IT landscape. Panelists TBD.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

PeopleSoft Value Continues Through at Least 2030 with Oracle Premier Support

Chris Warticki - Tue, 2019-02-19 18:33
Ongoing PeopleSoft Investment with Oracle Applications Unlimited

In June 2018, Oracle announced that we will provide Oracle Premier Support for the PeopleSoft 9.2 continuous innovation release through at least 2030. Oracle listened to our customers’ feedback and has implemented a number of new features in the most current version of PeopleSoft.

Get the Benefits of Oracle Premier Support Through at Least 2030

For Oracle Premier Support customers, this provides you with ongoing benefits, including:

  • Software updates
  • Security alerts and updates
  • Critical patch updates
  • Tax, legal, and regulatory updates
  • Access to the My Oracle Support knowledge base
  • 24x7 assistance with service requests, and much more

Oracle is committed to ongoing development of new PeopleSoft features and enhancements through at least 2030, giving PeopleSoft customers the ability to leverage their Oracle PeopleSoft investment in software and customizations for years to come.  Get more with PeopleSoft Support.

Listen to Marc Weintraub, Oracle Vice President of Applications, discuss Oracle's commitment to customers, innovation, and products.



Committed to Customers | Committed to Innovation | Committed to Products

The Importance of Feature Engineering and Selection

Rittman Mead Consulting - Tue, 2019-02-19 10:27

In machine learning, your model is only ever as good as the data you train it on. As such, a significant proportion of your effort should be focused on creating a dataset that is optimised to maximise the information density of your data. Feature engineering and selection are the methods used to achieve this goal.

In this context, the definition of a feature will be a column or attribute of the data.

Feature engineering is a broad term that covers a number of manipulations that may be carried out on your dataset. There are therefore many processes that could be considered part of feature engineering. In this post I introduce some of the high-level activities carried out as part of feature engineering, as well as some of the most common methods of feature selection, but this is by no means an exhaustive list.

Engineering Features

Feature engineering is the process by which knowledge of data is used to construct explanatory variables, features, that can be used to train a predictive model. Engineering and selecting the correct features for a model will not only significantly improve its predictive power, but will also offer the flexibility to use less complex models that are faster to run and more easily understood.

At the start of every machine learning project the raw data will inevitably be messy and unsuitable for training a model. The first step is always data exploration and cleaning, which involves changing data types and removing or imputing missing values. With an understanding of the data gained through exploration, it can be prepared in such a way that it is useful for the model. This may include removing outliers or specific features you don’t want the model to learn, as well as creating features from the data that better represent the underlying problem, facilitating the machine learning process and resulting in improved model accuracy.

Unprocessed data will likely contain features with the following problems:

  • Missing values: imputed during data cleaning
  • Values not on the same scale/dimension: normalisation/standardisation
  • Information redundancy: filtered out in feature selection

Decomposing or Splitting Features

One form of feature engineering is to decompose raw attributes into features from which a model can more easily interpret patterns. For example, decomposing date or timestamp variables into their constituent parts may allow models to discover and exploit relationships. Common time frames in which trends occur include: absolute time, day of the year, day of the week, month, hour of the day, minute of the hour, year, etc. Breaking dates up into new features like this will help a model better represent structure or seasonality in the data. For example, if you were investigating ice cream sales and created a “Season of Sale” feature, the model would recognise a peak in the summer season. However, an “Hour of Sale” feature would reveal an entirely different trend, possibly peaking in the middle of each day.
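
A minimal sketch of this decomposition (using pandas; the frame and column names are invented for illustration):

    import pandas as pd

    sales = pd.DataFrame({
        "sold_at": pd.to_datetime(["2018-07-14 13:05", "2018-12-24 17:40"]),
        "units": [120, 15],
    })

    # Decompose the raw timestamp into constituent features.
    sales["hour_of_sale"] = sales["sold_at"].dt.hour
    sales["day_of_week"] = sales["sold_at"].dt.dayofweek
    sales["month"] = sales["sold_at"].dt.month
    sales["season_of_sale"] = sales["sold_at"].dt.quarter  # rough proxy for season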

Your data can also be binned into buckets and converted into factors (numerical categories) or flattened into a column per category with flags, as sketched below. Which of these will work best for your data depends on a number of factors, including how many categorical values you have and their frequency. (A similar process can be utilised for natural language processing or textual prediction; see bag of words.)
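
A sketch of both approaches with pandas (the bin edges and labels are arbitrary choices for illustration):

    import pandas as pd

    df = pd.DataFrame({"temperature": [3.0, 11.5, 18.2, 27.9]})

    # Bin the continuous feature into buckets (a pandas categorical).
    df["temp_band"] = pd.cut(df["temperature"],
                             bins=[-10, 10, 20, 40],
                             labels=["cold", "mild", "hot"])

    # Either convert the buckets to integer factor codes...
    df["temp_code"] = df["temp_band"].cat.codes

    # ...or flatten them into one flag column per category.
    df = pd.get_dummies(df, columns=["temp_band"])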

Data Enrichment

Data enrichment is the process of creating new features by introducing data from external sources. Externally collated data is invaluable to prediction success; there is a plethora of publicly accessible datasets that will, in most situations, yield impactful features.

Third-party datasets could include attributes that are challenging or costly to collect directly, or that are more accurately available online.

It is important when enriching a dataset to consider the relevance of sources, as irrelevant features will unnecessarily complicate the model, adding to the noise and increasing the chance of overfitting. For example, when working with dates it is generally insightful to introduce data on national holidays. In the case of our ice cream sales example, you may want to include national holiday, temperature, and weather features, as these would be expected to influence sales. However, adding temperature or weather data from another country or region will not be relevant; in the best case it will have no relation to the data, and in the worst case it will have a spurious correlation and mislead the model during training.
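
For instance, a holiday flag could be joined in like this (a sketch; the holiday table here is hand-made rather than a real external source):

    import pandas as pd

    sales = pd.DataFrame({"date": pd.to_datetime(["2018-07-04", "2018-07-05"]),
                          "units": [250, 90]})
    holidays = pd.DataFrame({"date": pd.to_datetime(["2018-07-04"]),
                             "holiday": ["Independence Day"]})

    # Left-join the external data and derive a simple flag feature.
    enriched = sales.merge(holidays, on="date", how="left")
    enriched["is_holiday"] = enriched["holiday"].notna()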

Feature Transformations

Feature transformations can include aggregating or combining attributes to create new features. Useful and relevant features will depend on the problem at hand but averages, sums and ratios over different groupings can better expose trends to a model.

Multiplying or aggregating features to create new combined features can help with this. Categorical features can be combined into a single feature containing all combinations of the two categories. This can easily be overdone, and it is necessary to be careful not to overfit due to misleading combined features.

It is possible to identify higher-order interactions via a simple decision tree: the initial branches can be used to identify which features to combine.
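
A sketch of this idea with scikit-learn (the toy data is invented; in practice you would fit on your real features):

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    X = pd.DataFrame({"temperature": [15, 22, 30, 8, 27],
                      "hour": [9, 13, 15, 11, 14],
                      "price": [2.0, 2.5, 2.5, 2.0, 3.0]})
    y = [20, 80, 150, 5, 120]  # e.g. ice cream sales

    # Fit a shallow tree; the features used in the first splits are
    # natural candidates for combined (interaction) features.
    tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
    top_features = [X.columns[i] for i in tree.tree_.feature if i >= 0]
    print(top_features)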

A general requirement for some machine learning algorithms is standardisation/normalisation. This rescales the features so that they are centred around 0 with a standard deviation of 1. The benefit of standardisation is that variables with larger magnitudes are not over-emphasised, which matters when comparing measurements with different units.
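
With scikit-learn this is a one-liner (the two columns stand in for measurements on very different scales):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[20.0, 101325.0],   # e.g. temperature (C), pressure (Pa)
                  [25.0, 100800.0],
                  [31.0, 101900.0]])

    # Rescale each column to zero mean and unit standard deviation so the
    # large-magnitude pressure column does not dominate.
    X_std = StandardScaler().fit_transform(X)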

Automated Feature Engineering

Engineering features manually as described above can be very time-consuming and requires a good understanding of the underlying data, the structures in the data, the problem you are trying to solve, and how best to represent the data to have the desired effect. Manual feature engineering is problem-specific and cannot be applied to another dataset or problem.

There has been some progress made in the automation of feature engineering. FeatureTools, for example, is a Python framework for transforming datasets into feature matrices. In my opinion there are positives and negatives to such an approach. Feature engineering is time-consuming, and any automation of this process would be beneficial. However, creating many useless features will lead to overfitting, and automatically created features can result in a loss of interpretability and understanding.

Feature Selection

Of the features now available in your data set, some will be more influential than others on the model accuracy. Feature selection aims to reduce the dimensionality of the problem by removing redundant or irrelevant features. A feature may be redundant if it is highly correlated with another feature because both are based on the same underlying information. These types of features can be removed from the data set without any loss of information. In our ice cream example, sales may be correlated with both temperature and suncream usage, but the relationship with suncream is a result of it also being correlated with the confounding variable, temperature.

Reducing the number of features through feature selection means training the model will require less memory and computational power, leading to shorter training times, and will also help to reduce the chance of overfitting. Simplification of the training data will also make the model easier to interpret, which can be important when justifying real-world decision making as a result of model outputs.

Feature Selection Methods

Feature selection algorithms rank or score features using a number of methods so that the least significant features can be removed. In general, features are chosen from two perspectives: feature divergence, and correlations between features and the dependent variable (the value being predicted). Some models have built-in feature selection that aims to reduce or discount features as part of the model-building process, for example LASSO regression.

Methods that can be used to reduce features include:

Correlation

A feature that is strongly correlated with the dependent variable may be important to the model. The correlation coefficients produced are univariate and therefore only correspond to each individual feature’s relationship to the dependent variable, as opposed to combinations of features.
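
A sketch of a univariate correlation filter in pandas (the data and the 0.5 cut-off are illustrative):

    import pandas as pd

    df = pd.DataFrame({"temperature": [15, 22, 30, 8],
                       "suncream": [1, 3, 5, 0],
                       "price": [2.0, 2.5, 2.5, 2.0],
                       "sales": [20, 80, 150, 5]})

    # Correlation of each feature with the dependent variable "sales".
    corr = df.corr()["sales"].drop("sales")
    selected = corr[corr.abs() > 0.5].index.tolist()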

Near Zero Variance

Depending on the problem you are dealing with, you may want to remove constant and almost-constant features across samples. There are functions that will remove these automatically, such as nzv() in R. They can be tuned from removing only features that have a single unique value across all samples, to those that have only a few unique values across the set, or those with a large ratio of the most common value to the second most common.
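
scikit-learn offers a rough Python analogue, VarianceThreshold, though it filters on raw variance rather than the frequency ratios nzv() uses:

    import numpy as np
    from sklearn.feature_selection import VarianceThreshold

    X = np.array([[0, 1, 20],
                  [0, 0, 25],
                  [0, 1, 31]])  # the first column is constant

    # threshold=0.0 removes only zero-variance (constant) features;
    # raise it to also drop near-constant ones.
    X_reduced = VarianceThreshold(threshold=0.0).fit_transform(X)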

Principal component analysis (PCA)

PCA is an unsupervised dimensionality reduction method; its purpose is to find the directions (the so-called principal components) in feature space that maximise the variance in the dataset. You are essentially finding the axes of feature space that fit the shape of the data, where there is the greatest variation and therefore the most information. A very simple example would be a 3D feature space of x, y, z. If you looked at the data through the x, y plane and all of your points were tightly clustered together, this would not be a very good plane through which to view the structure of your data. However, if you viewed it in the x, z plane and your data was spread out, this would be much more useful, as you would be able to observe a trend in the data. Principal components are the dimensions along which your data points are most spread out; but as opposed to the example above, feature space will have n dimensions, not 3, and a principal component can be expressed as a single feature or as a combination of many existing features.
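
A sketch with scikit-learn, reducing a 3D feature space to its two highest-variance directions (random data for illustration):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # an x, y, z feature space

    # Keep the two directions of greatest variance.
    pca = PCA(n_components=2)
    X_pca = pca.fit_transform(X)
    print(pca.explained_variance_ratio_)   # variance captured per component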

Linear discriminant analysis (LDA)

LDA is a supervised dimensionality reduction method that uses known class groupings. It achieves a similar goal to PCA, but instead of finding the axes that maximise the variance, it finds the axes that maximise the separation between multiple classes. These are called linear discriminants.

For multi-class classification, it would be assumed that LDA would achieve better results than PCA, but this is not always the case.
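
A comparable sketch with scikit-learn (using the bundled iris data as a stand-in for a labelled dataset):

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)      # 4 features, 3 known classes

    # Supervised reduction: at most (n_classes - 1) = 2 linear discriminants.
    lda = LinearDiscriminantAnalysis(n_components=2)
    X_lda = lda.fit_transform(X, y)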

Summary

The features in your data will influence the results that your predictive model can achieve.

Having and engineering good features will allow you to most accurately represent the underlying structure of the data and therefore create the best model.

Features can be engineered by decomposing or splitting features, from external data sources, or aggregating or combining features to create new features.

Feature selection reduces the computation time and resources needed to create models as well as preventing overfitting which would degrade the performance of the model. The flexibility of good features allows less complex models, which would be faster to run and easier to understand, to produce comparable results to the complex ones.

Complex predictive modelling algorithms perform feature importance and selection internally while constructing models. These models can also report on the variable importance determined during the model preparation process. However, this is computationally intensive and by first removing the most obviously unwanted features, a great deal of unnecessary processing can be avoided.

Categories: BI & Warehousing
