Feed aggregator

5 main building blocks of the new Visual Builder Cloud Service

Amis Blog - 1 hour 26 min ago

In May 2018 Oracle introduced the new version of Visual Builder Cloud Service. This version is not just aimed at the Citizen Developer; in the end, an experienced JavaScript developer can do nice things with it.

In this blog I will have a look at 5 of the 6 main building blocks you build a VBCS application with:

  1. REST service connections
  2. Flows and Pages
  3. Variables
  4. Action Chains
  5. UI Components

Putting all of this in one blog is a lot, so this is a lengthy one. The final result can be found here.

With VBCS you can create a lot using drag and drop! But in the end you have to be aware that it is all JavaScript, HTML5 and CSS you are creating. And it is all built on JET!

Before we can start with these concepts, I create a New Application.

REST Service Connection

I start by creating some endpoints for a publicly available REST API, https://docs.openaq.org/, an API with air quality measurements.

This API contains several endpoints; the graph I am going to create uses the measurements API. As I am from the Netherlands, I use data from the Netherlands in this blog.

First I create the Service Connection based on the Cities endpoint.

1. Create Service Connection by Service Endpoint

2. Specify “URL”: https://api.openaq.org/v1/cities

3. Service Base URL is populated with https://api.openaq.org/v1/; give the Service a proper name

4. For Request > URL Parameters, add the Static Parameter “country” with value “NL”

5. Test the endpoint and Copy to Response Body

6. Create the endpoint

Create flow and first Page

I will create a Web Application that contains the main Flow and main-start Page.

On the main-start Page I drop a Table UI Component.

I have marked in yellow the Table UI component, the Collapse/Expand Property Inspector, and the Quick start button.

For this Table we Add Data, which is available on the Quick start Tab. Select the cities Service Endpoint.

As the country is hardcoded, I won't display it in the table. I reorder the columns with drag and drop and select City as the Primary Key.

In the layout editor the Data from the Service endpoint is displayed. In the code you will see that an Oracle JET oj-table component is used.

You can also run the Application:

Next we add a Page for the Line-graph and drag an Oracle JET Line Chart on it.

Variables and Types

The responses from REST endpoints are stored in Variables; UI components and Action Chains use Variables.

When you have a look at the Line Chart code, it contains two Arrays: Groups and Series. The Groups Array is an Array of Strings ( ['Group A','Group B'] ), the Series Array is an Array of Objects ( [{name:'Series 1',items:[41,34]},{name:'Series 2',items:[55,30]},…] ). The Series Object consists of a String (name) and a numeric Array (items).

For the line Graph I create two Types in the main Flow.

  1. a Type with a structure that can hold the result data of a REST call
  2. a Type with a structure that can be mapped to a JET line graph

A getCitiesResponse type was already created by VBCS for the response of the REST call. This is the final result I want:

Action Chain

I create an Action chain that will do these steps:

  • Show notification with City selected
  • Call measurement REST endpoint and map to linegraphDatatype
  • Map linegraphDatatype to linegraphType using JavaScript
  • Navigate to the line-graph page

When I open the Actions Tab for the main-start Page, I see that an Action Chain was already created. This Action Chain saves the Primary Key of the row selected in my city Table.

I now create the Action Chain mentioned above. In this Action Chain I create a variable and assign it the page variable with the selected City as its default.

 

Next I drop a Fire Notification Action on the +-sign below Start.

I set the Display Mode to transient and specify the Message as

{{ "AirQuality data for " + $chain.variables.selectedCity + " is being retrieved." }}

The measurement REST endpoint is called with a Call REST Endpoint Action. selectedCity from the Action Chain is mapped to the city parameter of this Action.

The Result of this Action has to be mapped to the linegraphData variable using an Assign Variables Action.

I need to convert this linegraphData array to my linegraph object. For this I will call a piece of JavaScript. First I create a function in the main Flow JavaScript.

The complete piece of JavaScript can be found in the Application Export that is attached to this blog.

This JavaScript function can be called with a Call Module Function Action; VBCS recognizes the function added in the JavaScript. The linegraphData variable needs to be mapped to the chartData parameter.

The result from the JavaScript function needs to be mapped to the linegraph variable using an Assign Variables Action.

Finally I navigate to the main-graph Page using a Navigate Action.

A quick way to call this Action Chain (and get the line graph) is by calling it from the already existing Action Chain that handles the selection of a row in the Cities table. I add a Call Action Chain Action.

UI Components

The linegraph variable is now ready to be used by our graph. In the Data Tab of the Chart we set the Groups and Series.

To get a readable layout for the date-axis, I enable the x-axis as time-axis-type:

Everything together

The final graph for Amsterdam:

Amsterdam pm25 graph

The VBCS export can be downloaded here

The post 5 main building blocks of the new Visual Builder Cloud Service appeared first on AMIS Oracle and Java Blog.

Error in date comparison for partitioned table

Tom Kyte - 5 hours 37 min ago
Hi Tom, I am seeing a strange issue with a query which queries data from a partitioned table having sub-partitions. Please see table, query and the error. Can you please help, what could be the reason for this error? CREATE TABLE trans_de...
Categories: DBA Blogs

Procedure Performance Number vs Pls_integer

Tom Kyte - 5 hours 37 min ago
I have a task to improve performance in some of the packages and procedures in our application. We have 1 package and it has around 15-20 subprogram procedures. Below are the clarifications I require. I am making changes to datatypes from NUMBER to...
Categories: DBA Blogs

Windocks – a different way to use SQL Server on Docker

Yann Neuhaus - Thu, 2018-05-24 13:14

I have spent some time writing blog posts about SQL Server on Docker over the past few months, and you likely noticed they mainly concerned SQL Server on Linux. But what about SQL Server on Docker and Windows? If you take a look at the Docker images provided by Microsoft, only the 2017 version is available and it is only intended for development and testing use. Versions prior to 2017 do not seem to be supported so far (though I don't know Microsoft's plans on this topic), and if you want to use containers with older SQL Server versions you have to start from scratch yourself. This is not a hard task, but I recall a discussion with one of my customers about the maintenance of SQL Server images prior to 2017: he told me he had not allocated resources to create and maintain such images himself.

In addition, I recently presented at different events, including our dbi services internal event for customers, the internal project we are managing around our DMK maintenance module tool for SQL Server. To cut a long story short, this tool aims to provide smart maintenance capabilities for customer databases, including backups, indexes, statistics, etc.

Over time, the code grew and we had to support different versions of SQL Server, from 2008 to SQL Server 2017. At the same time, the number of scenarios we have to manage and test, given the new features shipped with service packs and cumulative updates, increased drastically. So we decided to move to Docker containers for several obvious reasons:

  • Containers are easy to provision for unit testing
  • Lower footprint on our local machine
  • We can easily share SQL Server images between members of our “development” team in a private registry
  • We will be able to integrate containers into a potential “future” CI pipeline

In our context, each developer has a local Docker engine installed on a Windows 10 Pro laptop, but in medium/large companies internal rules may prevent installing such a product on each individual developer laptop. Therefore, they will likely prefer to provide a dedicated environment for developers that meets the internal rules while keeping the focus on flexibility.

In such a shared infrastructure, a main concern will probably be disk space, because of the potential size of the testing database(s) and the number of containers provisioned at the same time. Let's say several developers each want to provision one SQL Server instance with a testing database attached to it, coming from production through a sanitized database pipeline. The total size of this database is big enough to raise a potential disk space issue, because each instantiated container will include its own image size (roughly 10GB for SQL Server containers on Windows) plus the size of the testing database – let's say more than 50GB, and likely more in usual cases.

If you're in one of the aforementioned scenarios, Windocks may be a good alternative.

This month I had the opportunity to test the product in the context of our internal project, and after some experimentation I admit I was pleasantly surprised by some built-in capabilities, such as:

  • Windows authentication mode supported
  • SQL Server database cloning capabilities
  • The ability to use both Docker CLI and Web UI (even if I’m in favor of Docker CLI)
  • The ability to refresh a cloned database image from a differential backup

Before digging further into the above features, let's take a few seconds to look at the Windocks architecture design.

blog 133 - 1 - windocks - architecture

We may notice some differences here. With a traditional Docker architecture, containers run on a shared operating system kernel (either Windows or Linux), whereas Windocks is a strictly application construct-oriented solution and requires installing a SQL Server instance on the host as the base instance for SQL Server containers. This is a fundamental change that provides some interesting advantages:

  • We don’t have to rebuild the containers to get OS & Framework patches through base image updates because containers rely only on the SQL Server base instance
  • We don't break compatibility with Windows storage functionalities such as VSS and the SQL Writer, if you rely on them with third-party backup tools
  • We benefit from the underlying security and configuration of the SQL Server base instance, meaning we may use Windows authentication, for instance

Let's now talk about one interesting feature shipped with Windocks: cloned databases. Here is a picture (from Windocks) that shows a high-level workflow of using SQL Server containers with cloned databases.

blog 133 - 2 - windocks - cloned db architecture

The starting point is a full database backup or a snapshot, and Windocks will generate the corresponding VHD parent image from the backup media. Then each SQL Server container generated will use a writable clone of this parent image, drastically reducing the disk footprint of each container (at least when the SQL Server container is created). This is a common workflow that customers may implement for unit testing or for a CI/CD pipeline to refresh development environments. The production database may be big, and in this context the benefit of using cloned databases becomes obvious. Another option would be to rely on mounted snapshot-based volumes from the storage provider, but at this stage I didn't test it. Maybe next time!

To give a better picture of what is possible with cloned databases, let me show you one way to implement them in the context of our DMK maintenance tool project. The development workflow is as follows:

blog 133 - 3 - windocks - dev workflow

We use SSDT and GitLab to manage our DMK maintenance tool sources, and we perform unit testing by provisioning one or several SQL Server containers depending on the target version we want to validate at that moment. Furthermore, we developed testing scenarios based on the tSQLt framework that we run after provisioning the SQL Server containers. With such an architecture, we initially have to maintain images of the different SQL Server versions, and each time we want to create a container we have to attach one copy of the customized AdventureWorks_dbi database. Let's say we want to work with 4 containers (SQL Server 2016) at a time. We must ensure we have sufficient disk space for 4 copies of this database (5GB x 4 = 20GB) plus the space required for each container (10GB x 4 = 40GB), so 60GB in total. The dbi_tools database is intended to stay very small (512MB up to 1GB) compared to the other testing components, which is why I didn't include it in the math.

Let's now apply Windocks to the above context; the new architecture becomes as follows:

blog 133 - 4 - windocks - dev workflow

In this second scenario, we may include both the AdventureWorks_dbi and dbi_tools databases (including the tSQLt framework objects) in a VHD parent image. For the demo, I used a SQL Server 2016 instance installed on the host, which will be controlled by the Windocks service during container generation.

As I said previously, we may rely on the configuration and the security of the base instance. Thus, I configured security with both Windows authentication (the dbi-services\clustadmin domain account) for SQL Server sysadmins and SQL authentication (the windock user) for development purposes, as the server principals listing below shows.

In terms of configuration, the tSQLt framework requires enabling CLR at the server level, so I changed the configuration directly on the SQL Server base instance so that all my containers inherit this configuration change.
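
For reference, the server-level change is the classic sp_configure one. I don't show my exact commands in this post, but a minimal sketch looks like this (run once on the base instance and inherited by every container):

EXEC sp_configure 'clr enabled', 1;  -- tSQLt needs CLR at the instance level
RECONFIGURE;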

SELECT 
	[name],
	principal_id,
	[type_desc]
FROM 
	sys.server_principals 
WHERE 
	[type] IN ('U', 'S')
	AND [name] NOT LIKE '##MS_%##'
	AND [name] NOT LIKE 'NT SERVICE%'

 

blog 133 - 41- windocks - base instance security

Here is the content of my docker file.

FROM mssql-2016
SETUPCLONING FULL AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.bak
SETUPCLONING FULL dbi_tools \\Win2012sql16ls\windock\dbi_tools.bak

 

You may notice some new commands here:

  • FROM mssql-2016 indicates we will use the SQL Server 2016 instance as base image.
  • SETUPCLONING FULL indicates that the VHD parent image, which will include both the AdventureWorks_dbi and dbi_tools databases, should be generated in a remote network path

Let's go ahead and generate the corresponding SQL Server cloned Docker image with the -t flag:

docker build -t 2016withdmk C:\DMK\WithClone\BasedImage\

 

blog 133 - 5- windocks - cloned database image

The process of generating the VHD parent image may take some time, depending on the database sizes and the location (local fast disk, network share bandwidth, etc.).

blog 133 - 6- windocks - cloned db size

As expected, because the VHD parent image is a full byte copy of the data, the size is basically the sum of both AdventureWorks_dbi and dbi_tools database sizes.

Compared to a traditional approach, the provisioning of the associated containers will be faster irrespective of the database size. Let's create 2 containers from the image generated previously with the following docker commands:

PS C:\DMK\WithClone> docker run --name sql20161 -d 2016withdmk
PS C:\DMK\WithClone> docker run --name sql20162 -d 2016withdmk

 

Note that by default, Windocks will choose a random port within a range configured in node.conf (START_PORT / PORTS_PER_USER), unless you override the default behavior using the -p parameter.

blog 133 - 7- windocks - create containers

Let’s get a picture of the existing containers. My 2 containers have been generated correctly from the 2016withdmk base image.

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  5 minutes ago       Started             1433/
4dfd1a198626        2016withdmk         ""                  4 minutes ago       Started             1434/

 

Let’s take another look at the storage side:

blog 133 - 8 - windocks - cloned databases diff

The interesting point here is that the two differential disks created during the creation of my 2 previous containers are very small (~70MB per container).

Let’s try to connect from SSMS to the new fresh containers:

blog 133 - 9 - windocks - ssms connection

It works, and did you notice I was connected with my domain account? :)
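
As a quick sanity check, a simple query against one of the containers should confirm that both cloned databases are there and online (a minimal sketch; the database names are the ones used in the docker file above):

SELECT name, state_desc, create_date
FROM sys.databases
WHERE name IN (N'AdventureWorks_dbi', N'dbi_tools');  -- both come from the cloned VHD parent image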

The last topic I wanted to share with you is the Windocks capability to update a base image from differential backups. In a real context, you may need to refresh your environment frequently with recent data for developers, and depending on your full backup size it might be advantageous to deal with differential backups.

Let's walk through this process with my internal project environment. During development we already had to evolve the schema of our testing database to cover the features shipped with new versions, service packs or cumulative updates over time. Let's say we have to add a new dbo.bigTransactionHistory_cci table to test columnstore index scenarios. We first add the table concerned. Then we perform a differential backup, and finally we update the 2016withdmk base image with it. Obviously, in the context of my demo, the database size is likely not big enough to take full advantage of this feature, but I trust you to draw the comparison with a more realistic scenario.

The image I want to update is named 2016withdmk. Note the mssql-xxx images, which are in fact images of the SQL Server base instances installed on my host server.

PS C:\DMK\WithClone> docker images
REPOSITORY            TAG                 IMAGE ID            CREATED                  VIRTUAL SIZE
2016withdmk           none                b3a249ba-2cf        Less than a second ago   0 B
agitated_heisenberg   none                bbd0ce26-4bb        Less than a second ago   0 B
dotnet-4.5            none                dotnet-4.5          3 years ago              0 B
windows               none                windows             3 years ago              0 B
mssql-2008r2          none                mssql-2008r2        3 years ago              0 B
mssql-2016            none                mssql-2016          3 years ago              0 B
mssql-2014            none                mssql-2014          3 years ago              0 B

 

My new docker file content to update the 2016withdmk base image is as follows.

FROM 2016withdmk

SETUPCLONING DIFF AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF
SETUPCLONING DIFF dbi_tools \\Win2012sql16ls\windock\dbi_tools.DIFF

 

I used the differential backup of the AdventureWorks_dbi database with the SETUPCLONING DIFF command.

Let’s start updating the 2016withDMK base image with the following docker command (I tagged my new image with 1.2 suffix):

docker build -t 2016withdmk1.2 C:\DMK\WithClone\DiffImage\

 

blog 133 - 10 - windocks - cloned database diff backup

Although using a differential backup may provide a substantial productivity gain, keep in mind that updating a VHD parent image requires creating an additional VHD parent image, which becomes another full byte copy of the environment, as shown below:

blog 133 - 11- windocks - cloned db size diff

After running the same docker run command shown before, let's get a picture of the containers running on my server. The container id 789ce49562d0 is the new fresh container with updated data (the 2016withdmk1.2 image).

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  About an hour ago   Started             1433/
4dfd1a198626        2016withdmk         ""                  About an hour ago   Started             1434/
789ce49562d0        2016withdmk1.2      ""                  2 minutes ago       Started             1436/

 

I can retrieve the new dbo.bigTransactionHistory_cci table after connecting to the SQL Server instance concerned. I'm now able to update and execute tSQLt scenarios to test columnstore index maintenance!
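
Running the tSQLt scenarios from there boils down to something like the following (a sketch; our real test classes are part of the DMK project and not shown here):

EXEC tSQLt.RunAll;                           -- run every test class installed in dbi_tools
-- EXEC tSQLt.Run 'ColumnstoreMaintenance';  -- or run a single (hypothetical) test class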

blog 133 - 12- windocks - ssms connection 2

 

In this blog post we've just scratched the surface of the capabilities and possibilities provided by Windocks. There are other interesting features, such as encrypted passwords in the docker file (as we may use with Docker Swarm, for instance), support for TDE databases and, lastly, the ability to manage SQL Server on Linux. I will probably blog about them in the near future. Stay tuned!

The post Windocks – a different way to use SQL Server on Docker appeared first on Blog dbi services.

New OA Framework 12.2.4 Update 19 Now Available

Steven Chan - Thu, 2018-05-24 10:54

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.4 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.4 users should apply this patch. Future OAF patches for EBS Release 12.2.4 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.4 bundle patches.

This latest bundle patch includes a fix for the following issue:

  • In the IE11 browser, selecting a value from a LOV causes the page to scroll to the top.


Categories: APPS Blogs

Oracle Cloud Infrastructure CLI on Developer Cloud

OTN TechBlog - Thu, 2018-05-24 10:00

With the May 2018 release of Oracle Developer Cloud, we have integrated the Oracle Cloud Infrastructure command line interface (from here on referred to as OCIcli) into the build pipeline in Developer Cloud. This blog will help you understand how you can configure and execute OCIcli commands as part of the build pipeline configured in a build job in Developer Cloud.

Configuring the Build VM Template for OCIcli

You will have to create a build VM with the OCIcli software bundle to be able to execute the build with OCIcli commands. Click on the user drop-down at the top right of the page and select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give the template a name of your choice, select “Oracle Linux 7” as the platform, and then click the Create button.

Once the template is created, click on the “Configure Software” button.

Select OCIcli from the list of software bundles available for configuration and click on the + sign to add it to the template. You will also have to add the Python3.5 software bundle, which is a dependency for OCIcli. Then click on “Done” to complete the software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which is “OCIcli” for this blog.

Build Job Configuration

Configure the Tenancy OCID as a build parameter using a String Parameter and give it a name of your choice. I have named it "T" and provided a default value, as shown in the screenshot below.

In the Builders tab, select OCIcli Builder and then a Unix Shell Builder, in this sequence, from the Add Builder drop-down.

When you add the OCIcli Builder, you will see the form below.

For the OCIcli Builder, you can get the parameters from the OCI console. The screenshots below show where to get each of these form values in the OCI console. The red boxes highlight where you can find the Tenancy OCID and the region for the “Tenancy” and “Region” fields respectively in the OCIcli Builder form.

For the “User OCID” and “Fingerprint” you need to go to User Settings by clicking the username drop-down at the top right of the OCI console. Please refer to the screenshot below.

Please refer to the links below to understand the process of generating the private key and configuring the public key for the user in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

In the Unix Shell Builder you can try out the below command:

oci iam compartment list -c $T

This command lists all the compartments in the tenancy whose OCID is given by the variable 'T' that we configured as a String Parameter in the Build Parameters tab.

 

After the command executes, you can view the output in the console log, as shown below.

There are tons of other OCIcli commands that you can run as part of the build pipeline. Please refer to this link for more.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Postgres, the fsync() issue, and ‘pgio’ (the SLOB method for PostgreSQL)

Yann Neuhaus - Thu, 2018-05-24 08:05

That's a long blog post title, which is actually just a good pretext to play with Kevin Closson's SLOB method for PostgreSQL: pgio
I use the beta version of pgio here. If you want to read more about it, you can start with https://kevinclosson.net/2018/05/22/sneak-preview-of-pgio-the-slob-method-for-postgressql-part-i-the-beta-pgio-readme-file/. If you are used to SLOB for Oracle (https://kevinclosson.net/slob/) you will quickly understand the ‘why’ and ‘how’ of pgio.

PostgreSQL’s fsync() surprise

You may have read about the fsync() issue. Postgres, from the beginning, relies a lot on filesystem buffering to optimize I/O. So it write()s to the data files but fsync()s only at checkpoints. This is ok when everything goes well, because the writes since the last checkpoint are protected by the Write Ahead Log, where fsync() occurs for each write at commit (if you didn't change the default parameters for WAL). But when a problem occurs, such as a power outage, some writes may be lost, or partially lost, and that's not easy to detect at checkpoint time with fsync().
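
As a reminder, here are the WAL durability parameters mentioned above, as you can check them from any session (a quick sketch; the values in the comments are the usual defaults):

show fsync;               -- on: data files and WAL are flushed with fsync() or a variant
show synchronous_commit;  -- on: a commit waits until its WAL records are flushed to disk
show wal_sync_method;     -- fdatasync on Linux: the call used to flush the WAL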

So, basically, there’s a risk of corruption and there are no easy ways to detect it.

You can read the details from https://lwn.net/Articles/752063/ and that’s not the subject of this post.

Most of the other databases are opening files with O_DSYNC, which means that the write() call will detect the error immediately. And the major ones are doing direct I/O anyway because they have their own buffer cache and do not need the performance overhead and corruption risk of double buffering.

Why is this so hard to fix?

So why is it so hard to do the same with Postgres? Just because it was not initially designed to optimize I/O, and Postgres relied heavily on the OS filesystem for that. The database systems which sync at each write, and which can do direct I/O, have implemented many optimizations to reduce the overhead of disk latency at each write. They have their own buffer cache, with a background database writer which re-orders the writes in the optimal way. And they have multiblock writes for large contiguous writes which bypass the buffer cache.

However, you may have a storage system where write latency is minimal, and you may have an application where the overhead here is not a big problem. This means that you should measure it in order to balance between performance and prevention of corruption. And this is where the SLOB method is awesome: reliable and predictable metrics to measure IOPS.

pgio to the rescue

This is my first trial of pgio, in its beta version. It could not be easier. I've just untarred it:

tar -xvf pgio-0.9.tar
cd pgio

I've set up pgio.conf with 4 schemas and 2 threads per schema:

UPDATE_PCT=10
RUN_TIME=300
NUM_SCHEMAS=4
NUM_THREADS=2
WORK_UNIT=255
UPDATE_WORK_UNIT=8
SCALE=200M
DBNAME=pgio
CONNECT_STRING="pgio"
CREATE_BASE_TABLE=TRUE

Because I want to test writes, I've set UPDATE_PCT so that 10% of calls will do an UPDATE. And I kept the default work unit to read 255 blocks and, for those 10% of updates, update only 8 blocks. I'll run that with 2 threads per schema, which means 8 concurrent sessions. And they will run for 300 seconds.

In this test I didn't want to set different values. I just want to see what happens in IOPS for a common workload of lots of reads and small changes. The scale is 200M here. My workload sessions will find their buffers in memory.

On each test, I’ve created the pgio database:

create tablespace pgio location '/u01/pgdata';
CREATE TABLESPACE
create database pgio tablespace pgio;
CREATE DATABASE

Then run the setup.sh to load data in those schemas:

Job info: Loading 200M scale into 4 schemas as per pgio.conf->NUM_SCHEMAS.
Batching info: Loading 2 schemas per batch as per pgio.conf->NUM_THREADS.
Base table loading time: 15 seconds.
Waiting for batch. Global schema count: 2. Elapsed: 0 seconds.
Waiting for batch. Global schema count: 3. Elapsed: 103 seconds.
Waiting for batch. Global schema count: 4. Elapsed: 177 seconds.
Waiting for batch. Global schema count: 4. Elapsed: 249 seconds.
 
Group data loading phase complete. Elapsed: 249 seconds.
List of relations
Schema | Name | Type | Owner | Size | Description
--------+-----------+-------+----------+--------+-------------
public | pgio1 | table | postgres | 200 MB |
public | pgio2 | table | postgres | 200 MB |
public | pgio3 | table | postgres | 200 MB |
public | pgio4 | table | postgres | 200 MB |
public | pgio_base | table | postgres | 29 MB |
(5 rows)

And then I’m ready to run the runit.sh

ext4 mount option

My tablespace is on an ext4 filesystem:

-bash-4.2$ df -HT /u01/pgdata
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb ext4 32G 1.2G 29G 4% /u01

I'll run the same workload several times, changing only one mount option:

async All I/O to the filesystem should be done asynchronously. (See also the sync option.)
sync All I/O to the filesystem should be done synchronously. In case of media with limited number of write cycles (e.g. some flash drives) "sync" may cause life-cycle shortening.

Which means that some runs will run with /u01 mounted as:

/dev/sdb on /u01 type ext4 (rw,nosuid,nodev,relatime,sync,seclabel,data=ordered)

and some others will run with the default (async):

/dev/sdb on /u01 type ext4 (rw,nosuid,nodev,relatime,seclabel,data=ordered)

I did multiple runs and checked that the result is consistent among them. I’ll show only one result for each configuration.

Run it with async

Here is the output of one ‘runit.sh’ when /u01 was in async:

Date: Thu May 24 10:56:57 CEST 2018
Database connect string: "pgio".
Shared buffers: 128MB.
Testing 4 schemas with 2 thread(s) accessing 200M (25600 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 2 thread(s) each.
pg_stat_database stats:
datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE: pgio | 252209 | 118112 | 110420 | 6788 | 18
AFTER: pgio | 25189171 | 136972696 | 159128092 | 147250205 | 573216
DBNAME: pgio. 4 schemas, 2 threads(each). Run time: 300 seconds. RIOPS >456181< CACHE_HITS/s >83123<

This shows that, within those 5 minutes, I’ve fetched 147243417 tuples and updated 573198 ones.

pgio takes snapshots of iostat, vmstat and mpstat. Here is a sample after 1 minute of the run, showing that all CPUs are busy in user or kernel mode, but not waiting on I/O latency:

10:57:51 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:57:54 AM all 52.82 0.00 42.22 0.09 0.00 1.11 0.00 0.00 0.00 3.76
10:57:54 AM 0 54.11 0.00 40.75 0.00 0.00 1.37 0.00 0.00 0.00 3.77
10:57:54 AM 1 54.42 0.00 40.14 0.34 0.00 1.02 0.00 0.00 0.00 4.08
10:57:54 AM 2 51.19 0.00 43.34 0.34 0.00 0.68 0.00 0.00 0.00 4.44
10:57:54 AM 3 51.02 0.00 44.22 0.34 0.00 1.36 0.00 0.00 0.00 3.06
10:57:54 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:57:57 AM all 53.33 0.00 42.15 0.00 0.00 1.02 0.00 0.00 0.00 3.50
10:57:57 AM 0 53.95 0.00 42.27 0.00 0.00 0.69 0.00 0.00 0.00 3.09
10:57:57 AM 1 52.56 0.00 42.66 0.00 0.00 0.68 0.00 0.00 0.00 4.10
10:57:57 AM 2 54.27 0.00 40.27 0.00 0.00 1.37 0.00 0.00 0.00 4.10
10:57:57 AM 3 52.72 0.00 43.54 0.00 0.00 1.36 0.00 0.00 0.00 2.38
10:57:57 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:58:00 AM all 54.10 0.00 41.54 0.09 0.00 0.77 0.00 0.00 0.00 3.50
10:58:00 AM 0 55.14 0.00 39.38 0.34 0.00 1.03 0.00 0.00 0.00 4.11
10:58:00 AM 1 54.95 0.00 40.96 0.00 0.00 0.68 0.00 0.00 0.00 3.41
10:58:00 AM 2 54.11 0.00 41.10 0.00 0.00 0.68 0.00 0.00 0.00 4.11
10:58:00 AM 3 52.05 0.00 44.86 0.00 0.00 0.68 0.00 0.00 0.00 2.40
10:58:00 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle

Run it with sync

Here is the output of one ‘runit.sh’ when /u01 was in sync:

Date: Thu May 24 12:18:54 CEST 2018
Database connect string: "pgio".
Shared buffers: 128MB.
Testing 4 schemas with 2 thread(s) accessing 200M (25600 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 2 thread(s) each.
pg_stat_database stats:
datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE: pgio | 255169 | 119163 | 112734 | 6945 | 18
AFTER: pgio | 15040938 | 74598977 | 87775490 | 86742056 | 337889
DBNAME: pgio. 4 schemas, 2 threads(each). Run time: 300 seconds. RIOPS >248266< CACHE_HITS/s >49285<

This shows that, within those 5 minutes, I've fetched 86735111 tuples and updated 337871 ones. So, basically, the IOPS have been divided by two in this example when waiting for each write to be synced to disk.

pgio takes snapshots of iostat, vmstat and mpstat. Here is a sample after 1 minute of the run, showing that all CPUs are about 30% idle, waiting on I/O completion:

12:19:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:19:54 PM all 29.95 0.00 27.79 10.88 0.00 1.26 0.00 0.00 0.00 30.13
12:19:54 PM 0 30.63 0.00 27.46 11.27 0.00 0.70 0.00 0.00 0.00 29.93
12:19:54 PM 1 30.07 0.00 27.62 12.24 0.00 0.70 0.00 0.00 0.00 29.37
12:19:54 PM 2 30.28 0.00 27.82 10.92 0.00 0.35 0.00 0.00 0.00 30.63
12:19:54 PM 3 28.02 0.00 28.02 8.56 0.39 3.89 0.00 0.00 0.00 31.13
12:19:54 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:19:57 PM all 30.10 0.00 27.92 11.24 0.00 1.00 0.00 0.00 0.00 29.74
12:19:57 PM 0 29.29 0.00 28.57 10.71 0.00 0.36 0.00 0.00 0.00 31.07
12:19:57 PM 1 30.88 0.00 28.07 11.93 0.00 0.35 0.00 0.00 0.00 28.77
12:19:57 PM 2 30.31 0.00 27.18 12.54 0.00 0.70 0.00 0.00 0.00 29.27
12:19:57 PM 3 30.43 0.00 27.67 9.88 0.00 2.77 0.00 0.00 0.00 29.25
12:19:57 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:20:00 PM all 29.51 0.00 27.00 10.76 0.00 1.08 0.00 0.00 0.00 31.66
12:20:00 PM 0 29.58 0.00 28.17 10.56 0.00 0.35 0.00 0.00 0.00 31.34
12:20:00 PM 1 29.72 0.00 26.22 12.24 0.00 0.70 0.00 0.00 0.00 31.12
12:20:00 PM 2 29.12 0.00 26.32 10.88 0.00 0.35 0.00 0.00 0.00 33.33
12:20:00 PM 3 29.34 0.00 27.80 8.88 0.00 3.09 0.00 0.00 0.00 30.89
12:20:00 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle

So what?

Currently, Postgres relies on the filesystem to optimize I/O, but there's a risk of corruption in case of failure. We can force waits for I/O completion with the 'sync' mount option of the filesystem, or even with file attributes (chattr -R +S) for ext4 or xfs, but there's a performance penalty. The important thing is to measure this penalty, and this is where pgio is great: it measures the performance penalty with a workload that is customizable (amount of changes, amount of data, …) but also predictable (it does not depend on other parameters like an application benchmark would). When you know how being in 'sync' impacts your system, you can choose. And we can bet that future versions of Postgres will improve and offer ways to stay efficient without compromising the data at the first power outage.

 

The post Postgres, the fsync() issue, and ‘pgio’ (the SLOB method for PostgreSQL) appeared first on Blog dbi services.

XMLQuery

Tom Kyte - Thu, 2018-05-24 07:46
Hi Tom, I am trying to learn XQuery use with SQL but it looks very complicated. Can you please advise with some simple cases (I am not interested in XML generation but using XML and XQuery functions with relational data tables). Where to start? Wh...
Categories: DBA Blogs

Oracle Database - Grant/Revoke High Concurrency

Tom Kyte - Thu, 2018-05-24 07:46
We have an Oracle 10g release 2 database running on a production environment. It's experiencing a lot of concurrency, as Sql Developer 17.4 "Waits for past 1 hour" graph shows. When the database is behaving slow, we take a look at that graph, and ...
Categories: DBA Blogs

Can I use automatic List in subpartitions?

Tom Kyte - Thu, 2018-05-24 07:46
Dears, I have a table that contains two columns one for year and the other for month. I need to partition this table based on year and month, where year represent the partitions and under that the month represent that sub-partitions. What I need i...
Categories: DBA Blogs

Multiple block allocation to small table

Tom Kyte - Thu, 2018-05-24 07:46
Hi, I executed below query on my database and found given output: select a.table_name, a.NUM_ROWS, a.AVG_ROW_LEN, a.LAST_ANALYZED, a.SAMPLE_SIZE, a.blocks from user_tables a where num_rows <10; Output: TABLE_NA...
Categories: DBA Blogs

Nested loop and hash join.

Tom Kyte - Thu, 2018-05-24 07:46
Hi Tom, Can you help me in understanding how optimizer decides which join ( hash or nested loop) it will use for joining. Also which is the driving table in nested loop. There are lot of confusing answers on this on internet, which one to rely...
Categories: DBA Blogs

Create a physical standby for 12c RAC

Tom Kyte - Thu, 2018-05-24 07:46
hi - this weekend we have a project that is going live. we will be importing data (about 1TB) into the database. after that we want to create the physical standby. what is the best, efficient and most proven way to create a physical standby database ...
Categories: DBA Blogs

How to gather statistics on a standard edition database

Tom Kyte - Thu, 2018-05-24 07:46
Hi, I'd like to gather some statistics on long running statements on a standard edition database. Can you please suggest the best way to gather stats on this statement? BANNER ...
Categories: DBA Blogs

Limit and conversion very long IN list : WHERE x IN ( ,,, ...)

Tom Kyte - Thu, 2018-05-24 07:46
How many elements may be in the WHERE x IN (,,,) list ? I see 2 ways to overcome IN list limitation: 1) use x=el_1 OR x=el_2 OR x=el_3 OR ... 2) create temporary table , but another question arise here: why create table A( X INTEGER, Y...
Categories: DBA Blogs

ADWC – System and session settings (DWCS lockdown profile)

Yann Neuhaus - Thu, 2018-05-24 05:04

The Autonomous Data Warehouse Cloud service is a PaaS managed service where we have a PDB and an ADMIN user which has most of the system privileges. For example, we have the privilege to change initialization parameters:
SQL> select * from dba_sys_privs where grantee=user and privilege like 'ALTER S%';
 
GRANTEE PRIVILEGE ADMIN_OPTION COMMON INHERITED
------- --------- ------------ ------ ---------
ADMIN ALTER SESSION YES NO NO
ADMIN ALTER SYSTEM YES NO NO

Still, not everything is allowed, for several reasons: to ensure that we cannot break the Oracle-managed CDB, and to force us to use only the features allowed in the ‘autonomous’ service. This is enforced with a lockdown profile:
SQL> show parameter pdb_lockdown
 
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string DWCS

DWCS means Data Warehouse Cloud Service, which was the name of the Autonomous Data Warehouse Cloud service until Larry Ellison announced this self-driving, no-human trend under the marketing umbrella of ‘autonomous’.

The limitations are all documented, but I like to see them for myself, and in 18c we have a means to see the lockdown rules from the PDB itself, through V$LOCKDOWN_RULES.

ALTER SYSTEM

Basically, in this ADWC all ALTER SYSTEM statements are disallowed, and then a few exceptions are added for what we are allowed to do:

SQL> select * from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause_option is null;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SYSTEM DISABLE ALL 73
STATEMENT ALTER SYSTEM SET ENABLE COMMON 73
STATEMENT ALTER SYSTEM KILL SESSION ENABLE ALL 73

You can ignore what is enabled for COMMON users because we have no common user to connect to our PDB. We will see which ALTER SYSTEM SET clauses are allowed. But in addition to those, only ‘KILL SESSION’ is allowed for ALTER SYSTEM.
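
So, for example, this is still possible from the ADMIN user (the sid and serial# below are just placeholders):

alter system kill session '123,456' immediate;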

Here is the detail about the parameters we can set:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='ENABLE';
 
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
APPROX_FOR_AGGREGATION STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace exact aggregation with approximate aggregation
APPROX_FOR_COUNT_DISTINCT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace count distinct with approx_count_distinct
APPROX_FOR_PERCENTILE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 none Replace percentile_* with approx_percentile
AWR_PDB_AUTOFLUSH_ENABLED STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE Enable/Disable AWR automatic PDB flushing
NLS_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICAN NLS language name
NLS_SORT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS linguistic definition name
NLS_TERRITORY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICA NLS territory name
NLS_CALENDAR STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS calendar system name
NLS_COMP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BINARY NLS comparison
NLS_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS local currency symbol
NLS_DATE_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DD-MON-YYYY HH24:MI:ss NLS Oracle date format
NLS_DATE_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS date language name
NLS_DUAL_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 Dual currency symbol
NLS_ISO_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS ISO currency territory name
NLS_LENGTH_SEMANTICS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BYTE create columns using byte or char semantics by default
NLS_NCHAR_CONV_EXCP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE NLS raise an exception instead of allowing implicit conversion
NLS_NUMERIC_CHARACTERS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS numeric characters
NLS_TIMESTAMP_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time stamp format
NLS_TIMESTAMP_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 timestamp with timezone format
NLS_TIME_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time format
NLS_TIME_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time with timezone format
OPTIMIZER_IGNORE_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables the embedded hints to be ignored
OPTIMIZER_IGNORE_PARALLEL_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables embedded parallel hints to be ignored
PLSCOPE_SETTINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 identifiers:all plscope_settings controls the compile time collection, cross reference, and storage of PL/SQL source code identifier and SQL statement data
PLSQL_CCFLAGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 PL/SQL ccflags
PLSQL_DEBUG STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE PL/SQL debug
PLSQL_OPTIMIZE_LEVEL STATEMENT ALTER SYSTEM SET ENABLE ALL 73 1 PL/SQL optimize level
PLSQL_WARNINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DISABLE:ALL PL/SQL compiler warnings settings

The APPROX_ ones, disabled by default, can be used to transparently use approximations for faster results.
The NLS_ ones can be used to set NLS defaults for our sessions.
The OPTIMIZER_IGNORE_ ones are new in 18c and are set by default here to ignore embedded hints. However, we can set them to false.
The PLSQL_ ones are the defaults for sessions, and I don't understand why warnings are not enabled by default. Fortunately, we are able to change that at PDB level.
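
For example, based on the ENABLE list above, the following should be accepted by the DWCS lockdown profile (a minimal sketch, run as ADMIN):

alter system set plsql_warnings='ENABLE:ALL';    -- enable PL/SQL compiler warnings for the PDB
alter system set optimizer_ignore_hints=false;   -- stop ignoring embedded hints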

There are also some rules to disable some ALTER SYSTEM SET clauses. They are there for the common users only (which have ALTER SYSTEM SET enabled), but they are interesting because they show what Oracle chose to set in the ADWC service and what cannot be changed in the PDB, even by their common users:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='DISABLE';
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
DB_FILES STATEMENT ALTER SYSTEM SET DISABLE ALL 73 25 max allowable # db files
"_PDB_INHERIT_CFD" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_AUDIT_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_DIAG_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
MAX_IDLE_TIME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 60 maximum session idle time in minutes
PARALLEL_DEGREE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 AUTO policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO/ADAPTIVE)
_PARALLEL_CLUSTER_CACHE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 ADAPTIVE policy used for parallel execution on cluster(ADAPTIVE/CACHED)
_ENABLE_PARALLEL_DML STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enables or disables parallel dml
RESULT_CACHE_MODE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE result cache operator usage mode
RESULT_CACHE_MAX_RESULT STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 maximum result size as percent of cache size
RESOURCE_MANAGER_PLAN STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE:DWCS_PLAN resource mgr top plan
_CELL_OFFLOAD_VECTOR_GROUPBY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE enable SQL processing offload of vector group by
PARALLEL_MIN_DEGREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 CPU controls the minimum DOP computed by Auto DOP
_MAX_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 Maximum I/O size in bytes for sequential file accesses
_LDR_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation
_LDR_IO_SIZE2 STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation of EHCC with HWMB
_OPTIMIZER_GATHER_STATS_ON_LOAD_ALL STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online statistics gathering for nonempty segments
_OPTIMIZER_GATHER_STATS_ON_LOAD_HIST STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online histogram gathering for loads
_DATAPUMP_GATHER_STATS_ON_LOAD STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Gather table statistics during Data Pump load rather thanimporting statistics from the dump file. This should be set to TRUE in the lockdown profile in a DWCS environment.
_OPTIMIZER_ANSWERING_QUERY_USING_STATS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable statistics-based query transformation
_PX_XTGRANULE_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 128000 default size of a external table granule (in KB)
_OPTIMIZER_ALLOW_ALL_ACCESS_PATHS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE allow all access paths
_DATAPUMP_INHERIT_SVCNAME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Inherit and propagate service name throughout job
_DEFAULT_PCT_FREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 Default value of PCT_FREE enforced by DWCS lockdown

So, among the interesting ones, Result Cache is forced for all results (RESULT_CACHE_MODE=FORCE), Parallel DML is enabled for all sessions (but we will see that we can disable it at session level), PCTFREE will always be 1 (_DEFAULT_PCT_FREE=1), and statistics are gathered during load (this is an 18c feature). And we cannot change that.
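
A quick way to verify those forced settings from our PDB is to query v$parameter (a simple sketch):

select name, value from v$parameter
where name in ('result_cache_mode','parallel_degree_policy','resource_manager_plan');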

There are only a few additional ALTER SESSION SET clauses which are allowed, beyond the ALTER SYSTEM SET ones:

SQL> select * from v$lockdown_rules where rule in ('ALTER SESSION') and clause is not null and clause_option is not null
and (clause_option,status,users) not in (select clause_option,status,users from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause is not null and clause_option is not null)
;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION SET CONTAINER ENABLE ALL 73
STATEMENT ALTER SESSION SET CURRENT_SCHEMA ENABLE ALL 73
STATEMENT ALTER SESSION SET EDITION ENABLE ALL 73
STATEMENT ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES ENABLE ALL 73
STATEMENT ALTER SESSION SET DEFAULT_COLLATION ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL ENABLE ALL 73
STATEMENT ALTER SESSION SET TIME_ZONE ENABLE ALL 73

Besides the parameters, here is what we can do with ALTER SESSION:

SQL> select * from v$lockdown_rules where rule='ALTER SESSION' and clause_option is null;
 
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION DISABLE ALL 73
STATEMENT ALTER SESSION SET ENABLE COMMON 73
STATEMENT ALTER SESSION ADVISE COMMIT ENABLE ALL 73
STATEMENT ALTER SESSION CLOSE DATABASE LINK ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL QUERY ENABLE ALL 73
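
So, for example, the following should be accepted at session level, which is handy to opt out of the parallel DML that is enabled at PDB level (a minimal sketch based on the rules above):

alter session disable parallel dml;   -- opt out of the PDB-wide parallel DML
alter session force parallel query;   -- or force parallel query for this session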

I'll show the other rules (other than ALTER SYSTEM and ALTER SESSION statements) in a future post. Lockdown profiles are a great feature because they have very fine granularity and make it easy to document what is allowed or not. Oracle introduced them for their own usage in the public cloud. You can use the same on-premises for your private cloud. This requires the multitenant architecture, but the option is not mandatory.

 

The post ADWC – System and session settings (DWCS lockdown profile) appeared first on Blog dbi services.

EMEA Edge Conference 2018

Anthony Shorten - Wed, 2018-05-23 19:41

I will be attending the EMEA Oracle Utilities Edge Conference on 26-27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions covering both on-premise implementations and the Oracle Utilities Cloud Services. This forum is open to Oracle Utilities customers and Oracle Utilities partners.

The sessions mirror the technical sessions for the conference in the USA held earlier this year with the following topics:

  • Reducing Your Storage Costs Using Information Life-cycle Management: With increasing storage costs, satisfying business data retention rules can be challenging. Using the Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities: Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation: Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost of Ownership.
  • Testing Your On-Premise and Cloud Implementations: Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations: With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option: The Oracle Database In-Memory option allows both OLTP and Analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
  • Developing Extensions using Groovy: Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature.
  • Ask Us Anything Session: Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

Note: These sessions are not recorded, nor are materials distributed outside this forum.

This year we have decided not only to discuss capabilities but also to give an idea of how we use those facilities in our own cloud implementations to reduce operating costs, which you can use as a template for on-premise and hybrid implementations.

See you there if you are attending.

If you wish to attend, contact your Oracle Utilities local sales representative for details of the forum and the registration process.

Running Code as SYS From Another User not SYSDBA

Pete Finnigan - Wed, 2018-05-23 13:06
I have been embroiled in a twitter thread today about the post i made in this blog yesterday around granting privileges to a user and who should do the granting. Patrick today asked a further question: How do you make....[Read More]

Posted by Pete On 22/05/18 At 08:42 PM

Categories: Security Blogs

Who Should Grant Object Rights?

Pete Finnigan - Wed, 2018-05-23 13:06
Patrick Jolliffe posted a question via a tweet back in April but due to personal health pressures with a close relative of mine I have not had the time to deal with much over the last few months. I did....[Read More]

Posted by Pete On 21/05/18 At 07:08 PM

Categories: Security Blogs

Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Oracle Press Releases - Wed, 2018-05-23 12:43
Press Release
Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Redwood Shores, Calif.—May 23, 2018

Today, four plaintiffs’ law firms agreed to pay Oracle Corporation $270,000 to avoid Oracle’s motion for sanctions over their misconduct in a lawsuit related to Oracle’s acquisition of Micros Systems, Inc. in Case No. 13-C-14-099672, in the Circuit Court for Howard County, Maryland.

In 2014, Brower Piven, Robbins Arroyo LLC, Weisslaw LLC, and Pomerantz, LLP, along with other firms, sued Micros and its directors, claiming that Micros’s shareholders were not adequately informed about the transaction and that Oracle’s offer price was too low. The firms also sued Oracle for aiding and abetting the Micros board, despite the fact that Oracle simply engaged in arm’s-length negotiations to obtain the best price possible. The trial court dismissed all claims with prejudice against all defendants, including Oracle, and then courts at every level of the state system rejected five subsequent motions for reconsideration and appeals brought by these firms, affirming the trial judge’s finding that “with respect to Oracle, the Plaintiffs have failed to allege any acts, alleged acts, by agents or employees or Oracle that were impermissible under the law. The mere fact that Oracle pursued the merger is insufficient.” The outrageous litigation conduct drew the sanctions motion by Oracle, which the four firms identified above agreed to settle rather than defend.

“This substantial monetary settlement reflects the strength of our sanctions motion for what Oracle believes was clearly a “strike” suit brought against Oracle. For more than a decade, plaintiffs’ lawyers have brought these suits, challenging legitimate public mergers, in order to line their own pockets at the expense of shareholders. We are grateful that the Maryland courts recognized that the claims had no merit, and we urge other public companies to challenge these baseless suits. Shareholders’ attorneys cannot simply claim and collect what is effectively an unwarranted tax on mergers,” said Dorian Daley, Oracle’s General Counsel.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
