Feed aggregator

PDB upgrade from 12c to 18c

Yann Neuhaus - 3 hours 4 min ago

Oracle 18c is out, in the Oracle Cloud, and the first thing I do with a new version is to test how long it takes to upgrade a previous-version PDB by unplug/plug. A faster upgrade should be the benefit of having a slim dictionary where the system objects are reduced to metadata links and data links. However, it looks like upgrading the PDB dictionary still takes the same time as upgrading the CDB$ROOT.

The idea is to create a DBaaS service with a new CDB in 18.1 and plug in a PDB coming from 12.2.0.1. Actually, I’m saying 18.1 but that may be 18.0 as I’m now lost in those version numbers. The cloud service was created with version “18.0.0.0”; V$VERSION displays 18.0.0.0 for the release and 18.1.0.0 for the version:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

My understanding is that 18.0.0.0 is the version of the 18c dictionary, which will need a full upgrade only for 19c (19.0.0.0), and that 18.1.0.0 is the software version, which will be incremented by Release Updates later.

I have an unplugged PDB that I plug into the new CDB:
SQL> create pluggable database PDB0 using '/u01/app/temp/PDB0.pdb';
Pluggable database PDB0 created.
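
For reference, the unplug on the 12.2.0.1 source side is just a close and an unplug into the archive file (a sketch using the same path as above, not shown in the original run; run in the source CDB):

SQL> alter pluggable database PDB0 close immediate;
SQL> alter pluggable database PDB0 unplug into '/u01/app/temp/PDB0.pdb';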

When I open it, I get a warning:
SQL> alter pluggable database pdb0 open;
 
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB0 altered.

Then I check the PDB_PLUG_IN_VIOLATIONS view:
 
SQL> select * from pdb_plug_in_violations;
TIME NAME CAUSE TYPE ERROR_NUMBER LINE MESSAGE STATUS ACTION CON_ID
---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
24-FEB-18 08.35.16.965295000 PM PDB0 OPTION ERROR 0 1 Database option APS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.966343000 PM PDB0 OPTION ERROR 0 2 Database option CATALOG mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.966556000 PM PDB0 OPTION ERROR 0 3 Database option CATJAVA mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.966780000 PM PDB0 OPTION ERROR 0 4 Database option CATPROC mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.966940000 PM PDB0 OPTION ERROR 0 5 Database option CONTEXT mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967096000 PM PDB0 OPTION ERROR 0 6 Database option DV mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967250000 PM PDB0 OPTION ERROR 0 7 Database option JAVAVM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967403000 PM PDB0 OPTION ERROR 0 8 Database option OLS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967602000 PM PDB0 OPTION ERROR 0 9 Database option ORDIM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967785000 PM PDB0 OPTION ERROR 0 10 Database option OWM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.967939000 PM PDB0 OPTION ERROR 0 11 Database option SDO mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.968091000 PM PDB0 OPTION ERROR 0 12 Database option XDB mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.968246000 PM PDB0 OPTION ERROR 0 13 Database option XML mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.968398000 PM PDB0 OPTION ERROR 0 14 Database option XOQ mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 08.35.16.971138000 PM PDB0 Parameter WARNING 0 1 CDB parameter compatible mismatch: Previous '12.2.0' Current '18.0.0' PENDING Please check the parameter in the current CDB 1
24-FEB-18 08.35.17.115346000 PM PDB0 VSN not match ERROR 0 1 PDB's version does not match CDB's version: PDB's version 12.2.0.1.0. CDB's version 18.0.0.0.0. PENDING Either upgrade the PDB or reload the components in the PDB. 4

The messages are clear: all components have a 12.2.0.1 dictionary and must be upgraded to an 18.0.0.0.0 one.

The PDB is opened in MIGRATE mode with only RESTRICTED sessions enabled:
SQL> show pdbs
SP2-0382: The SHOW PDBS command is not available.
SQL> pdbs
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1201448 2 11:42:25 NONE 942476327 3958500
3 CDB1PDB MOUNTED NORMAL 942476327 2 19:58:55 NONE 942476327 3958500
4 PDB0 MIGRATE YES NEW 941386968 3 20:34:50 NONE 942476327 3958500

Then, here is the upgrade for this newly plugged PDB0:

[oracle@DBaaS18c 18c]$ dbupgrade -c PDB0
 
Argument list for [/u01/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catctl.pl] Run in c = PDB0
Do not run in C = 0
Input Directory d = 0
Echo OFF e = 1
Simulate E = 0
Forced cleanup F = 0
Log Id i = 0
Child Process I = 0
Log Dir l = 0
Priority List Name L = 0
Upgrade Mode active M = 0
SQL Process Count n = 0
SQL PDB Process Count N = 0
Open Mode Normal o = 0
Start Phase p = 0
End Phase P = 0
Reverse Order r = 0
AutoUpgrade Resume R = 0
Script s = 0
Serial Run S = 0
RO User Tablespaces T = 0
Display Phases y = 0
Debug catcon.pm z = 0
Debug catctl.pl Z = 0
 
catctl.pl VERSION: [18.0.0.0.0] STATUS: [Production] BUILD: [RDBMS_18.1.0.0.0_LINUX.X64_180103.1] ...

The build number mentions 18.1, built on 03-JAN-2018.

Look at the summary to see the time it takes:
Oracle Database Release 18 Post-Upgrade Status Tool 02-24-2018 21:36:5
[PDB0]  
Component Current Full Elapsed Time
Name Status Version HH:MM:SS
 
Oracle Server UPGRADED 18.1.0.0.0 00:13:37
JServer JAVA Virtual Machine UPGRADED 18.1.0.0.0 00:00:51
Oracle XDK UPGRADED 18.1.0.0.0 00:00:21
Oracle Database Java Packages UPGRADED 18.1.0.0.0 00:00:05
OLAP Analytic Workspace UPGRADED 18.1.0.0.0 00:00:11
Oracle Label Security UPGRADED 18.1.0.0.0 00:00:03
Oracle Database Vault UPGRADED 18.1.0.0.0 00:00:34
Oracle Text UPGRADED 18.1.0.0.0 00:00:11
Oracle Workspace Manager UPGRADED 18.1.0.0.0 00:00:18
Oracle Real Application Clusters UPGRADED 18.1.0.0.0 00:00:00
Oracle XML Database UPGRADED 18.1.0.0.0 00:00:49
Oracle Multimedia UPGRADED 18.1.0.0.0 00:01:03
Spatial UPGRADED 18.1.0.0.0 00:02:06
Oracle OLAP API UPGRADED 18.1.0.0.0 00:00:08
Upgrade Datapatch 00:00:05
Final Actions 00:00:09
Post Upgrade 00:00:02
Post Upgrade Datapatch 00:00:04
 
Total Upgrade Time: 00:20:47 [PDB0]  
Database time zone version is 26. It is older than current release time
zone version 31. Time zone upgrade is needed using the DBMS_DST package.
 
Grand Total Upgrade Time: [0d:0h:21m:10s]

[Screenshot: 18c CDB$ROOT upgrade summary]
Here we see 18.1, but the important number is the time: 21 minutes… Once again, I see no improvement in the time it takes to upgrade the PDB dictionary. This was on a service with 2 OCPUs, and I’ve run a whole CDB upgrade with a similar shape: the time to upgrade the CDB$ROOT is exactly the same – see the screenshot on the right.

Finally I open the PDB:

SQL> alter pluggable database pdb0 open;
Pluggable database PDB0 altered.
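
At this point, a quick look at DBA_REGISTRY from within the PDB should confirm the component versions (a sketch, not part of the original run):

SQL> alter session set container=PDB0;
SQL> select comp_id, version, status from dba_registry;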

And check that the violations are resolved:

SQL> select * from pdb_plug_in_violations where status <> 'RESOLVED';
 
TIME NAME CAUSE TYPE ERROR_NUMBER LINE MESSAGE STATUS ACTION CON_ID
---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
24-FEB-18 09.46.25.302228000 PM PDB0 OPTION WARNING 0 15 Database option RAC mismatch: PDB installed version 18.0.0.0.0. CDB installed version NULL. PENDING Fix the database option in the PDB or the CDB 4

Ok, I suppose I can ignore that as this is not RAC.
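
The upgrade summary also flagged the time zone file (version 26 vs. 31). Here is a minimal sketch of the DST upgrade with the DBMS_DST package, following the documented procedure (I did not run it here; the log table names are the documented defaults):

SQL> alter pluggable database pdb0 close immediate;
SQL> alter pluggable database pdb0 open upgrade;
SQL> exec DBMS_DST.BEGIN_UPGRADE(31);
SQL> alter pluggable database pdb0 close immediate;
SQL> alter pluggable database pdb0 open;
SQL> declare
       l_failures pls_integer;
     begin
       DBMS_DST.UPGRADE_DATABASE(l_failures,
         parallel => true, log_errors => true,
         log_errors_table => 'SYS.DST$ERROR_TABLE',
         log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
         error_on_overlap_time => false,
         error_on_nonexisting_time => false);
       DBMS_DST.END_UPGRADE(l_failures);
     end;
     /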

I’ve not seen a lot of differences in the dbupgrade output. There’s a new summary of versions before and after upgrade, which was not there in 12c:

DOC>#####################################################
DOC>#####################################################
DOC>
DOC> DIAG OS Version: linux x86_64-linux-thread-multi 2.6.39-400.211.1.el6uek.x86_64
DOC> DIAG Database Instance Name: CDB1
DOC> DIAG Database Time Zone Version: 31
DOC> DIAG Database Version Before Upgrade: 12.2.0.1.0
DOC> DIAG Database Version After Upgrade: 18.1.0.0.0
DOC>#####################################################
DOC>#####################################################

However, be careful with this information. The OS Version is not correct:

[opc@DB ~]$ uname -a
Linux DB 4.1.12-112.14.10.el6uek.x86_64 #2 SMP Mon Jan 8 18:24:01 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

It seems that this info comes from Config.pm, which records the OS version on which the Perl binaries were built…

In summary, nothing changes here about the time it takes to upgrade a PDB when plugged into a new CDB.
However, in 18c (and maybe only with later Release Updates) we should have a way to get this improved, by recording the upgrade of CDB$ROOT and re-playing a trimmed version on the PDB dictionaries, in the same way Application Containers do it for application upgrades. We already see some signs of it with the ‘_enable_cdb_upgrade_capture’ undocumented parameter and the PDB_UPGRADE_SYNC database property. This may even become automatic when the PDB is opened, with the PDB_AUTO_UPGRADE property. But that’s for the future, and not yet documented.

For the moment, you still need to run a full catupgrd on each container, through catctl.pl called by the ‘dbupgrade’ script. Here, on a 2 OCPU service, it takes about 20 minutes.

 

The article PDB upgrade from 12c to 18c first appeared on Blog dbi services.

dataguard switchover switch back

Tom Kyte - 5 hours 58 min ago
Hi Team, I am trying to do manual switch over not by using switch over command , this is just for my learning purpose . if my understanding about the facts are correct , below are the steps : 1. shutdown the primary copy control file and re...
Categories: DBA Blogs

finding min max from dataset

Tom Kyte - 5 hours 58 min ago
I have following initial dataset <code> F_ID L_CAT CHG_DT F1 VHL 01-FEB-2016 F1 VHL 10-FEB-2016 F1 VHL 15-FEB-2016 F1 MHL 20-FEB-2016 F1 VHL 25-FEB-2016 F1 VHL 28-FEB-2016 F1 MHL 05-MAR-2016 F1 MHL 10-MAR-2016 F2 VHL 01-F...
Categories: DBA Blogs

Using SQL profiles and baselines in Oracle 11g Express Edition

Tom Kyte - 5 hours 58 min ago
Hi I'm trying to test some scenarios on SQL profiles and baselines in Oracle 11g Express edition. Even if there are active profile or baseline on a query, when executed or generated execution plan, I can see that it is not using them. Is there a...
Categories: DBA Blogs

Configuring huge pages for your PostgreSQL instance, Debian version

Yann Neuhaus - 7 hours 2 min ago

In the last post we had a look at how you can configure huge pages on RedHat and CentOS systems. For Debian and Debian-based systems the procedure is different, as Debian does not come with tuned. Let’s see how it works there.

Checking the basic system configuration works the same in Debian as in RedHat based distributions by checking the /proc/meminfo file:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So nothing is configured for huge pages in the default configuration. Using the same procedure as in the last post, this is how you calculate the required huge pages for the PostgreSQL instance:

postgres@debianpg:/home/postgres/ [PG1] head -1 $PGDATA/postmaster.pid
6661
postgres@debianpg:/home/postgres/ [PG1] grep ^VmPeak /proc/6661/status
VmPeak:	  393836 kB
postgres@debianpg:/home/postgres/ [PG1] grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
postgres@debianpg:/home/postgres/ [PG1] echo "393836/2048" | bc
192
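
If you prefer to do the whole calculation in one go, a little helper like this should work (a sketch; unlike the bc division above it rounds up instead of truncating):

pid=$(head -1 $PGDATA/postmaster.pid)
vmpeak=$(awk '/^VmPeak/ {print $2}' /proc/${pid}/status)
pagesize=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
# both values are in kB; round up to the next full huge page
echo $(( (vmpeak + pagesize - 1) / pagesize ))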

We’ll need at least 192 pages (a bit more, actually, as bc truncates the division). Let’s add that, with some headroom, to /etc/sysctl.conf:

postgres@debianpg:/home/postgres/ [PG1] sudo bash
root@debianpg:/home/postgres$ echo "vm.nr_hugepages=200" >> /etc/sysctl.conf

Notify the system about that change:

root@debianpg:/home/postgres$ sysctl -p
vm.nr_hugepages = 200

… and we have 200 huge pages available:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      200
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Again, let’s force PostgreSQL to use huge pages and restart the instance:

postgres@debianpg:/home/postgres/ [PG1] psql -c "alter system set huge_pages=on" postgres
ALTER SYSTEM
postgres@debianpg:/home/postgres/ [PG1] pg_ctl -D $PGDATA restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv6 address "::1", port 5432
2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-02-25 17:13:59.403 CET [6918] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-02-25 17:13:59.421 CET [6919] LOG:  database system was shut down at 2018-02-25 17:13:59 CET
2018-02-25 17:13:59.427 CET [6918] LOG:  database system is ready to accept connections
 done
server started

… and that’s it:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      193
HugePages_Rsvd:       64
HugePages_Surp:        0
Hugepagesize:       2048 kB

We can do the same test as in the last post to check that the number of huge pages in use increases when you put load on the system:

postgres=# create table t1 as select * from generate_series(1,1000000);
SELECT 1000000
postgres=# select count(*) from t1;
  count  
---------
 1000000
(1 row)

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      184
HugePages_Rsvd:       55
HugePages_Surp:        0
Hugepagesize:       2048 kB

Btw: this is on Debian 9 (not sure if it is the same for older versions):

postgres@debianpg:/home/postgres/ [PG1] cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
 

The article Configuring huge pages for your PostgreSQL instance, Debian version first appeared on Blog dbi services.

Configuring huge pages for your PostgreSQL instance, RedHat/CentOS version

Yann Neuhaus - 12 hours 54 min ago

Almost every PostgreSQL instance I get in touch with is not configured to use huge pages, which is quite a surprise, as they can give you a performance boost. Actually, it is not the PostgreSQL instance you need to configure: it is the operating system that has to provide the huge pages. PostgreSQL will use huge pages by default when they are configured and will fall back to normal pages otherwise. The parameter which controls that in PostgreSQL is huge_pages, which defaults to “try”, leading to the behavior just described: try to get them, otherwise use normal pages. Let’s see how you can do that on RedHat and CentOS. I’ll write another post about how you do that for Debian-based distributions shortly.

What you need to know is that RedHat as well as CentOS come with tuned profiles by default. This means kernel parameters and other settings are managed dynamically through profiles and no longer by adjusting /etc/sysctl.conf (although that works as well). When you are in a virtualized environment (VirtualBox in my case) you will probably see something like this:

postgres@pgbox:/home/postgres/ [PG10] tuned-adm active
Current active profile: virtual-guest
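
By the way, tuned-adm can also list all profiles that ship with your system, in case you want to see what else is available (output varies per installation):

postgres@pgbox:/home/postgres/ [PG10] tuned-adm list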

The virtual-guest profile is maybe not the best choice for a database server, as it comes with these settings (especially vm.dirty_ratio and vm.swappiness):

postgres@pgbox:/home/postgres/ [PG10] cat /usr/lib/tuned/virtual-guest/tuned.conf  | egrep -v "^$|^#"
[main]
summary=Optimize for running inside a virtual guest
include=throughput-performance
[sysctl]
vm.dirty_ratio = 30
vm.swappiness = 30

What we do at dbi services is provide our own profile, which adjusts the settings to values better suited for a database server.
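
Creating such a custom profile is just a matter of a new directory under /etc/tuned containing a tuned.conf file (a quick sketch; the profile name is our own choice):

postgres@pgbox:/home/postgres/ [PG10] sudo mkdir /etc/tuned/dbi-postgres
postgres@pgbox:/home/postgres/ [PG10] sudo vi /etc/tuned/dbi-postgres/tuned.conf

This is what ours looks like: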

postgres@pgbox:/home/postgres/ [PG10]  cat /etc/tuned/dbi-postgres/tuned.conf | egrep -v "^$|^#"
[main]
summary=dbi services tuned profile for PostgreSQL servers
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=>4096
[sysctl]
vm.overcommit_memory=2
vm.swappiness=0
vm.dirty_ratio=2
vm.dirty_background_ratio=1

What has all this to do with huge pages, you might wonder? Well, tuning profiles can also be used to configure them, and for us this is the preferred method because we can do it all in one file. But before we do that, let’s look at the PostgreSQL instance:

postgres=# select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 10.0 build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# show huge_pages;
 huge_pages 
------------
 try
(1 row)

As said at the beginning of this post the default behavior of PostgreSQL is to use them if available. The question now is: How can you check if you have huge pages configured on the operating system level? The answer is in the virtual /proc/meminfo file:

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:      6144 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

All “HugePages” statistics report zero, so this system definitely is not configured to provide huge pages to PostgreSQL. AnonHugePages is about Transparent Huge Pages, and the common recommendation is to disable those for database servers. So we have two tasks to complete:

  • Disable transparent huge pages
  • Configure the system to provide enough huge pages for our PostgreSQL instance

For disabling transparent huge pages we just need to add the following lines to our tuning profile:

postgres@pgbox:/home/postgres/ [PG10] echo "[vm]
> transparent_hugepages=never" | sudo tee -a /etc/tuned/dbi-postgres/tuned.conf

When transparent huge pages are enabled you can see that in the following file:

postgres@pgbox:/home/postgres/ [PG10] cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

Once we switch the profile to our own profile:

postgres@pgbox:/home/postgres/ [PG10] sudo tuned-adm profile dbi-postgres
postgres@pgbox:/home/postgres/ [PG10] sudo tuned-adm active
Current active profile: dbi-postgres

… you’ll notice that it is disabled from now on:

postgres@pgbox:/home/postgres/ [PG10] cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

Task one completed. To configure the operating system to provide huge pages for our PostgreSQL instance, we need to know how many huge pages we require. How do we do that? The procedure is documented in the PostgreSQL documentation: basically, you start your instance and then check how many pages it would require. In my case, to get the PID of the postmaster process:

postgres@pgbox:/home/postgres/ [PG10] head -1 $PGDATA/postmaster.pid
1640

To get the VmPeak for that process:

postgres@pgbox:/home/postgres/ [PG10] grep ^VmPeak /proc/1640/status
VmPeak:	  344340 kB

As the huge page size is 2MB on my system (which should be default for most systems):

postgres@pgbox:/home/postgres/ [PG10] grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB

… we will require at least 344340/2048 huge pages for this PostgreSQL instance:

postgres@pgbox:/home/postgres/ [PG10] echo "344340/2048" | bc
168

All we need to do is to add this to our tuning profile in the “[sysctl]” section:

postgres@pgbox:/home/postgres/ [PG10] grep nr_hugepages /etc/tuned/dbi-postgres/tuned.conf 
vm.nr_hugepages=170
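
Putting the pieces together, the complete profile now looks like this (assembled from the snippets shown above):

[main]
summary=dbi services tuned profile for PostgreSQL servers
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=>4096
[vm]
transparent_hugepages=never
[sysctl]
vm.overcommit_memory=2
vm.swappiness=0
vm.dirty_ratio=2
vm.dirty_background_ratio=1
vm.nr_hugepages=170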

Re-set the profile and we’re done:

postgres@pgbox:/home/postgres/ [PG10] sudo tuned-adm profile dbi-postgres
postgres@pgbox:/home/postgres/ [PG10] cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      170
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

This confirms that we now have 170 huge pages, all of which are free to consume. Now let’s configure PostgreSQL to start only when it can get the amount of huge pages it requires, by switching the “huge_pages” parameter to “on”, and restart the instance:

postgres@pgbox:/home/postgres/ [PG10] psql -c "alter system set huge_pages=on" postgres
ALTER SYSTEM
Time: 0.719 ms
postgres@pgbox:/home/postgres/ [PG10] pg_ctl -D $PGDATA restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2018-02-25 11:21:29.107 CET - 1 - 3170 -  - @ LOG:  listening on IPv4 address "0.0.0.0", port 5441
2018-02-25 11:21:29.107 CET - 2 - 3170 -  - @ LOG:  listening on IPv6 address "::", port 5441
2018-02-25 11:21:29.110 CET - 3 - 3170 -  - @ LOG:  listening on Unix socket "/tmp/.s.PGSQL.5441"
2018-02-25 11:21:29.118 CET - 4 - 3170 -  - @ LOG:  redirecting log output to logging collector process
2018-02-25 11:21:29.118 CET - 5 - 3170 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started

As the instance started all should be fine and we can confirm that by looking at the statistics in /proc/meminfo:

postgres@pgbox:/home/postgres/ [PG10] cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      162
HugePages_Rsvd:       64
HugePages_Surp:        0
Hugepagesize:       2048 kB

You might be surprised that not all (actually only 8) huge pages are used right now but this will change as soon as you put some load on the system:

postgres=# create table t1 as select * from generate_series(1,1000000);
SELECT 1000000
postgres=# select count(*) from t1;
  count  
---------
 1000000
(1 row)

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      153
HugePages_Rsvd:       55
HugePages_Surp:        0
Hugepagesize:       2048 kB
postgres=# 

Hope this helps …

 

The article Configuring huge pages for your PostgreSQL instance, RedHat/CentOS version first appeared on Blog dbi services.

Microservices with Oracle ADF Remote Regions

Andrejus Baranovski - Sat, 2018-02-24 13:50
ADF remote regions are a functionality available in the latest ADF 12c versions. An ADF remote region runs on a different server, and its content is delivered to the consuming module through the ADF remote region servlet. This allows you to decouple a large enterprise system into separate modules, each running independently on its own server. The gained advantage: the system becomes more scalable and reliable; even if several modules are down, the system continues to be functional.

The concept of ADF remote regions closely reminds me of the microservices architecture. Microservices, also known as the microservice architecture, is an architectural style that structures an application as a collection of loosely coupled services which implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack (as described here).

Let's see how ADF remote regions are configured and consumed. The sample application (available for download from a GitHub repository) is based on Employees and Jobs modules. Each module is deployed on a different server, and the Employees module is consumed in Jobs. The microservice here is the Employees table: it comes from a loosely coupled service and is consumed within the Jobs module:


The Employees module runs on an ADF server in a Docker container (read more: Essential WebLogic Tuning to Run on Docker and Avoid OOM):


I will stop this module:


The Jobs module still works even though the Employees module is not available anymore (it is stopped). If both modules were deployed as a single application, the whole system would be unavailable when that application goes down. But now users can still access part of the functionality:


Let's dive into the technical part. An ADF remote region is consumed the same way as a regular ADF region: you still use the ADF region tag to define the region:


The key difference is in the region bindings: there is one extra property called Remote Connection, which defines the source from where the remote region content is transferred. All other properties are the same, and we can pass parameters too:


The consuming module must define an ADF remote region connection. If the connection details are correct, you should see the remotely accessible ADF task flows listed:


In the remote region connection wizard, you should use the module context root and point to the ADF remote region servlets rtfquery and rr:


This wizard can be opened by right-clicking on the Connections folder and going to the New Connections section:


The consumer module should be enabled with remote region consumer support:


The producer module should be enabled with remote region producer support:


The producer module is configured with the required servlets automatically, as soon as you enable remote region producer support:


Read more about ADF remote regions in the ADF developer guide: 26.13 Creating Remote Regions in a Fusion Web Application.

Oracle Linux 7 UEK5 (Linux kernel 4.14) sneak preview

Wim Coekaerts - Sat, 2018-02-24 12:36

We just published an initial preview version of our next kernel-uek. This is based on upstream Linux 4.14 (latest stable 4.14.x). UEK4 is/was based on a 4.1 upstream Linux kernel.

If you want to try it out, you can just add the yum repo below on your Oracle Linux 7-based system. If you don't have a quick OL7 environment, remember you can sign up for a free account on Oracle Cloud and quickly create an Oracle Linux 7 instance and do exactly the same.

There will be very regular updates of this preview kernel going forward so you can remain up to date with our development efforts. The source code is there as well and we are going to push the git repos onto github/oracle soon(ish).

All you have to do is add the following to your /etc/yum.repos.d/public-yum-ol7.repo file.

[ol7_developer_UEKR5]
name=Oracle Linux $releasever UEK5 Development Packages ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL7/developer_UEKR5/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

and then upgrade your kernel

# yum upgrade kernel-uek

reboot and you are all set.
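
After the reboot, you can quickly verify which kernel you are running (the exact version string will vary as this preview gets updated):

# uname -r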

If you want the latest dtrace along with it, it's in the same repo; you can just do:

# yum install dtrace-utils

Do a dtrace -l and you can see there are over 5000 probes now!

 

count the number of execution for a specific SQL

Tom Kyte - Fri, 2018-02-23 22:46
Hello, I need a query to count the number of executions and the CPU time for those executions for a specific SQL_ID in the last hour. background: we have a simple query which executed very frequently by our customers and it takes 0.3 sec (usua...
Categories: DBA Blogs

Redo log archived but not applied in DR.

Tom Kyte - Fri, 2018-02-23 22:46
Greetings, I have configured DG with maximum availability with one primary and one standby. It was working okay and real time redo transport was applying fine. Now, the problem is :: Archived log is being transmitted to Standby but is not...
Categories: DBA Blogs

how to handle errors in oracle parallel function

Tom Kyte - Fri, 2018-02-23 22:46
I have parallel function like which takes cursor as input and creates flat file using UTL_FILE <code> CREATE OR REPLACE FUNCTION generate_file (p_input IN SYS_REFCURSOR) RETURN dump_ntt -- this just a nested table PIPELINED PARALLEL...
Categories: DBA Blogs

Can i give different permission SELECT/INSERT to same Schema for different application in one database?

Tom Kyte - Fri, 2018-02-23 22:46
Hi sir, I am DBA and now i stuck in one situation, I have one database and 3 different application, data will be come from one application and I need to store those data in one schema. Now, Scenario is 2 application can have read only access ...
Categories: DBA Blogs

database performance select count

Tom Kyte - Fri, 2018-02-23 22:46
Hi Team , i have query mentioned below : T1(id number , name varchar2 (20)); Column ID is full of null in table T1. select count(*) from table t1 where id is not null and name like :b1; This query is going on FTS , so we rewritten i...
Categories: DBA Blogs

database performance latch free

Tom Kyte - Fri, 2018-02-23 22:46
Hi Team, when latch contention is seen in database , how to identify type of latch and how to proceed for further troubleshooting? Additionally , if we want find latch type from ash , how to figure it from ASH , i see P1,P@ value are provided b...
Categories: DBA Blogs

PLSQL_OPTIMIZE_LEVEL 3 produces wrong function result

Tom Kyte - Fri, 2018-02-23 22:46
We looked into tuning our PL/SQL codebase and tested with all our packages compiled with PLSQL_OPTIMIZE_LEVEL 3. At first glance everything worked, but then our code started to act weird. After some investigation we found out, that PLSQL_OPTIMIZE_...
Categories: DBA Blogs

Follow up from RMOUG Training Days talks

Bobby Durrett's DBA Blog - Fri, 2018-02-23 15:34

I have uploaded a zip of my two RMOUG Training Days talks here: zip

During the Toastmasters talk there was a question about finding a club in your area.

Here is how to find a Toastmasters club near you. Go to www.toastmasters.org and click on the FIND A CLUB button. Enter your address or zip and you get a map with club locations like this:

Click on the club to see details of when it meets and whether they are open to new members.

You can call the phone number and you can visit their website if they have one.

Toastmasters provides clubs with hosting for sites like this so it is easy to find out where they meet and to ask questions. In a public club like the one in this example you can just show up at a meeting and they will welcome you. A corporate club will often be just for employees so unless you work for the company that hosts the club you probably will have to look elsewhere. But there are a ton of clubs and most would love to have new members.

At the Python presentation people wanted to look at my PythonDBAGraphs source code. It is on GitHub here: PythonDBAGraphs. This is my largest example of Python code that an Oracle DBA might write. I think that the Oracle community may find PythonDBAGraphs more useful as an example than as a practical application. I use it every day for my work so it has value but it is more command line and less finished than a product that you would buy. Still, it shows how to use Python with Oracle databases and how to graph things using Python. And, it is a fairly simple yet still useful example of Python code.

Anyway, I wanted to publish the PowerPoint files and give these two follow ups based on the questions that I got after each talk.

Bobby

Categories: DBA Blogs

ODA Lite: What is this ‘odacli’ repository?

Yann Neuhaus - Fri, 2018-02-23 15:00

When ODA Lite was introduced, with ODA X6-2 S/M/L, and now with ODA X7-2 S/M, a new ‘odacli’ was there to manage it. It will probably replace oakcli for ODA HA as well in the future. One big difference: it uses a repository to record the configuration and the operations. I don’t really like it, because when something fails you are blocked. Oracle Support can modify the repository, but they ask for access to the machine for that, and this is not easy in secured environments. Anyway, I really don’t understand why another repository has been introduced. We already have the Oracle Inventory, the Grid Infrastructure resources, the Linux /etc files,… And now we have a closed repository which controls everything, accessible only with the very limited odacli commands, which are not the best example of automation code and error handling.

This post is about viewing what is inside. You may also want to update it in case you have a problem. I can’t tell you not to do it: this blog has readers who fixed critical issues by editing the binary data in system files, so changing some metadata in an embedded SQL database is not so dangerous. On the other hand, you risk messing everything up. So better contact Oracle Support if you are not 142% sure about what you do. But when support is slow to answer, asks for remote access, or has no other solution than to re-image the ODA, you may have to find other alternatives. Just limit yourself to what you know you can do without risk.

So, this repository is stored in an embedded JavaDB, which is, as far as I understand it, an Apache Derby database recolored in red by Oracle. There’s a JDBC driver to access it.

You find the repository on your ODA in the following directory:
/opt/oracle/dcs/repo

You will probably copy the directory elsewhere to look at it, and you may want to do that with the DCS agent stopped.

SQuirreL SQL Client

I used SQuirreL SQL Client to read this database:

Download SQuirreL SQL Client: squirrel-sql-snapshot-20180206_2232-standard.jar from https://sourceforge.net/projects/squirrel-sql/files/latest/download and install it.

Download derby.jar from https://db.apache.org/derby/derby_downloads.html

Run SQuirreL SQL Client, and add the derby.jar.

Connect to it. If you still have the repo at the original place, the URL is jdbc:derby:/opt/oracle/dcs/repo/node_0. There is no user and no password.
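
If you prefer a command line over a GUI, Derby's own ij tool should work as well (a sketch; derbytools.jar comes from the same Apache Derby download):

java -cp derby.jar:derbytools.jar org.apache.derby.tools.ij
ij> connect 'jdbc:derby:/opt/oracle/dcs/repo/node_0';
ij> show tables in APP;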

Then, in the ‘APP’ catalog, you can browse the tables:


SchemaSpy

You probably want to see the data model for those few tables. I did it on a 12.1.2.11.0 repository. I used SchemaSpy (http://schemaspy.org/) which is awesome because it uses the awesome Graphviz (https://www.graphviz.org/) for the visual representation. If you want to do the same, here is how I did it:


export PATH=$PATH:"/cygdrive/c/Program Files (x86)/Graphviz2.38/bin"
java -jar schemaspy*.jar -t derby -db ./repo/node_0 -dp ./derby.jar -o ODAviz -u "" -cat "APP"

Here are some of the schemas generated if you want to have a look at what is stored in the ODA repository: ODAviz.pub.zip

The schema is very simple. Only a few referential integrity constraints and very simple information.

One additional warning: modifications here are not supported by Oracle, and may even be forbidden, as the Capacity On Demand core count is also stored there.

 

The article ODA Lite: What is this ‘odacli’ repository? first appeared on Blog dbi services.

oci-utils (oracle cloud infrastructure) for Oracle Linux package

Wim Coekaerts - Fri, 2018-02-23 10:53

We recently added another little utilities RPM for Oracle Linux 7 to our collection:

oci-utils is an Oracle Linux RPM that contains a set of scripts to make managing an OCI instance easier, from within the instance.

The current version provides tools that help with managing block volumes (attach, remove, automatic discovery), secondary vnic configuration, a script to query the public IP of an instance, and a script that lets you query instance metadata key/value pairs without having to parse or read JSON.

# yum install oci-utils

Package content:

Binaries:

/usr/bin/oci-iscsi-config
/usr/bin/oci-metadata
/usr/bin/oci-network-config
/usr/bin/oci-public-ip

System service:

/etc/systemd/ocid.service
/usr/libexec/ocid

MAN pages:

oci-iscsi-config(1), oci-metadata(1), oci-network-config(1), oci-public-ip(1), ocid(8)
Ideally you start the ocid service: it will monitor for any changes in block devices or vnics being attached or removed. Today, when you add a block device, you have to run a number of iscsiadm commands to actually discover it and attach it to your instance. When ocid is running, it will probe on a regular basis to see if such devices have been created through the OCI web console, CLI or SDK, and will then automatically discover them for you.
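
To have this monitoring in place permanently, enabling and starting the service through systemd should be all that is needed (a quick sketch):

# systemctl enable ocid.service
# systemctl start ocid.service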

oci-iscsi-config is a simple wrapper around iscsiadm that provides you with a single command to list and attach/detach devices without having to know the iscsiadm command syntax.

ex:

# oci-iscsi-config -s
For full functionality of this utility the ocid service must be running
The administrator can start it using this command:
   sudo systemctl start ocid.service
ocid already running.
Currently attached iSCSI devices:

Target iqn.2015-02.oracle.boot:uefi
   Persistent portal: 169.254.0.2:3260
      Current portal: 169.254.0.2:3260
               State: running
     Attached device: sda
                Size: 46.6G
          Partitions:
                      Device  Size  Filesystem  Mountpoint
                      sda1    544M  vfat        /boot/efi
                      sda2    8G    swap        [SWAP]
                      sda3    38G   xfs         /

<attach a 50G block volume in the OCI webconsole>

# oci-iscsi-config -s
Currently attached iSCSI devices:

Target iqn.2015-12.com.oracleiaas:31b78e27-0c73-43ff-98b9-0ced1722a08c
   Persistent portal: 169.254.2.2:3260
      Current portal: 169.254.2.2:3260
               State: running
     Attached device: sdb
                Size: 50G
    File system type: Unknown
          Mountpoint: Not mounted

Target iqn.2015-02.oracle.boot:uefi
   Persistent portal: 169.254.0.2:3260
      Current portal: 169.254.0.2:3260
               State: running
     Attached device: sda
                Size: 46.6G
          Partitions:
                      Device  Size  Filesystem  Mountpoint
                      sda1    544M  vfat        /boot/efi
                      sda2    8G    swap        [SWAP]
                      sda3    38G   xfs         /

You can see /dev/sdb now show up after a few seconds, without having to run any commands.

oci-network-config is similar:

# oci-network-config -s
CONFIG ADDR     SPREFIX  SBITS VIRTRT   NS IND IFACE VLTAG VLAN STATE MAC               VNIC
-      10.0.0.2 10.0.0.0 24    10.0.0.1 -  0   ens3  -     -    UP    02:00:17:01:ed:6b ocid1.vnic.oc1.iad.abuwcljs4ik52qrq7itbb32rwajjqddt7utla64t47fkkq7tebw5gknt5csa

<add a secondary interface>

# oci-network-config -s
CONFIG ADDR     SPREFIX  SBITS VIRTRT   NS IND IFACE VLTAG VLAN STATE MAC               VNIC
-      10.0.0.2 10.0.0.0 24    10.0.0.1 -  0   ens3  -     -    UP    02:00:17:01:ed:6b ocid1.vnic.oc1.iad.abuwcljs4ik52qrq7itbb32rwajjqddt7utla64t47fkkq7tebw5gknt5csa
ADD    10.0.0.3 10.0.0.0 24    10.0.0.1 -  1   ens4  -     -    UP    02:00:17:01:eb:53 ocid1.vnic.oc1.iad.abuwcljsxek2mqaotafcohdmvghzrzx3jiiwq3zo45fh65dvlkpinndfjvma

oci-public-ip just contacts an internet-facing server to return the public IP of your instance:

# oci-public-ip
Public IP address: 129.213.44.98

oci-metadata lets you pretty-print the instance metadata and query for a given key:

# oci-metadata -g region
Instance details:
  Region: iad (Ashburn, VA, USA)

# oci-metadata -g state
Instance details:
  Instance state: Running

An updated version in the near future will also use the SDK (if installed, along with your pem key) to create a block device and attach it from within your instance, and/or to create and attach a secondary vnic automatically.

One roadmap item is the ability to use dynamic groups and principals to allow an instance with the right privileges to create block volumes or secondary vnics without a pem key.

Give it a try.

New SHA-2 functions showing up in PostgreSQL 11

Yann Neuhaus - Fri, 2018-02-23 10:03

A recent commit announced new SHA-2 functions coming up in PostgreSQL 11. Until now you could use the md5 function to generate hashes for test data or whatever you want. This commit adds more functions you can use for that. Let’s see how they work.

When you want to try what follows make sure you are on the development version of PostgreSQL. You can find a little howto here.

For generating test data in PostgreSQL I often use things like that:

postgres@pgbox:/home/postgres/ [PGDEV] psql
psql (11devel)
Type "help" for help.

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 as
  select a.*
       , md5(a::varchar)
    from generate_series (1,1000000) a;
postgres=# \i a.sql
psql:a.sql:1: NOTICE:  table "t1" does not exist, skipping
DROP TABLE
SELECT 1000000
postgres=# select * from t1 limit 5;
 a |               md5                
---+----------------------------------
 1 | c4ca4238a0b923820dcc509a6f75849b
 2 | c81e728d9d4c2f636f067f89cc14862c
 3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
 4 | a87ff679a2f3e71d9181a67b7542122c
 5 | e4da3b7fbbce2345d7772b0674a318d5
(5 rows)

Now we have more function to chose from:

postgres=# \df *sha*
                                       List of functions
   Schema   |               Name               | Result data type | Argument data types | Type 
------------+----------------------------------+------------------+---------------------+------
 pg_catalog | pg_advisory_lock_shared          | void             | bigint              | func
 pg_catalog | pg_advisory_lock_shared          | void             | integer, integer    | func
 pg_catalog | pg_advisory_unlock_shared        | boolean          | bigint              | func
 pg_catalog | pg_advisory_unlock_shared        | boolean          | integer, integer    | func
 pg_catalog | pg_advisory_xact_lock_shared     | void             | bigint              | func
 pg_catalog | pg_advisory_xact_lock_shared     | void             | integer, integer    | func
 pg_catalog | pg_relation_is_publishable       | boolean          | regclass            | func
 pg_catalog | pg_stat_reset_shared             | void             | text                | func
 pg_catalog | pg_try_advisory_lock_shared      | boolean          | bigint              | func
 pg_catalog | pg_try_advisory_lock_shared      | boolean          | integer, integer    | func
 pg_catalog | pg_try_advisory_xact_lock_shared | boolean          | bigint              | func
 pg_catalog | pg_try_advisory_xact_lock_shared | boolean          | integer, integer    | func
 pg_catalog | sha224                           | bytea            | bytea               | func
 pg_catalog | sha256                           | bytea            | bytea               | func
 pg_catalog | sha384                           | bytea            | bytea               | func
 pg_catalog | sha512                           | bytea            | bytea               | func

Using the same test script as before but with the sha224 function:

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 as
  select a.*
       , sha224(a::text::bytea)
    from generate_series (1,1000000) a;
postgres=# \i a.sql
DROP TABLE
SELECT 1000000
postgres=# select * from t1 limit 5;
 a |                           sha224                           
---+------------------------------------------------------------
 1 | \xe25388fde8290dc286a6164fa2d97e551b53498dcbf7bc378eb1f178
 2 | \x58b2aaa0bfae7acc021b3260e941117b529b2e69de878fd7d45c61a9
 3 | \x4cfc3a1811fe40afa401b25ef7fa0379f1f7c1930a04f8755d678474
 4 | \x271f93f45e9b4067327ed5c8cd30a034730aaace4382803c3e1d6c2f
 5 | \xb51d18b551043c1f145f22dbde6f8531faeaf68c54ed9dd79ce24d17
(5 rows)

You can use the other functions in the same way, of course.
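
As the new functions return bytea rather than text, you can combine them with encode() to get the familiar hex string representation that md5() produces (a quick sketch; the digest shown is the well-known SHA-256 test vector for 'abc'):

postgres=# select encode(sha256('abc'::bytea), 'hex');
                              encode                              
------------------------------------------------------------------
 ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
(1 row)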

 

The article New SHA-2 functions showing up in PostgreSQL 11 first appeared on Blog dbi services.

How to prevent RMAN archivelog backup from log switch

Yann Neuhaus - Fri, 2018-02-23 06:18

When RMAN backs up archivelogs, it does a log switch before backing up all archivelogs. In some cases this log switch should be suppressed. This can be done with the expression “until time ‘sysdate'”, which actually filters nothing.

For example:

backup archivelog until time 'sysdate';
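
For comparison, a quick sketch of both variants (the first triggers the implicit log switch, the second does not):

RMAN> backup archivelog all;
RMAN> backup archivelog until time 'sysdate';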
 

The article How to prevent RMAN archivelog backup from log switch first appeared on Blog dbi services.
