Yann Neuhaus

dbi services technical blog

Documentum CS 7.* – 777 permission on jobs log

Mon, 2019-01-14 14:32

A few weeks ago at a customer, our team was involved in a security control.
We tracked files with 777 permissions and detected that the logs generated by Documentum jobs have 777 permissions.

Security before anything else, that’s why this topic was my top priority!

First of all, I checked the logs on some Content Servers, and I had the same issue everywhere.

[dmadmin@vmCS1 sysadmin]$ cd $DOCUMENTUM/dba/log/Repo1/sysadmin
[dmadmin@vmCS1 sysadmin]$ ls -rtl
total 192
-rwxrwxrwx. 1 dmadmin dmadmin   1561 Oct 25 10:12 DataDictionaryPublisherDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   5172 Oct 28 08:02 DMCleanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   6701 Oct 28 08:17 DMFilescanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin  14546 Nov  2 00:01 ConsistencyCheckerDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   2969 Nov  2 00:09 ContentWarningDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin    596 Nov  2 00:12 DBWarningDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin 102765 Nov  2 00:17 FileReportDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   3830 Nov  2 00:25 LogPurgeDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin    527 Nov  2 00:28 QueueMgtDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin  15932 Nov  2 00:31 StateOfDocbaseDoc.txt

I verified the umask at operating system level:

[dmadmin@vmCS1 ~]$ umask
0027

The umask has the expected value!
For more information regarding the umask, see https://en.wikipedia.org/wiki/Umask
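
As a quick reminder of what this value means in practice (new files are created with 640 and new directories with 750):

umask 0027
touch /tmp/umask_demo_file && mkdir /tmp/umask_demo_dir
ls -ld /tmp/umask_demo_file /tmp/umask_demo_dir
# expected: -rw-r----- for the file and drwxr-x--- for the directory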

Check if a different value of umask is set in the server.ini file ([SERVER_STARTUP] section):

[dmadmin@vmCS1 ~]$ cd $DOCUMENTUM/dba/config/Repo1
[dmadmin@vmCS1 ~]$ grep umask server.ini
[dmadmin@vmCS1 ~]$ 

No result.
If it had been set, the umask value in server.ini would override the one set at the operating system level.
This umask value is intended to control the permissions of the files associated with documents stored in the repository, and of their enclosing folders.
In my case, these files and folders have the correct permissions.

Well, why do only these logs have different permissions? I checked some servers again and saw that not all job logs have 777 permissions, which is strange:

[dmadmin@vmCS2 sysadmin]$ ls -rtl
total 108
-rwxrwxrwx. 1 dmadmin dmadmin   601  Oct 18 07:12 DMFilescanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   138  Oct 20 21:37 UpdateStatsDoc.txt
-rw-r-----. 1 dmadmin dmadmin   1832 Oct 24 13:45 FTCreateEventsDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   1251 Oct 25 11:55 DataDictionaryPublisherDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   442  Oct 28 07:12 DMCleanDoc.txt

In fact, the common point between the logs with 777 permissions is that they are generated by dmbasic methods. These logs are not controlled by the umask set at the operating system level or in server.ini.

The system umask value is overridden in the docbase start script, and set to 0. This value is then inherited by dmbasic methods!

[dmadmin@vmCS1 sysadmin]$ grep umask $DOCUMENTUM/dba/dm_start_Repo1
umask 0

I feel better now :D

So, to resolve this issue I had to:

  • Change the umask to 027 instead of 0 in the docbase start script
  • Stop the docbase
  • Change the permission of logs already generated
  • Start the docbase
  • Check the logs after a job execution

To make it quick and easy, you can use the steps below.
The commands take the High Availability case into account, so don't worry about that ;)

  1. To make the change on one docbase
    Define the docbase name

    		export DCTM_DOCBASE_NAME="DOCBASENAME"

    Check if it is a HA environment or not, and set the DCTM_DOCBASE_GLOBAL_NAME accordingly:

    		cd $DOCUMENTUM/dba
    		export DCTM_DOCBASE_SERVER_CONFIG=$(grep server_config_name config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3) ;
    		if [ ${DCTM_DOCBASE_SERVER_CONFIG} == ${DCTM_DOCBASE_NAME} ]
    		then
    			export DCTM_DOCBASE_GLOBAL_NAME=${DCTM_DOCBASE_NAME}
    		else
    			export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3) ;
    			export DCTM_DOCBASE_GLOBAL_NAME=${DCTM_DOCBASE_NAME}"_"${DCTM_DOCBASE_SERVICE_NAME}
    		fi

    Change the umask value in the start script

    		cp -p dm_start_${DCTM_DOCBASE_GLOBAL_NAME} dm_start_${DCTM_DOCBASE_GLOBAL_NAME}_bck_$(date +%Y%m%d-%H%M%S)
    		echo "Docbase ${DCTM_DOCBASE_NAME} : Start script has been saved"
    		sed -i 's,umask 0,umask 027,' dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    		echo "Docbase ${DCTM_DOCBASE_NAME} : Umask changed"

    Stop the docbase using the following command:

    		./dm_shutdown_${DCTM_DOCBASE_GLOBAL_NAME}

    Check if the docbase has been stopped:

    		ps -ef | grep ${DCTM_DOCBASE_NAME}

    Change the permission of existing files:

    		DCTM_DOCBASE_ID_DEC=$(grep docbase_id config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3)
    		DCTM_DOCBASE_ID_HEX=$(printf "%x\n" $DCTM_DOCBASE_ID_DEC)
    		chmod 640 log/*${DCTM_DOCBASE_ID_HEX}/sysadmin/*

    Start the docbase using the following command:

    		./dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
  2. To make the change on all docbases
    Check if it is a HA environment or not (check done on one docbase only), and set the DCTM_DOCBASE_GLOBAL_NAME accordingly, then change the umask value in the start script.

    		cd $DOCUMENTUM/dba
    		export FIRST_DOCBASE_NAME=$(ls config | head -1)
    		export DCTM_DOCBASE_SERVER_CONFIG=$(grep server_config_name config/${FIRST_DOCBASE_NAME}/server.ini | cut -d \  -f 3)
    		if [ ${FIRST_DOCBASE_NAME} == ${DCTM_DOCBASE_SERVER_CONFIG} ]
    		then
    			export HA_ENV="NO"
    		else
    			export HA_ENV="YES"
    		fi
    		
    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			cp -p dm_start_${DCTM_DOCBASE_GLOBAL_NAME} dm_start_${DCTM_DOCBASE_GLOBAL_NAME}_bck_$(date +%Y%m%d-%H%M%S)
    			echo "Docbase ${i} : Start script has been saved"
    			sed -i 's,umask 0,umask 027,' dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "Docbase ${i} : Umask changed"
    		done

    Stop the docbases using the following command:

    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			echo "Stopping docbase ${i}"
    			./dm_shutdown_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "The docbase ${i} has been stopped"
    		done

    Check that all docbases are stopped

    		ps -ef | grep dmadmin

    Change permission on log files

    		chmod 640 log/*/sysadmin/*

    Start the docbases using the following commands:

    
    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			echo "Starting docbase ${i}" 
    			./dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "The docbase ${i} has been started" 
    		done

    Check that all docbases are started

    		ps -ef | grep dmadmin
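
Finally, once the docbases are back up and a job has run, a quick check confirms that the newly generated logs get the expected permissions:

ls -l $DOCUMENTUM/dba/log/*/sysadmin/
# newly generated job logs should now show -rw-r----- (640)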

I was able to sleep peacefully this night ;) and you know now how to resolve this security issue.


Copy or Migrate a SSISDB environment

Fri, 2019-01-11 10:42

The Integration Services catalog is easy to work with, and the environments feature is very convenient as well.

In SQL Server Management Studio, there are a lot of features available to administer your SQL Server instances, but some that would be very useful are missing. You have the possibility to move an SSISDB environment, but only to another folder of your SSIS catalog. But what if you want to duplicate your environment or move it to another server?

With the steps below, you will be able to generate a script that recreates your environment, either on the same server (changing the folder and/or the environment name) or on another server.

  1. Run the script below on the server where you initially created your SSIS catalog environment

 

DECLARE @FolderName nvarchar(128) = N'SSISDemo'
DECLARE @EnvName nvarchar(128)= 'Env_SSISDemo1'
SELECT 
'DECLARE @FolderName nvarchar(128) = N''' + @FolderName + '''
DECLARE @EnvName nvarchar(128)= ''' + @EnvName + '''
EXEC [SSISDB].[catalog].[create_environment] @folder_name=@FolderName, @environment_name=@EnvName, @environment_description=N''' + e.description + '''' 
as tsql_EnvCopy 
FROM SSISDB.catalog.folders f
INNER JOIN SSISDB.catalog.environments e on e.folder_id = f.folder_id
WHERE f.name = @FolderName
AND e.name = @EnvName
UNION ALL 
SELECT 
'EXEC [SSISDB].[catalog].[create_environment_variable] 
@folder_name=@FolderName, 
@environment_name=@EnvName, 
@variable_name=N'''+ ev.name + ''', 
@data_type=N'''+ ev.type + ''', 
@sensitive='+ CONVERT(NCHAR,ev.sensitive) +', 
@value = ' + 
CASE ev.sensitive
WHEN 0 THEN 
CASE ev.type 
WHEN 'Date Time' THEN ''''+ CONVERT(NVARCHAR(max),ev.value) + '''' 
WHEN 'String' THEN 'N'''+ CONVERT(NVARCHAR(max),ev.value) + '''' 
ELSE CONVERT(NVARCHAR(max),ev.value)
END 
WHEN 1 THEN 
'##########'
END + ',
@description=N'''+ ev.description + ''''
as tsql_EnvVarcreate
FROM SSISDB.catalog.folders f
INNER JOIN SSISDB.catalog.environments e on e.folder_id = f.folder_id
INNER JOIN SSISDB.catalog.environment_variables ev on ev.environment_id = e.environment_id
WHERE f.name = @FolderName
AND e.name = @EnvName
  2. Copy the result of the script (column tsql_EnvCopy) and paste it into an SSMS query window connected to the server where you want to deploy your environment.
  3. Adapt the script if needed.
  • On the server where you want to create the new environment, be sure that:
    • The folder with the defined FolderName exists.
    • The FolderName/EnvName combination does not already exist on the server.
    • Change the values of @FolderName and/or @EnvName if needed.
  • Change the values of the sensitive parameters:

As you can imagine, the parameter values cannot be retrieved when they are set as sensitive. If you have such parameters, replace the ########## values with the real ones before running the script.

  4. Run the script and you're done.
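
If you prefer the command line, here is a sketch of that last step with sqlcmd (server, instance and file names are placeholders):

sqlcmd -S TARGETSERVER\INSTANCE01 -d SSISDB -E -i create_env_SSISDemo1.sql
sqlcmd -S TARGETSERVER\INSTANCE01 -E -Q "SELECT e.name AS environment, ev.name AS variable FROM SSISDB.catalog.environments e JOIN SSISDB.catalog.environment_variables ev ON ev.environment_id = e.environment_id"
# the second command simply lists the environments and variables to verify the result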


Compile additional packages for Oracle VM Server

Fri, 2019-01-11 03:13

I needed a special package on my OVM Server 3.4.6.
The package is called fio and is needed to do some I/O performance tests.
Unfortunately, OVM Server does not provide any packages for compiling software, and installing additional software on your OVM Server is also not supported.
But there is a solution:

Install a VM with Oracle VM Server 3.4.6 and add the official OVM SDK repositories:


rm -f /etc/yum.repos.d/*
echo '
[ovm34]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/34_latest/x86_64/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
[ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
[ol6_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux $releasever ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1 ' > /etc/yum.repos.d/ovm-sdk.repo

Now install the necessary packages and compile your software:

On OVM 3.4 SDK VM
yum install -y gcc make zlib-devel libaio libaio-devel
wget https://codeload.github.com/axboe/fio/zip/master
unzip master
cd fio-master
./configure
make

Copy the compiled executable “fio” to your OVM Server or to an attached NFS share.
Run the program and do whatever you need to do.
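
A minimal sketch of those two steps (the host name, target directory and test parameters are just examples):

scp fio root@ovmserver01:/usr/local/bin/
# then, on the OVM Server itself, e.g. a 60 second 4k random read test:
fio --name=randread --directory=/OVS --rw=randread --bs=4k --size=1G \
    --numjobs=4 --time_based --runtime=60 --ioengine=libaio --direct=1 --group_reporting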

In my case I will run several different performance tests, but that is a story for another blog post.

Reference: Oracle VM 3: How-to build an Oracle VM 3.3/3.4 SDK platform (Doc ID 2160955.1)


PostgreSQL 12, pg_stat_statements_reset for userid, queryid and dbid

Fri, 2019-01-11 00:29

PostgreSQL 12 will give you more control over resetting the statistics gathered by pg_stat_statements. When you check the documentation for PostgreSQL 11 (as linked in the previous sentence) you will see that the function has the following signature:

pg_stat_statements_reset() returns void

This means your only choice is to reset all the statistics. Today this commit landed, and it will give you more control over which statistics to reset. The signature of the function now looks like this:

pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint) returns void

There are three new parameters controlling what to reset: the user id, the database id and the id of a specific query. By default all of them are 0, meaning the function behaves as in previous versions: discarding all the statistics. Let's create two users, two databases and a table in each, so we have something in pg_stat_statements to work with:

postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create user u1 with login password 'u1'" postgres
CREATE ROLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create user u2 with login password 'u2'" postgres
CREATE ROLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create database db1 with owner = u1" postgres
CREATE DATABASE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create database db2 with owner = u2" postgres
CREATE DATABASE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create table t1 (a int)" -U u1 db1
CREATE TABLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create table t1 (a int)" -U u2 db2
CREATE TABLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "insert into t1 select * from generate_series(1,100)" -U u1 db1
INSERT 0 100
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "insert into t1 select * from generate_series(1,100)" -U u2 db2
INSERT 0 100
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "select count(*) from t1" -U u1 db1
 count 
-------
   100
(1 row)
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "select count(*) from t1" -U u2 db2
 count 
-------
   100
(1 row)

We should be able to see the statements in pg_stat_statements, but before doing that let's check the dbids:

postgres@pgbox:/home/postgres/ [PGDEV] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  16394            db1  pg_default
  16395            db2  pg_default
  13569       postgres  pg_default
  13568      template0  pg_default
      1      template1  pg_default

What do we see for our two databases?

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395);
 userid | dbid  |       queryid        | calls |                        query                        
--------+-------+----------------------+-------+-----------------------------------------------------
  16392 | 16394 |  7490503619681577402 |     3 | set client_encoding to 'unicode'
  16393 | 16395 |   843119317166481275 |     1 | insert into t1 select * from generate_series($1,$2)
  16392 | 16394 | -3672942776844552312 |     1 | insert into t1 select * from generate_series($1,$2)
  16393 | 16395 |  7490503619681577402 |     3 | set client_encoding to 'unicode'
  16392 | 16394 |  5583984467630386743 |     1 | select count(*) from t1
  16393 | 16395 |  4983979802666994390 |     1 | select count(*) from t1
  16393 | 16395 |  6842879890091936614 |     1 | create table t1 (a int)
  16392 | 16394 |  6842879890091936614 |     1 | create table t1 (a int)

We should be able to reset the statistics for a specific query:

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395) and queryid = 6842879890091936614;
 userid | dbid  |       queryid       | calls |          query          
--------+-------+---------------------+-------+-------------------------
  16393 | 16395 | 6842879890091936614 |     1 | create table t1 (a int)
  16392 | 16394 | 6842879890091936614 |     1 | create table t1 (a int)
(2 rows)
postgres=# select pg_stat_statements_reset(0, 0, 6842879890091936614);
 pg_stat_statements_reset 
--------------------------
 
(1 row)

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395) and queryid = 6842879890091936614;
 userid | dbid | queryid | calls | query 
--------+------+---------+-------+-------
(0 rows)

Notice that this of course resets the statistics for both statements as they have the same queryid. You could specify the userid and/or dbid as well to reset just one of them. Nice new feature.
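
In the same way, to discard only the statistics of user u1 (userid 16392) in database db1 (dbid 16394) while keeping everything else, the queryid can be left at 0:

psql -c "select pg_stat_statements_reset(16392, 16394, 0)" postgres
psql -c "select count(*) from pg_stat_statements where userid = 16392 and dbid = 16394" postgres
# the second statement should now return 0 for this user/database combination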


Documentum 7+ internal error during installation or upgrade DBTestResult7092863812136784595.tmp

Thu, 2019-01-10 02:35

This blog will go straight to the topic. When upgrading/installing your content server to 7+, you may experience an internal error with a popup telling you to look into a file called something like: DBTestResult7092863812136784595.tmp

In fact, the installation process failed to test the database connection, even though it managed to find your schema previously. In the file you'll find something like:

 Last SQL statement executed by DB was:

#0  0x00000033b440f33e in waitpid () from /lib64/libpthread.so.0
#1  0x00000000004835db in dmExceptionManager::WalkStack(dmException*, int, siginfo*, void*) ()
#2  0x0000000000483998 in dmExceptionHandlerProc ()
#3  <signal handler called>
#4  0x00007f3d8c0e7d85 in ber_flush2 () from /dctm/product/7.3/bin/liblber-2.4.so.2
#5  0x00007f3d8bebb00b in ldap_int_flush_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#6  0x00007f3d8bebb808 in ldap_send_server_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#7  0x00007f3d8bebbb30 in ldap_send_initial_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#8  0x00007f3d8beab828 in ldap_search () from /dctm/product/7.3/bin/libldap-2.4.so.2
#9  0x00007f3d8beab952 in ldap_search_st () from /dctm/product/7.3/bin/libldap-2.4.so.2
#10 0x00007f3d898f93b2 in nnflqbf () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#11 0x00007f3d898ef124 in nnflrne1 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#12 0x00007f3d898fe5b6 in nnfln2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#13 0x00007f3d886cffc0 in nnfgrne () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#14 0x00007f3d887f4274 in nlolgobj () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#15 0x00007f3d886ce43f in nnfun2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#16 0x00007f3d886ce213 in nnfsn2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#17 0x00007f3d8875f7f1 in niqname () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#18 0x00007f3d88612d06 in kpplcSetServerType () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#19 0x00007f3d8861387b in kpuatch () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#20 0x00007f3d893e9dc1 in kpulon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#21 0x00007f3d892e15f2 in OCILogon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#22 0x0000000000555232 in DBConnection::Connect(DBString const*, DBString const*, DBString const*) ()
#23 0x00000000005555e4 in DBConnection::DBConnection(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, dmListHead*, int, int volatile*) ()
#24 0x000000000055f6ff in DBDataBaseImp::DBDataBaseImp(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, DBDataBase*, dmListHead*, int, int volatile*) ()
#25 0x0000000000545aaf in DBDataBase::DBDataBase(DBStats*, DBString const&, DBString const&, DBString const&, DBString const&, dmListHead*, int, int volatile*) ()
#26 0x0000000000466bd8 in dmServer_Dbtest(int, char**) ()
#27 0x00000033b3c1ed1d in __libc_start_main () from /lib64/libc.so.6
#28 0x0000000000455209 in _start ()
Tue Jan  8 16:18:15 2019 Documentum Internal Error: Assertion failure at line: 1459 in file: dmexcept.cxx

Not so precise, right?

In fact, it's pretty simple. The installer failed to use your tnsnames.ora file because LDAP authentication is set with a higher priority. For those who don't know, the tnsnames.ora holds your database connection information. You won't be able to connect Documentum without it, as Documentum will try to locate it.

Sometimes, depending on how the DBA installed the Oracle client on the machine, LDAP identification may be set with a higher priority than the tnsnames identification. So you have two possibilities:

  • Edit sqlnet.ora to set TNSNAMES before LDAP.
  • Rename ldap.ora to something else so that the Oracle client doesn't find it and falls back to TNSNAMES. I recommend this approach because if the DBA patches the client, sqlnet.ora may be set back to LDAP priority.

For info, these files are located in $ORACLE_HOME/network/admin; by default they belong to the Oracle installation owner. So to edit the files you must be root or ask the DBAs to do it for you.
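
A sketch of both options (run as root or by the DBA; the exact path depends on the Oracle client installation):

cd $ORACLE_HOME/network/admin
# option 1: make sure TNSNAMES comes before LDAP in the resolution order, e.g.:
#   NAMES.DIRECTORY_PATH=(TNSNAMES, LDAP)
grep NAMES.DIRECTORY_PATH sqlnet.ora
# option 2 (my preference): rename ldap.ora so the client falls back to tnsnames.ora
mv ldap.ora ldap.ora.disabled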


How to build a dummy resource plan

Tue, 2019-01-08 01:54

Switching off the resource plan feature of the Oracle database is often desired.
Unfortunately, setting the parameter RESOURCE_MANAGER_PLAN to an empty string does not help, because it does not prevent switching to another resource plan, e.g. by dbms_scheduler.

The following procedure prevents this switching.

1. Creation of a dummy resource plan, e.g. TEST_PLAN:


BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(
plan =>'TEST_PLAN',
comment => 'plan for TEST',
mgmt_mth => 'EMPHASIS',
active_sess_pool_mth => 'ACTIVE_SESS_POOL_ABSOLUTE',
parallel_degree_limit_mth => 'PARALLEL_DEGREE_LIMIT_ABSOLUTE');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
plan =>'TEST_PLAN',
group_or_subplan => 'OTHER_GROUPS',
comment => 'dummy directive'
);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

The only limitation that remains is the number of available CPUs specified in the parameter CPU_COUNT; the resource manager enforces this limitation. The resource plan INTERNAL_PLAN is for internal use, so I would not use it.

2. Setting of parameter resource_manager_plan:


alter system set resource_manager_plan='FORCE:TEST_PLAN';

This setting prevents switching to another resource plan.
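
A quick check that the forced plan is really the active one (the parameter should show FORCE:TEST_PLAN and TEST_PLAN should be reported as the top plan):

sqlplus -s / as sysdba <<EOF
show parameter resource_manager_plan
select name, is_top_plan from v\$rsrc_plan;
EOF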

Caveat:

With this setting, deleting the specified resource plan (e.g. TEST_PLAN) causes the database to crash:


alter system set resource_manager_plan='FORCE:TEST_PLAN';
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_PLAN('TEST_PLAN');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
BEGIN
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 73473
Session ID: 24 Serial number: 17611


Using Ansible to bring up a three node Patroni cluster in minutes

Mon, 2019-01-07 01:48

Automation is key today; nobody wants to do the same tasks over and over again. Cloud without automation is not even possible. There are several tools around that help with automation, and one of the most popular is Ansible. We already have several posts about Ansible on our blog platform, but this one will be rather long. Setting up highly available PostgreSQL architectures is our daily business, and here as well we try to automate as much as possible. We do not only automate to save time; even more importantly, we automate to avoid human errors. What we will share with this post is how you could use Ansible to bring up a three node Patroni cluster from scratch.

Disclaimer: Please see what we show here as a kind of template. You might need to adjust several bits to fit your environment, and other bits can surely be solved more elegantly by using advanced features of Ansible. Anyway, using this template you should be able to bring up one PostgreSQL master instance, two replicas, Patroni and HAProxy in minutes on CentOS 7. This should work the same for Red Hat 7, but if you want to do the same on Debian-based systems or SUSE you will need to adjust some of the Ansible tasks. This post does not explain how Ansible works, nor does it explain what Patroni or HAProxy are.

The starting point is a CentOS minimal installation with just the postgres user and group created and sudo permissions for postgres. That’s it:

[postgres@patroni1 ~]$ id -a
uid=1000(postgres) gid=1000(postgres) groups=1000(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[postgres@patroni1 ~]$ lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.5.1804 (Core) 
Release:	7.5.1804
Codename:	Core
[postgres@patroni1 ~]$ sudo cat /etc/sudoers | grep postgres
postgres	ALL=(ALL)	NOPASSWD: ALL
[postgres@patroni1 ~]$ 

This is the Ansible directory structure on my workstation:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ tree
.
├── _commands.sh
├── _init_dirs.sh
├── patroni
└── roles
    ├── common
    │   ├── files
    │   │   └── PostgreSQL-DMK-17-09.4.zip
    │   ├── handlers
    │   ├── meta
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   │   └── compile.sh.j2
    │   └── vars
    └── patroni
        ├── files
        │   ├── etcd.service
        │   └── patroni.service
        ├── handlers
        ├── meta
        ├── site.retry
        ├── site.yml
        ├── tasks
        │   └── main.yml
        ├── templates
        │   ├── etcd.conf.j2
        │   ├── haproxy.cfg.j2
        │   ├── hosts.j2
        │   ├── keepalived.conf.j2
        │   └── patroni.yml.j2
        └── vars

You can use the _init_dirs.sh script to create that structure, but it is pretty much the Ansible default anyway:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat _init_dirs.sh 
#!/bin/bash
touch patroni
mkdir roles/
mkdir roles/common
mkdir roles/common/tasks
mkdir roles/common/handlers
mkdir roles/common/templates
mkdir roles/common/files
mkdir roles/common/vars
mkdir roles/common/meta
mkdir roles/patroni
mkdir roles/patroni/tasks
mkdir roles/patroni/handlers
mkdir roles/patroni/templates
mkdir roles/patroni/files
mkdir roles/patroni/vars
mkdir roles/patroni/meta

What you always need with Ansible is the inventory and in our case it looks like this:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat patroni 
[patroni-servers]
192.168.22.240 keepalived_role=MASTER keepalived_priority=102 ansible_hostname=patroni1 ansible_hostname_fqdn=patroni1.it.dbi-services.com
192.168.22.241 keepalived_role=SLAVE keepalived_priority=101 ansible_hostname=patroni2 ansible_hostname_fqdn=patroni2.it.dbi-services.com
192.168.22.242 keepalived_role=SLAVE keepalived_priority=100 ansible_hostname=patroni3 ansible_hostname_fqdn=patroni3.it.dbi-services.com

[patroni-servers:vars]
postgresql_version=11.1
postgresql_major_version=11
dmk_postgresql_version=11/db_1
etcd_vserion=3.3.10
postgres_user=postgres
postgres_group=postgres
dmk_version=17-09.4
cluster_name=PG1
blank=' '
virtual_ip=192.168.22.245

As you can see there are three machines and several variables defined. The *dmk* stuff is for our management kit; just ignore/delete that for your environment.
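
Before running anything against the hosts it is worth checking that Ansible can reach all three of them, e.g. with an ad-hoc ping (run from the directory that contains the inventory file):

ansible patroni-servers -i patroni -m ping -u postgres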

We have two roles, one common and one for Patroni. The common role is responsible for doing the common stuff and can be used for single instance PostgreSQL deployments as well, so let's start with this one:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/common/tasks/main.yml 
- name: Install all dependencies for PostgreSQL 
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - unzip
   - sysstat
   - xorg-x11-xauth
   - systemd-devel
   - bash-completion

- name: Remove iwl packages
  yum: name={{item}} state=removed
  with_items:
   - iwl*

- name: upgrade all packages
  yum:
    name: '*'
    state: latest

- file:
    path: /u01/app/{{ postgres_user }}/local
    state: directory
    mode: 0700
    recurse: yes
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u01
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    mode: 0700

- file:
    path: /u01/app
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- file:
    path: /u01/app/{{ postgres_user }}
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- file:
    path: /u02/pgdata/
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u02/pgdata/{{ postgresql_major_version }}
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u99/pgdata/
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /etc/pgtab
    state: touch
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: check if PostgreSQL source code exists
  stat: 
    path: /home/{{ postgres_user }}/source.tar.bz2
  register: source_available

- name: Download the PostgreSQL source code if it is not already there
  get_url:
    url: https://ftp.postgresql.org/pub/source/v{{ postgresql_version }}/postgresql-{{ postgresql_version }}.tar.bz2
    dest: /home/{{ postgres_user }}/source.tar.bz2
    mode: 0775
  when: source_available.stat.exists == false

- name: Check if PostgreSQL is already installed
  stat:
    path: /u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}/bin/postgres
  register: postrgresql_is_installed

- name: extract the sources when PostgreSQL is not already installed
  shell: cd /home/{{ postgres_user }}; tar -axf source.tar.bz2
  become: yes
  become_user: "{{ postgres_user }}"
  when: postrgresql_is_installed.stat.exists == false

- template:
    src: compile.sh.j2
    dest: /home/{{ postgres_user }}/postgresql-{{ postgresql_version }}/compile.sh
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- name: Install PostgreSQL from source code
  shell: cd /home/{{ postgres_user }}/postgresql-{{ postgresql_version }}; ./compile.sh
  become: yes
  become_user: "{{ postgres_user }}"
  when: postrgresql_is_installed.stat.exists == false

- name: check if DMK for PostgreSQL source code exists
  stat:
    path: /u01/app/{{ postgres_user }}/local/PostgreSQL-DMK-{{ dmk_version }}.zip
  register: dmk_source_available

- name: check if DMK for PostgreSQL is extracted
  stat:
    path: /u01/app/{{ postgres_user }}/local/dmk/bin/dmk.bash
  register: dmk_extracted

- name: Copy DMK source distribution
  copy:
    src: PostgreSQL-DMK-{{ dmk_version }}.zip
    dest: /u01/app/{{ postgres_user }}/local/PostgreSQL-DMK-{{ dmk_version }}.zip
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700
  when: dmk_source_available.stat.exists == false

- name: extract DMK
  shell: cd /u01/app/{{ postgres_user }}/local; unzip PostgreSQL-DMK-{{ dmk_version }}.zip
  become: yes
  become_user: "{{ postgres_user }}"
  when: dmk_extracted.stat.exists == false

- name: check if DMK is installed
  stat:
    path: /home/{{ postgres_user }}/.DMK_HOME
  register: dmk_installed

- lineinfile:
    path: /etc/pgtab
    line: 'pg{{ postgresql_version }}:/u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}:dummy:9999:D'
    create: no
  when: dmk_installed.stat.exists == false

- name: Execute DMK for the first time
  shell: /u01/app/{{ postgres_user }}/local/dmk/bin/dmk.bash; cat /u01/app/{{ postgres_user }}/local/dmk/templates/profile/dmk.postgres.profile >> /home/{{ postgres_user }}/.bash_profile
  become: yes
  become_user: "{{ postgres_user }}"
  when: dmk_installed.stat.exists == false

This should be more or less self-explanatory, so we will only summarize what it does:

  • Install required packages for compiling PostgreSQL from source
  • Remove the iwl* packages
  • Update all packages to the latest release
  • Create the directory structure
  • Download the PostgreSQL source code, compile and install
  • Install our DMK

As said, this role can be included in any other PostgreSQL setup as it only does basic stuff. There is one template used here, which is compile.sh.j2:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/common/templates/compile.sh.j2 
PGHOME=/u01/app/postgres/product/{{ dmk_postgresql_version }}
SEGSIZE=2
BLOCKSIZE=8

./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-systemd
make -j 2 all
make install
cd contrib
make -j 2 install

This one is our standard way of bringing PostgreSQL onto the system and the only parameter is the PostgreSQL version we use for the directory name. No magic, simple stuff and that’s it for the common role.

Coming to the Patroni role, here it is:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/tasks/main.yml 
---

- name: check if epel rpm already is there
  stat:
    path: /root/epel-release-latest-7.noarch.rpm
  register: epel_rpm_available

- name: Download the EPEL rpm
  get_url:
    url: http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    dest: /root/epel-release-latest-7.noarch.rpm
    mode: 0440
  when: epel_rpm_available.stat.exists == false

- name: check if epel repository is already installed
  stat:
    path: /etc/yum.repos.d/epel.repo
  register: epel_installed


- name: Install the EPEL rpm
  shell: yum localinstall -y /root/epel-release-latest-7.noarch.rpm
  args: 
    warn: false
  when: epel_installed.stat.exists == false

- name: Install all dependencies for Patroni
  yum: name={{item}} state=present
  with_items:
   - python-pip
   - PyYAML
   - bind-utils
   - keepalived
   - haproxy

# create the hosts file
- template:
    src: hosts.j2
    dest: /etc/hosts
    owner: root
    group: root
    mode: 0644

- name: Create the file to load the watchdog module
  file:
    path: /etc/modules-load.d/softdog.conf
    state: touch

- name: Add the watchdog module
  shell: modprobe softdog

- name: Change ownership of the watchdog device
  shell: chown postgres /dev/watchdog
  args:
    warn: false

- name: check if etcd sources already exist
  stat:
    path: /home/{{ postgres_user }}/etcd-v{{ etcd_vserion }}-linux-amd64.tar.gz
  register: etcd_source_available

- name: Download etcd
  get_url:
    url: https://github.com/etcd-io/etcd/releases/download/v{{ etcd_vserion }}/etcd-v{{ etcd_vserion }}-linux-amd64.tar.gz
    dest: /home/{{ postgres_user }}/etcd-v{{ etcd_vserion }}-linux-amd64.tar.gz
    mode: 0755
  when: etcd_source_available.stat.exists == false

- name: check if etcd is available in DMK
  stat:
    path: /u01/app/{{ postgres_user }}/local/dmk/bin/etcd
  register: etcd_copied_to_dmk

- name: extract etcd
  shell: cd /home/{{ postgres_user }}/; tar -axf etcd-v{{ etcd_vserion }}-linux-amd64.tar.gz
  become: yes
  become_user: "{{ postgres_user }}"
  when: etcd_copied_to_dmk.stat.exists == false

- name: copy etcd to DMK
  shell: cp /home/{{ postgres_user }}/etcd-v{{ etcd_vserion }}-linux-amd64/etcd* /u01/app/{{ postgres_user }}/local/dmk/bin/
  become: yes
  become_user: "{{ postgres_user }}"
  when: etcd_copied_to_dmk.stat.exists == false

- template:
    src: etcd.conf.j2
    dest: /u01/app/{{ postgres_user }}/local/dmk/etc/etcd.conf
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- name: Copy the etcd systemd service file
  copy:
    src: etcd.service
    dest: /etc/systemd/system/etcd.service
    owner: root
    group: root
    mode: 0755

- file:
    path: /u02/pgdata/etcd
    state: directory
    mode: 0700
    recurse: yes
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- name: force systemd to reread configs
  systemd:
    daemon_reload: yes

- name: Enable the systemd etcd service
  systemd:
    name: etcd
    enabled: yes

- name: Start the systemd etcd service
  shell: systemctl start etcd.service

- name: check if patroni is already installed
  stat:
    path: /home/{{ postgres_user }}/.local/bin/patroni
  register: patroni_is_installed

- name: install and upgrade pip
  shell: pip install --upgrade pip
  when: patroni_is_installed.stat.exists == false
 
- name: install and upgrade setuptools
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --upgrade --user setuptools
  when: patroni_is_installed.stat.exists == false

- name: install psycopg2-binary
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --user psycopg2-binary
  when: patroni_is_installed.stat.exists == false

- name: install patroni
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --user patroni[etcd]
  when: patroni_is_installed.stat.exists == false

- file:
    src: /home/{{ postgres_user }}/.local/bin/patroni
    dest: /u01/app/{{ postgres_user }}/local/dmk/bin/patroni
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    state: link

- file:
    src: /home/{{ postgres_user }}/.local/bin/patronictl
    dest: /u01/app/{{ postgres_user }}/local/dmk/bin/patronictl
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    state: link

- template:
    src: patroni.yml.j2
    dest: /u01/app/{{ postgres_user }}/local/dmk/etc/patroni.yml
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: Copy the patroni systemd service file
  copy:
    src: patroni.service
    dest: /etc/systemd/system/patroni.service
    owner: root
    group: root
    mode: 0755

- name: force systemd to reread configs 
  systemd:
    daemon_reload: yes

- name: Enable the systemd patroni service
  systemd:
    name: patroni
    enabled: yes

# add the instance to /etc/pgtab so DMK is aware of if
- lineinfile:
    path: /etc/pgtab
    line: '{{ cluster_name }}:/u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}:/u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}:5432:N'

- template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: Enable the systemd haproxy service
  systemd:
    name: haproxy
    enabled: yes

# we need to set this so haproxy can be started
- name: Set selinux context for ha proxy
  shell: setsebool -P haproxy_connect_any=1

- template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600
  with_items:
    - { role: "{{ hostvars[inventory_hostname].keepalived_role }}" , priority: "{{ hostvars[inventory_hostname].keepalived_priority }}" }

What it does:

  • Install the Extra Packages for Enterprise Linux (EPEL)
  • Install the dependencies for Patroni, HAProxy
  • Create the /etc/hosts file
  • Enable the watchdog service
  • Download and install etcd
  • Integrate etcd into systemd
  • Install Patroni, create the configuration files and integrate it into systemd
  • Install and configure HAProxy

This role uses several templates. The first one is used to create /etc/hosts:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/hosts.j2 
#jinja2: trim_blocks:False
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

{%- for h in ansible_play_hosts %}
{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }} {{ hostvars[h]['ansible_hostname'] }}
{% endfor %}

The second one is used to create the etcd configuration:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/etcd.conf.j2 
name: {{ ansible_hostname }}
data-dir: /u02/pgdata/etcd
initial-advertise-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2380
listen-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2380
listen-client-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2379,http://localhost:2379
advertise-client-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2379
initial-cluster:{{ blank }} {%- for h in ansible_play_hosts %}
{{ hostvars[h]['ansible_hostname'] }}=http://{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}:2380{% if not loop.last %},{% endif %}
{% endfor %}

The third one creates the Patroni configuration:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/patroni.yml.j2 
scope: {{ cluster_name }}
#namespace: /service/
name: {{ ansible_hostname }}

restapi:
  listen: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:8008
  connect_address: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  host: 127.0.0.1:2379

bootstrap:
  # this section will be written into Etcd:///config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: 'hot_standby'
        hot_standby: "on"
        wal_keep_segments: 8
        max_replication_slots: 10
        wal_log_hints: "on"
        listen_addresses: '*'
        port: 5432
        logging_collector: 'on'
        log_truncate_on_rotation: 'on'
        log_filename: 'postgresql-%a.log'
        log_rotation_age: '1440'
        log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
        log_directory: 'pg_log'
        log_min_messages: 'WARNING'
        log_autovacuum_min_duration: '60s'
        log_min_error_statement: 'NOTICE'
        log_min_duration_statement: '30s'
        log_checkpoints: 'on'
        log_statement: 'ddl'
        log_lock_waits: 'on'
        log_temp_files: '0'
        log_timezone: 'Europe/Zurich'
        log_connections: 'on'
        log_disconnections: 'on'
        log_duration: 'on'
        client_min_messages: 'WARNING'
        wal_level: 'replica'
        hot_standby_feedback: 'on'
        max_wal_senders: '10'
        shared_buffers: '128MB'
        work_mem: '8MB'
        effective_cache_size: '512MB'
        maintenance_work_mem: '64MB'
        wal_compression: 'off'
        max_wal_senders: '20'
        shared_preload_libraries: 'pg_stat_statements'
        autovacuum_max_workers: '6'
        autovacuum_vacuum_scale_factor: '0.1'
        autovacuum_vacuum_threshold: '50'
        archive_mode: 'on'
        archive_command: '/bin/true'
        wal_log_hints: 'on'
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication replicator 192.168.22.0/24 md5
  - host all all 192.168.22.0/24 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users users which needs to be created after initializing new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb
    replicator:
      password: postgres
      options:
        - superuser

postgresql:
  listen: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:5432
  connect_address: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:5432
  data_dir: /u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}/
  bin_dir: /u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}/bin
#  config_dir:
  pgpass: /u01/app/{{ postgres_user }}/local/dmk/etc/pgpass0
  authentication:
    replication:
      username: replicator
      password: postgres
    superuser:
      username: postgres
      password: postgres
  parameters:
    unix_socket_directories: '/tmp'

watchdog:
  mode: automatic # Allowed values: off, automatic, required
  device: /dev/watchdog
  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

Then we have the configuration template for HAProxy:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/haproxy.cfg.j2 
#jinja2: trim_blocks:False
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /
    # stats auth haproxy:haproxy
    # stats refresh 10s

listen {{ cluster_name }}
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    {%- for h in ansible_play_hosts %}
    server postgresql_{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}_5432 {{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}:5432 maxconn 100 check port 8008
    {% endfor %}

Finally the template for keepalived:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/keepalived.conf.j2 
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface enp0s8
    state {{ item.role }} 
    virtual_router_id 51
    priority {{ item.priority }}
    virtual_ipaddress {
      {{ virtual_ip }}
  }
  track_script {
    chk_haproxy
  }
}

What is left are the systemd service files. The one for etcd:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/files/etcd.service 
#
# systemd integration for etcd 
# Put this file under /etc/systemd/system/etcd.service
#     then: systemctl daemon-reload
#     then: systemctl list-unit-files | grep etcd
#     then: systemctl enable etcd.service
#

[Unit]
Description=dbi services etcd service
After=network.target

[Service]
User=postgres
Type=notify
ExecStart=/u01/app/postgres/local/dmk/bin/etcd --config-file /u01/app/postgres/local/dmk/etc/etcd.conf
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

For Patroni:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/files/patroni.service 
#
# systemd integration for patroni 
# Put this file under /etc/systemd/system/patroni.service
#     then: systemctl daemon-reload
#     then: systemctl list-unit-files | grep patroni
#     then: systemctl enable patroni.service
#

[Unit]
Description=dbi services patroni service
After=etcd.service syslog.target network.target

[Service]
User=postgres
Group=postgres
Type=simple
ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
ExecStart=/u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
Restart=no
TimeoutSec=30

[Install]
WantedBy=multi-user.target

The last bit is the site definition which combines all of the above.

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/site.yml 
---
# This playbook deploys a three node patroni PostgreSQL cluster with HAProxy

- hosts: patroni-servers
  become: true
  become_user: root

  roles:
    - common
    - patroni

Once all of that is in place, the playbook can be executed:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ ansible-playbook -i ../patroni patroni/site.yml -u postgres

This runs for a couple of minutes, as upgrading all the operating system packages and compiling PostgreSQL in particular take some time. Once it has completed you only need to reboot the systems and your cluster is ready:

postgres@patroni1:/home/postgres/ [pg11.1] patronictl list 
+---------+----------+----------------+--------+---------+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | Lag in MB |
+---------+----------+----------------+--------+---------+-----------+
|   PG1   | patroni1 | 192.168.22.240 | Leader | running |       0.0 |
|   PG1   | patroni2 | 192.168.22.241 |        | running |       0.0 |
|   PG1   | patroni3 | 192.168.22.242 |        | running |       0.0 |
+---------+----------+----------------+--------+---------+-----------+

HAProxy is running as well on all three nodes and you can check that by pointing your browser to any of the hosts on port 7000.
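
Besides the browser, a quick command-line check shows the stats page answering and connections being routed through HAProxy to the current leader (virtual IP and ports as defined in the templates above):

curl -s http://192.168.22.240:7000/ | grep -o "postgresql_192.168.22.24[0-2]_5432" | sort -u
psql -h 192.168.22.245 -p 5000 -U postgres -c "select pg_is_in_recovery()"
# the psql call should return "f" as it always lands on the leader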

Hope that helps.


ODA: The Patch version must be 12.2.1.4.0

Fri, 2019-01-04 09:43

When trying to patch your virtualized ODA with patch 12.1.2.12.0 and immediately after that with 12.2.1.4.0, you might consider loading both patches like this:


[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_1of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_2of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_1of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_2of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_3of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.

After that, a verification attempt shows the following strange error:


[root@XX1 tmp]# oakcli update -patch 12.1.2.12.0 --verify
ERROR : The Patch version must be 12.2.1.4.0

The reason is that only the newest patch version is saved in a flag file, so less recent versions are not known:


[root@XX1 ~]# cat /opt/oracle/oak/pkgrepos/System/VERSION
version=12.2.1.4.0

To install patch 12.1.2.12.0, both patches have to be removed from the repository first:


[root@XX1 ~]# oakcli manage cleanrepo --ver 12.2.1.4.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.2.1.4.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0306/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.4.1.0.11/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.7.7/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.2.0.1.180417/Patches/27674384
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.180417/Patches/27338029
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.180417/Patches/27338049
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.9/Patches/6.9.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.2.1.4.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.2.0.1.180417/Base

[root@XX1 ~]# oakcli manage cleanrepo --ver 12.1.2.12.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.12.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.5.2.8/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.5.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.170814/Patches/26609783
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.170814/Patches/26609445
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.8/Patches/6.8.4
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.12.0/Base

After that, patch 12.1.2.12.0 has to be reloaded into the repository:


[root@XX1 ~]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_1of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 ~]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_2of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.

Now the repository is aware of patch 12.1.2.12.0:


[root@XX1 ~]# cat /opt/oracle/oak/pkgrepos/System/VERSION
version=12.1.2.12.0

[root@XX1 ~]# oakcli update -patch 12.1.2.12.0 --verify

INFO: 2018-11-29 14:14:34: Reading the metadata file now...

Component Name        Installed Version                    Proposed Patch Version
---------------       ------------------                   -----------------
Controller_INT        4.650.00-7176                        Up-to-date
Controller_EXT        13.00.00.00                          Up-to-date
Expander              0018                                 Up-to-date
SSD_SHARED {
[ c1d20,c1d21,c1d22,  A29A                                 Up-to-date
  c1d23 ]
[ c1d16,c1d17,c1d18,  A29A                                 Up-to-date
  c1d19 ]
}
HDD_LOCAL             A7E0                                 Up-to-date
HDD_SHARED {
[ c1d0,c1d1,c1d2,c1d  PAG1                                 Up-to-date
  3,c1d4,c1d5,c1d6,c1d
  8,c1d9,c1d10,c1d11,c
  1d12,c1d13,c1d15 ]
[ c1d7,c1d14 ]        PD51                                 PAG1
}
ILOM                  3.2.9.23 r116695                     Up-to-date
BIOS                  30110000                             Up-to-date
IPMI                  1.8.12.4                             Up-to-date
HMP                   2.3.5.2.8                            Up-to-date
OAK                   12.1.2.11.0                          12.1.2.12.0
OL                    6.8                                  Up-to-date
OVM                   3.4.3                                Up-to-date
GI_HOME               12.1.0.2.170418(25171037,25942196)   12.1.0.2.170814(26609783,26609945)
DB_HOME               12.1.0.2.170418(25171037,25942196)   12.1.0.2.170814(26609783,26609945)

After installation of patch 12.1.2.12.0, patch 12.2.1.4.0 has to be loaded into the repository, and only then can it be installed.
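For reference, loading and installing that next patch follows exactly the same pattern. A minimal sketch, assuming the 12.2.1.4.0 patch archives have already been downloaded (the file names below are placeholders, not real patch numbers, and the --server step only covers the server part of the update):

# hypothetical file names: replace them with the real 12.2.1.4.0 patch archives
oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p<patch_number>_122140_Linux-x86-64_1of2.zip
oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p<patch_number>_122140_Linux-x86-64_2of2.zip
oakcli update -patch 12.2.1.4.0 --verify    # check the proposed component versions first
oakcli update -patch 12.2.1.4.0 --server    # then apply the server part of the patch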

Conclusion: only one patch can be loaded into the repository at a time; before loading a newer patch, the previous one has to be installed.

The article ODA: The Patch version must be 12.2.1.4.0 was first published on Blog dbi services.

Azure SQL Managed instances, Business Critical Tier and high availability

Fri, 2018-12-28 12:08

This is my last blog of 2018 about SQL databases and Azure, and the topic is not a trivial choice. <Speculation mode> I would say 2019 could be an important year for all stakeholders. Cloud transformation has begun in many forms at customer sites, and we have heard about some big actors coming to Switzerland, including Azure and Google. Data platform services will be no exception, to some degree of magnitude … <end of speculation mode>. I briefly introduced SQL Managed Instances in a previous write-up, and this service will likely not be available immediately when the Azure datacenters open in Switzerland next year. But I keep an eye on it because I believe it may address a lot of customer requests. I attended a session about SQL Server Managed Instances (SQL MI) a couple of months ago at SQL Saturday Madrid (October 2018), where a question was raised about the high-availability capabilities of such an infrastructure, and especially whether a SQL MI could be part of a SQL Server availability group. At that time the answer was that it is not supported, because there is no need to use Always On to achieve HA within the Azure region: every instance is inherently highly available, as this is one of the core PaaS capabilities of SQL Database, including redundancy with Azure storage accounts as well as active and passive compute nodes. I admit I only partially agreed, because it restricts HA scenarios to fully PaaS architectures, and I had in mind hybrid scenarios where we might include a SQL MI in an AG as a standby server for DR, for instance. Anyway, at the time of the session two MI options were available, General Purpose and Business Critical (the latter in preview only), and on December 4th 2018 Business Critical finally reached GA.


Let's say that the Business Critical tier differs from the General Purpose tier in terms of HA because it provides a built-in availability group behind the scenes, as well as auto-failover groups for disaster recovery in a remote Azure region.

The interesting point is that the Business Critical tier enables offloading reporting queries to a hidden read-only (RO) replica by default. Bear in mind that replication between replicas is asynchronous, meaning that data loss and delays are possible on the RO replicas.

As in normal AG scenarios, if we specify a read-only application intent in the connection string, transparent redirection takes place. I gave it a try on my SQL MI after upgrading to the Business Critical tier, as shown below:

$ (az sql mi show -g sql-mi-rg -n sql-mi-dbi2 | ConvertFrom-Json).Sku.Tier
BusinessCritical

 

From SSMS, I just switched my connection from READ WRITE to READ ONLY intent and I was redirected transparently to the corresponding replica as follows:

[Screenshot: the READ ONLY intent connection is redirected to the RO replica]
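Outside SSMS, the redirection can also be checked from a shell. Here is a minimal sketch with sqlcmd; the host name, user database and credentials are placeholders for my managed instance endpoint:

# -K declares the application intent; with ReadOnly the session should land on the hidden RO replica,
# and the query below is then expected to return READ_ONLY
sqlcmd -S sql-mi-dbi2.<dns_zone>.database.windows.net -d <user_db> -U myadmin -P '********' -K ReadOnly \
       -Q "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')"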

Note that the AG configuration is hidden from users and sysadmins, and there is no way to get this information from the usual system views:

[Screenshot: no AG information returned by the usual system views]

However, we can get information about the different states of the underlying AG replication, which includes 4 replicas as described in the Microsoft documentation.

SELECT 
	d.name as [db_name],
	drs.replica_id,
	drs.group_database_id,
	drs.is_local,
	drs.is_primary_replica,
	drs.synchronization_state_desc AS syncro_state,
	drs.synchronization_health_desc AS synchro_health,
	drs.database_state_desc AS db_state,
	drs.log_send_queue_size,
	drs.redo_queue_size
FROM sys.dm_hadr_database_replica_states AS drs
join sys.databases AS d ON d.database_id = drs.database_id

 

[Screenshot: replica states returned by the query above]

 

Auto-failover groups are a complementary solution to address DR scenarios and to protect against the failure of an entire Azure region. Their implementation requires more work and more Azure resources … a challenge for next year!

Season’s greetings!!

 

 

The article Azure SQL Managed instances, Business Critical Tier and high availability was first published on Blog dbi services.

EDB Failover Manager 3.3 With Virtual Ip Address

Mon, 2018-12-24 04:47

In a previous blog, I talked about installing and configuring EDB Failover Manager (EFM) with PostgreSQL. The configuration of a virtual IP address was not discussed in that article. In this blog I am going to show how we can implement a virtual IP with EFM.
We suppose that:
-The standby is already built
-EFM is configured and running (see here for the configuration)
Below is our configuration (I am using PostgreSQL 11.1 with CentOS 7):

[root@dbi-pg-tools efm-3.3]# /usr/edb/efm-3.3/bin/efm cluster-status efm
Cluster Status: efm

        Agent Type  Address              Agent  DB       VIP
        -----------------------------------------------------------------------
        Witness     192.168.22.100       UP     N/A
        Master      192.168.22.101       UP     UP
        Standby     192.168.22.102       UP     UP

Allowed node host list:
        192.168.22.101 192.168.22.102 192.168.22.100

Membership coordinator: 192.168.22.101

Standby priority host list:
        192.168.22.102

Promote Status:

        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.22.101       0/301BE38
        Standby     192.168.22.102       0/301BE38

        Standby database(s) in sync with master. It is safe to promote.
[root@dbi-pg-tools efm-3.3]#

To summarize, we have three servers:
-dbi-pg-tools : the witness with IP 192.168.22.100
-dbi-pg-essentials : the master with IP 192.168.22.101
-dbi-pg-essentials_2 : the standby with IP 192.168.22.102
Let’s say that we want to configure a virtual IP 192.168.22.105.
The first step is to update the efm.properties file on both nodes with the following lines:

virtualIp=192.168.22.105
virtualIp.interface=enp0s9   -- name of my interface
virtualIp.prefix=24
virtualIp.single=true

Then restart efm on both nodes.

[root@dbi-pg-essentials network-scripts]#  service efm-3.3 start

Running the status command again should now show the VIP address. Restarting EFM should be enough, but if the virtual IP is not shown, we can use the following command to add it manually:

[root@dbi-pg-essentials network-scripts]# /usr/edb/efm-3.3/bin/efm_address add4 enp0s9 192.168.22.105/24

Then the status should be

[root@dbi-pg-tools ~]# /usr/edb/efm-3.3/bin/efm cluster-status efm
Cluster Status: efm

        Agent Type  Address              Agent  DB       VIP
        -----------------------------------------------------------------------
        Witness     192.168.22.100       UP     N/A      192.168.22.105
        Master      192.168.22.101       UP     UP       192.168.22.105*
        Standby     192.168.22.102       UP     UP       192.168.22.105

Allowed node host list:
        192.168.22.101 192.168.22.102 192.168.22.100

Membership coordinator: 192.168.22.101

Standby priority host list:
        192.168.22.102

Promote Status:

        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.22.101       0/301BEE0
        Standby     192.168.22.102       0/301BEE0

        Standby database(s) in sync with master. It is safe to promote.
[root@dbi-pg-tools ~]#

We can manually ping the virtual IP from both servers; there should not be any packet loss:

[root@dbi-pg-tools ~]# ping -q -c3 -w5 192.168.22.105
PING 192.168.22.105 (192.168.22.105) 56(84) bytes of data.

--- 192.168.22.105 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.278/0.366/0.528/0.116 ms
[root@dbi-pg-tools ~]#

And from the master we can see that the VIP address is assigned to enp0s9

[root@dbi-pg-essentials network-scripts]# ifconfig enp0s9                       
enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.22.105  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 08:00:27:53:1c:ed  txqueuelen 1000  (Ethernet)
        RX packets 3394  bytes 397433 (388.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3096  bytes 571448 (558.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dbi-pg-essentials network-scripts]#

So we can use the virtual address to connect to the primary database

[postgres@dbi-pg-tools ~]$ psql -h 192.168.22.105
Password for user postgres:
psql (11.1)
Type "help" for help.

postgres=# \c sample
You are now connected to database "sample" as user "postgres".
sample=# insert into mytab values(1);
INSERT 0 1
sample=#

And now let’s do a switchover

[root@dbi-pg-tools ~]# /usr/edb/efm-3.3/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.
[root@dbi-pg-tools ~]#

A few moments after the switchover, we can see that the VIP is now assigned to the new master:

[root@dbi-pg-essentials_2 efm-3.3]# ifconfig enp0s9
enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.22.105  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 08:00:27:4d:4d:4d  txqueuelen 1000  (Ethernet)
        RX packets 1235  bytes 149947 (146.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 837  bytes 525953 (513.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We can verify this by running the status command:

[root@dbi-pg-tools ~]# /usr/edb/efm-3.3/bin/efm cluster-status efm
Cluster Status: efm

        Agent Type  Address              Agent  DB       VIP
        -----------------------------------------------------------------------
        Witness     192.168.22.100       UP     N/A      192.168.22.105
        Standby     192.168.22.101       UP     UP       192.168.22.105
        Master      192.168.22.102       UP     UP       192.168.22.105*

Allowed node host list:
        192.168.22.101 192.168.22.102 192.168.22.100

Membership coordinator: 192.168.22.102

Standby priority host list:
        192.168.22.101

Promote Status:

        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.22.102       0/301E260
        Standby     192.168.22.101       0/301E260

        Standby database(s) in sync with master. It is safe to promote.
[root@dbi-pg-tools ~]#
Conclusion :

In this blog we have seen how to configure a virtual IP with EFM. I hope this will help.

The article EDB Failover Manager 3.3 With Virtual Ip Address was first published on Blog dbi services.

ODA : Free up space on local filesystems

Fri, 2018-12-21 10:02
Introduction

When you work on an ODA, you sometimes struggle with free space on the local filesystems. An ODA has terabytes of space on its data disks, but the local disks are still limited to a RAID-1 array of 2x 480GB disks, and only a few GB are dedicated to the / and /u01 filesystems. You do not need hundreds of GB on these filesystems, but you probably prefer to keep at least 20-30% of free space. And if you plan to patch your ODA, you surely need extra space to get through all the steps without reaching a dangerous filling level. Here is how to grab free space on these filesystems.

Use additional purgeLogs script

The purgeLogs script is provided as an additional tool by Oracle. It could have been shipped with oakcli/odacli, but it is not: download it from MOS note 2081655.1. As this tool is not part of the official ODA tooling, please test it before using it in a production environment. It is quite easy to use: put the zip in a folder, unzip it, and run it as the root user. You can use the script with a single parameter that cleans up the logfiles of all Oracle products older than a given number of days:


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 29G 23G 56% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 43G 45G 50% /u01

cd /tmp/
unzip purgeLogs.zip
du -hs /opt/oracle/oak/log/*
11G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
232K /opt/oracle/oak/log/test
./purgeLogs -days 1

--------------------------------------------------------
purgeLogs version: 1.43
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc.
--------------------------------------------------------

2018-12-20 09:20:06: I adrci GI purge started
2018-12-20 09:20:06: I adrci GI purging diagnostic destination diag/asm/+asm/+ASM1
2018-12-20 09:20:06: I ... purging ALERT older than 1 days

2018-12-20 09:20:47: S Purging completed succesfully!
du -hs /opt/oracle/oak/log/*
2.2G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
28K /opt/oracle/oak/log/test


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 18G 34G 35% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 41G 47G 48% /u01

In this example, you just freed up about 13GB. If your ODA is composed of 2 nodes, don’t forget to use the same script on the other node.

Truncate hardware log traces

Hardware-related traces quietly fill up the filesystem if your ODA has been running for a long time. These traces are located under /opt/oracle/oak/log/`hostname`/adapters. I don't know whether every model behaves this way, but here is an example on an old X4-2 that has been running for 3 years now.

cd /opt/oracle/oak/log/aprhodap02db0/adapters
ls -lrth
total 2.2G
-rw-r--r-- 1 root root 50M Dec 20 09:26 ServerAdapter.log
-rw-r--r-- 1 root root 102M Dec 20 09:27 ProcessorAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:28 MemoryAdapter.log
-rw-r--r-- 1 root root 110M Dec 20 09:28 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 318M Dec 20 09:30 NetworkAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:30 CoolingAdapter.log
head -n 3 CoolingAdapter.log
[Mon Apr 27 18:02:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
In CoolingAdapter.scr
[Mon Apr 27 18:07:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
head -n 3 MemoryAdapter.log
[Mon Apr 27 18:02:26 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery
In MemoryAdapter.scr
[Mon Apr 27 18:07:25 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery

Let’s purge the oldest lines in these files:

for a in `ls *.log` ; do tail -n 200 $a > tmpfile ; cat tmpfile > $a ; rm -f tmpfile; done
ls -lrth
total 176K
-rw-r--r-- 1 root root 27K Dec 20 09:32 CoolingAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ProcessorAdapter.log
-rw-r--r-- 1 root root 30K Dec 20 09:32 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 29K Dec 20 09:32 NetworkAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 MemoryAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ServerAdapter.log

2GB of traces you’ll never use! Don’t forget the second node on a HA ODA.

Purge old patches in the repository: simply because they are useless

If you successfully patched your ODA at least 2 times, you can remove the oldest patch from the ODA repository. As you may know, patches are quite big because they include a lot of components, so it is good practice to remove the oldest ones once your ODA has been successfully patched. To identify whether old patches are still on your ODA, you can dig into the folder /opt/oracle/oak/pkgrepos/orapkgs/ (a quick sketch follows); purging the old patches is then easy, as shown right after the sketch:
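For example, just listing the component and version sub-directories gives a good overview (the exact content depends on which patches were loaded on your ODA):

ls /opt/oracle/oak/pkgrepos/orapkgs/OAK/     # one sub-directory per OAK/patch version kept
du -sh /opt/oracle/oak/pkgrepos/orapkgs/*    # size of each component in the repository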

df -h / >> /tmp/dbi.txt
oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base
df -h / >> /tmp/dbi.txt
cat /tmp/dbi.txt
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 28G 24G 54% /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 21G 31G 41% /

Increase /u01 filesystem with remaining space

This only concerns bare-metal ODAs. You may have noticed that not all of the disk space is allocated to the ODA local filesystems. On modern ODAs, you have 2 M.2 SSDs of 480GB each in a RAID-1 configuration for the system, and only half of the space is allocated. As the appliance uses Logical Volumes, you can very easily extend the size of your /u01 filesystem.

This is an example on a X7-2M:


vgdisplay
--- Volume group ---
VG Name VolGroupSys
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 446.00 GiB
PE Size 32.00 MiB
Total PE 14272
Alloc PE / Size 7488 / 234.00 GiB
Free PE / Size 6784 / 212.00 GiB
VG UUID wQk7E2-7M6l-HpyM-c503-WEtn-BVez-zdv9kM


lvdisplay
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRoot
LV Name LogVolRoot
VG Name VolGroupSys
LV UUID icIuHv-x9tt-v2fN-b8qK-Cfch-YfDA-xR7y3W
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:00 +0100
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 960
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:0
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolU01
LV Name LogVolU01
VG Name VolGroupSys
LV UUID ggYNkK-GfJ4-ShHm-d5eG-6cmu-VCdQ-hoYzL4
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:07 +0100
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 3200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:2
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolOpt
LV Name LogVolOpt
VG Name VolGroupSys
LV UUID m8GvKZ-zgFF-2gXa-NSCG-Oy9l-vTYd-ALi6R1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:30 +0100
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:3
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolSwap
LV Name LogVolSwap
VG Name VolGroupSys
LV UUID 9KWiYw-Wwot-xCmQ-uzCW-mILq-rsPz-t2X2pr
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:44 +0100
LV Status available
# open 2
LV Size 24.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:1
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolDATA
LV Name LogVolDATA
VG Name VolGroupSys
LV UUID oTUQsd-wpYe-0tiA-WBFk-719z-9Cgd-ZjTmei
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:4
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRECO
LV Name LogVolRECO
VG Name VolGroupSys
LV UUID mJ3yEO-g0mw-f6IH-6r01-r7Ic-t1Kt-1rf36j
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:5

212GB are available. Let’s take 100GB for extending /u01:


lvextend -L +100G /dev/mapper/VolGroupSys-LogVolU01
Size of logical volume VolGroupSys/LogVolU01 changed from 100.00 GiB (3200 extents) to 200.00 GiB.
Logical volume LogVolU01 successfully resized.

Filesystem needs to be resized:

resize2fs /dev/mapper/VolGroupSys-LogVolU01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 13
Performing an on-line resize of /dev/mapper/VolGroupSys-LogVolU01 to 52428800 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 52428800 blocks long.

Now /u01 is bigger:

df -h /dev/mapper/VolGroupSys-LogVolU01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
197G 77G 111G 41% /u01

Conclusion

Don’t hesitate to clean up your ODA before having to deal with space pressure.

The article ODA : Free up space on local filesystems was first published on Blog dbi services.

PAM Authentication With PostgreSQL

Fri, 2018-12-21 08:52

PostgreSQL supports many authentication methods. The PAM authentication method operates similarly to password authentication, except that it uses PAM (Pluggable Authentication Modules) as the authentication mechanism. The user must already exist in the database before PAM can be used for authentication.
In this blog I will configure PAM authentication for a PostgreSQL 11 cluster running on CentOS 7.

postgres=# select version();
                                                 version

--------------------------------------------------------------------------------
-------------------------
 PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (R
ed Hat 4.8.5-28), 64-bit
(1 row)

postgres=#

[root@dbi-pg-essentials ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@dbi-pg-essentials ~]# hostname
dbi-pg-essentials

We suppose that PostgreSQL is already installed with the PAM module. This should be the case if the installation was done with yum. If you decided to install from the sources, be sure to run configure with the option --with-pam.
After the installation, a service file named postgresql should exist in the /etc/pam.d directory. If not, you have to create one for PostgreSQL (a minimal sketch is shown after the listing below).

[postgres@dbi-pg-essentials pam.d]# pwd
/etc/pam.d
[root@dbi-pg-essentials pam.d]# ls -l postgresql
-rw-r--r--. 1 root root 71 Nov  7 12:37 postgresql
[root@dbi-pg-essentials pam.d]#
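If the file does not exist, you can create it yourself. Here is a minimal sketch, assuming that plain local account authentication through pam_unix is sufficient for your environment (adapt it to your PAM policy):

# hypothetical minimal /etc/pam.d/postgresql service definition
cat > /etc/pam.d/postgresql <<'EOF'
auth     required   pam_unix.so
account  required   pam_unix.so
EOF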

The first step is then to configure PostgreSQL to accept PAM authentication. Like other authentication methods, we have to add the corresponding entries in the pg_hba.conf file

[postgres@dbi-pg-essentials data]$ grep pamservice pg_hba.conf
host    all             all            192.168.22.0/24        pam pamservice=postgresql
[postgres@dbi-pg-essentials data]$

We can note the option pamservice=postgresql, which refers to the PAM service file seen above. Don't forget to reload or restart your cluster after modifying the pg_hba.conf file, as sketched below.
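A reload is enough to take the pg_hba.conf change into account; for example, as the postgres OS user (assuming pg_ctl is in the PATH and PGDATA is set):

# either with pg_ctl ...
pg_ctl reload -D $PGDATA
# ... or directly from a SQL session
psql -c "SELECT pg_reload_conf();"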
In my case I also had to change the permissions of the /etc/shadow file to the following:

[postgres@dbi-pg-essentials data]$ ls -l /etc/shadow
-r--r--r-- 1 root root 988 Dec 21 11:20 /etc/shadow
[postgres@dbi-pg-essentials data]$

And the configuration is done. For the test, let's create a Linux user named, for example, usrpam on the server:

[root@dbi-pg-essentials ~]# useradd -m usrpam
[root@dbi-pg-essentials ~]# passwd usrpam
Changing password for user usrpam.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@dbi-pg-essentials ~]#

As specified earlier, the user must exist in the database before PAM authentication can be used. So let's create the same user in PostgreSQL, but without a password:

postgres=# create user usrpam with login;
CREATE ROLE
postgres=#

And now the usrpam user should be able to connect from any client in the 192.168.22.0/24 network.

[usr2@dbi-pg-essentials_2 ~]$  psql -h dbi-pg-essentials -U usrpam -d postgres  
Password for user usrpam:
psql (11.1)
Type "help" for help.

postgres=> select user;
  user
--------
 usrpam
(1 row)

postgres=>

The article PAM Authentication With PostgreSQL was first published on Blog dbi services.

Deploying SQL Server on K8s with Helm charts

Tue, 2018-12-18 10:06

During the last DockerCon EU in Barcelona, I heard a lot about Helm with K8s architectures. It was also a good opportunity to write about it after attending this conference.


In a nutshell, Helm is a package manager for K8s; you may think of it like the package managers available on the Linux side, such as apt, yum or zypper, to cite a few of them. Helm charts are a key component of the Helm architecture: they make deployments easy, standardized and reusable, and this is definitely what I was looking for in our current CI/CD pipeline implementation for the DMK maintenance tool.

Helm matters for enterprise-scale deployments by addressing common challenges with the following (non-exhaustive) list of capabilities; a quick illustration follows the list.

  • Helm charts can be shared easily across the enterprise, or with contributors all over the world, from a GitHub repository.
  • Using Helm charts allows you to quickly get a specific environment for testing
  • Existing charts can be authored for specific deployments depending on the context
  • The easy deployment and deletion of applications makes Helm adoption easier
  • Production-ready packages are possible; they eliminate deployment errors due to incorrect configuration files and reduce the complexity of maintaining an application catalog
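For instance, with the Helm 2 CLI of that time, a chart could be searched and pulled locally from the public stable repository as in the sketch below (the repository URL is the one that was in use back then; it has since been archived):

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm search mssql                        # look for SQL Server related charts
helm fetch stable/mssql-linux --untar    # download and unpack the chart locally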

In my case, I have had in mind for a while to simplify, with Helm charts, my first SQL Server container deployments on K8s, which relied on a complex YAML file including a lot of objects like services, pods, secrets and persistent volumes. An additional motivation was the capability to change some preconfigured settings of the deployment on the fly when switching from my minikube environment to my AKS cluster on Azure.

In this first write-up I used a custom dbi services image for SQL Server (a production-ready docker image) and I decided to use this image as the base of my custom Helm chart. First of all, I didn't start from scratch: I used the mssql-linux stable chart available on GitHub, but obviously I customized it for my own requirements:

  • The custom dbi services image for SQL Server includes the creation of our flexible architecture, so I had to update the persistent volume and claim configuration to match this new storage map.
  • The custom image leverages the (optional) deployment of our DMK maintenance tool, which includes different SQL objects to perform the maintenance of customer databases (basically update statistics, index rebuild and backup tasks). So, I needed to add a parameter to enable or disable the deployment of this tool inside the pod.
  • T-SQL scripts are also executed during container startup; they apply different server-level configurations, configure the tempdb database file placement and add some trace flags to meet our best practices. But there is no real impact on the Helm chart here.
  • An “application” user may be created (optional) and will be part of the dbcreator server role, according to the least privilege principle. In most cases we consider that an application does not need sysadmin privileges, even on a SQL Server pod and, more generally speaking, on microservice architectures. So, as already done for the DMK parameter described previously, I had to add another parameter for creating this user when the pod is spun up.

Let's begin with my Helm chart folder hierarchy, which includes the important files Chart.yaml, values.yaml and deployment.yaml.

[dab@DBI-LT-DAB:#]> tree /f
…
T:.
│   .helmignore
│   Chart.yaml
│   values.yaml
│
├───charts
└───templates
        deployment.yaml
        NOTES.txt
        pvc-app.yaml
        pvc-backup.yaml
        pvc-data.yaml
        pvc-tempdb.yaml
        pvc-tranlog.yaml
        secret.yaml
        service.yaml
        _helpers.tpl

 

Let’s focus on the deployment.yaml file and the customized part within the spec.containers.env section related to my docker image specifications:

  • MSSQL_USER, MSSQL_USER_PASSWORD are environment variables related to my “application” user
  • DMK environment variable enables deployment of the DMK maintenance tool

In addition, the environment variables related to the database file placement have been customized for master, tempdb and user databases according to my flexible architecture specifications with:

  • /u00 (for application files)
  • /u01 (for user data and system database files)
  • /u02 (for transaction log files)
  • /u03 (for tempdb database files)
  • /u98 (for backup files).

The mountPath and persistent volume claims sections have also been updated accordingly, as shown below:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "mssql.fullname" . }}
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "mssql.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "mssql.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: ACCEPT_EULA
              value: "{{ .Values.acceptEula.value | upper }}"
            - name: MSSQL_PID
              value: "{{ .Values.edition.value }}"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-sa-secret
                 key: sapassword
            - name: MSSQL_USER
              value: "{{ .Values.usersql.value }}"
            - name: MSSQL_USER_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-user-secret
                 key: userpassword
            - name: DMK
              value: "{{ .Values.DMK.value }}"
            - name: MSSQL_MASTER_DATA_FILE
              value: /u01/sqlserverdata/mssqlserver/master.mdf
            - name: MSSQL_MASTER_LOG_FILE
              value: /u01/sqlserverdata/mssqlserver/mastlog.ldf
            - name: MSSQL_DATA_DIR
              value: /u01/sqlserverdata/mssqlserver
            - name: MSSQL_LOG_DIR
              value: /u02/sqlserverlog/mssqlserver
            - name: MSSQL_TEMPDBDATA_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_TEMPDBLOG_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_BACKUP_DIR
              value: /u98/sqlserver/backup/mssqlserver
            - name: MSSQL_ERROR_LOG
              value: /u00/app/sqlserver/admin/mssqlserver/log
            - name: MSSQL_DUMP_DIR
              value: /u00/app/sqlserver/admin/mssqlserver/dump
            - name: MSSQL_TCP_PORT
              value: "{{ .Values.service.port.value }}"
            - name: MSSQL_LCID
              value: "{{ .Values.lcid.value }}"
            - name: MSSQL_COLLATION
              value: "{{ .Values.collation.value }}"
            - name: MSSQL_ENABLE_HADR
              value: "{{ .Values.hadr.value }}"
            {{ if .Values.resources.limits.memory }}
            - name: MSSQL_MEMORY_LIMIT_MB
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            {{ end }}
          ports:
            - name: mssql
              containerPort: {{ .Values.service.port.value }}
          volumeMounts:
            - name: data
              mountPath: /u01
            - name: transactionlog
              mountPath: /u02
            - name: tempdb
              mountPath: /u03
            - name: backup
              mountPath: /u98 
            - name: app
              mountPath: /u00
          livenessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
          readinessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      volumes:
      - name: master
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingMasterClaim }}
          claimName: {{ .Values.persistence.existingMasterClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-master
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingDataClaim }}
          claimName: {{ .Values.persistence.existingDataClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-data
          {{- end -}}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: transactionlog
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTransactionLogClaim }}
          claimName: {{ .Values.persistence.existingTransactionLogClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-translog
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: tempdb
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTempdbClaim }}
          claimName: {{ .Values.persistence.existingTempdbClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-tempdb
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: backup
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingBackupClaim }}
          claimName: {{ .Values.persistence.existingBackupClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-backup
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: app
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingApppClaim }}
          claimName: {{ .Values.persistence.existingAppClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-app
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}

 

Referring to my flexible architecture, I added 2 YAML files that contain the new persistent volume claim definitions: pvc-app for /u00 (app) and pvc-tempdb for /u03 (tempdb).

Here is the content of my persistent volume claim for tempdb, for instance:

{{- if and .Values.persistence.enabled (not .Values.persistence.existingTempdbClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "mssql.fullname" . }}-tempdb
  labels:
    app: {{ template "mssql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
{{- if .Values.persistence.annotations }}
  annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
  accessModes:
    - {{ .Values.persistence.tempdbAccessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.tempdbSize | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

 

I also updated secret.yaml to include a section dedicated to my “application” user password:

---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mssql.fullname" . }}-user-secret
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
  {{ if .Values.userpassword }}
  userpassword:  {{ .Values.userpassword.value | b64enc | quote }}
  {{ else }}
  userpassword: {{ randAlphaNum 20 | b64enc | quote }}
{{ end }}

 

Note that the Helm chart allows you to define your own password; if it is left empty, a random password is generated instead. The sketch below shows how to retrieve it once the release is deployed.
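A quick way to read it back from the K8s secret (the secret name and the userpassword key come from the templates above and from the deployment output shown later):

kubectl get secret sqlhelm-dbi-mssql-linux-user-secret \
  -o jsonpath="{.data.userpassword}" | base64 --decode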

Finally, the values.yaml file contains predefined values for my release deployment

# General parameters
acceptEula: 
  value: "Y"
edition: 
  value: "Developer"
DMK: 
  value: "N"
collation: 
  value: SQL_Latin1_General_CP1_CI_AS
lcid: 
  value: 1033
hadr: 
    value: 0
# User parameters
sapassword: 
  value: Password1
usersql: 
  value: dbi_user
userpassword: 
  value: Password2
# Image parameters
image:
  repository: dbi/mssql-server-linux
  tag: 2017-CU12
  pullPolicy: IfNotPresent
# Service parameters
service:
  type: 
    value: LoadBalancer
  port: 
    value: 1433
  annotations: {}
deployment:
  annotations: {}
# Volumes & persistence parameters
persistence:
  enabled: true
  # existingDataClaim:
  # existingTransactionLogClaim:
  # existingBackupClaim:
  # existingMasterClaim:
  # existingAppClaim:
  # existingTempdbClaim:
  storageClass: ""
  dataAccessMode: ReadWriteOnce
  dataSize: 5Gi
  transactionLogAccessMode: ReadWriteOnce
  transactionLogSize: 5Gi
  tempdbAccessMode: ReadWriteOnce
  tempdbSize: 5Gi
  backupAccessMode: ReadWriteOnce
  backupSize: 5Gi
  masterAccessMode: ReadWriteOnce
  masterSize: 5Gi
  appAccessMode: ReadWriteOnce
  appSize: 5Gi
# Probe parameters
livenessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
readinessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
# Resourcep parameters
resources:
  limits:
  #  cpu: 100m
    memory: 5Gi
  # requests:
  #  cpu: 100m
  #  memory: 2Gi
nodeSelector: {}
  # kubernetes.io/hostname: minikube

Let's install my release with the helm command below:

$ helm install --name sqlhelm . --set DMK.value=Y --set service.port.value=1451

 

Pretty simple, right? Note also that I can change predefined parameter values very easily according to my context. For instance, the DMK maintenance tool is not installed by default when the container is spun up, and I changed this by explicitly setting DMK.value to Y. The same applies to the SQL Server port exposed through the service: the default 1433 is changed to 1451 in my helm command.
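As an aside, the same overrides can also be kept in a dedicated values file instead of repeating --set flags; here is a sketch with a hypothetical my-values.yaml (the keys mirror the chart's values.yaml structure):

cat > my-values.yaml <<'EOF'
DMK:
  value: "Y"
service:
  port:
    value: 1451
EOF
helm install --name sqlhelm -f my-values.yaml .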

The result of the helm install command is as follows:

LAST DEPLOYED: Mon Dec 17 23:23:26 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
sqlhelm-dbi-mssql-linux-app       Bound   pvc-5faffb52-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-backup    Bound   pvc-5fb0c43a-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-data      Bound   pvc-5fb32657-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-tempdb    Bound   pvc-5fb680fe-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-translog  Bound   pvc-5fbb9350-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s

==> v1/Service
NAME                     TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)         AGE
sqlhelm-dbi-mssql-linux  LoadBalancer  10.99.4.205  localhost    1451:32569/TCP  8m57s

==> v1beta2/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
sqlhelm-dbi-mssql-linux  1        1        1           1          8m57s

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr  1/1    Running  0         8m56s

==> v1/Secret
NAME                                 TYPE    DATA  AGE
sqlhelm-dbi-mssql-linux-user-secret  Opaque  1     8m57s
sqlhelm-dbi-mssql-linux-sa-secret    Opaque  1     8m57s

 

This command provides a picture of the deployed components and their different statuses, including the persistent volume claims, my SQL Server pod, the service that exposes the SQL Server port and the K8s secrets for the sa and “application” user passwords. This picture is available at any moment by executing the following command:

$ helm status sqlhelm

 

We may also retrieve a list of existing releases from the following helm command:

$ helm ls sqlhelm
NAME    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
sqlhelm 1               Mon Dec 17 23:23:26 2018        DEPLOYED        dbi-mssql-linux-1.0.0   1.0             default

 

It’s worth noting that each resource is identified by labels (a very powerful feature on K8s) and we may easily get components installed and related to my release by filtering by the corresponding label (app or release) as follows:

$ kubectl get all -l release=sqlhelm
NAME                                           READY     STATUS    RESTARTS   AGE
pod/sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr   1/1       Running   1          16h

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/sqlhelm-dbi-mssql-linux   LoadBalancer   10.99.4.205   localhost     1451:32569/TCP   16h

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sqlhelm-dbi-mssql-linux   1         1         1            1           16h

NAME                                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/sqlhelm-dbi-mssql-linux-67c4898dfb   1         1         1         16h

 

Let's take a look at my SQL Server pod log and focus on the different custom steps applied during the startup of the pod. All the custom steps are executed correctly with the input values from the values.yaml file.

kubectl logs sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr

======= 2018-12-17 22:29:44 Configuring tempdb database files placement OK =======
======= 2018-12-17 22:29:44 Configuring max server memory =======
2018-12-17 22:29:45.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-12-17 22:29:45.03 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3840. Run the RECONFIGURE statement to install
…
======= 2018-12-17 22:29:45 Configuring max server memory OK =======
======= 2018-12-17 22:29:45 Creating login dbi_user =======
======= 2018-12-17 22:29:45 Creating login dbi_user OK =======
======= 2018-12-17 22:29:45 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
======= 2018-12-17 22:30:08 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======

 

Finally, let's connect to my SQL Server pod with the mssql-cli utility and check whether everything is OK from a configuration perspective:

master> select name as logical_name, physical_name
....... from sys.master_files;
+----------------+-------------------------------------------------+
| logical_name   | physical_name                                   |
|----------------+-------------------------------------------------|
| master         | /u01/sqlserverdata/mssqlserver/master.mdf       |
| mastlog        | /u01/sqlserverdata/mssqlserver/mastlog.ldf      |
| tempdev        | /u03/sqlservertempdb/mssqlserver/tempdb.mdf     |
| templog        | /u03/sqlservertempdb/mssqlserver/templog.ldf    |
| tempdbdev_2    | /u03/sqlservertempdb/mssqlserver/tempdb2.ndf    |
| tempdbdev_3    | /u03/sqlservertempdb/mssqlserver/tempdb3.ndf    |
| tempdbdev_4    | /u03/sqlservertempdb/mssqlserver/tempdb4.ndf    |
| modeldev       | /u01/sqlserverdata/mssqlserver/model.mdf        |
| modellog       | /u01/sqlserverdata/mssqlserver/modellog.ldf     |
| MSDBData       | /u01/sqlserverdata/mssqlserver/MSDBData.mdf     |
| MSDBLog        | /u01/sqlserverdata/mssqlserver/MSDBLog.ldf      |
| dbi_tools      | /u01/sqlserverdata/mssqlserver/dbi_tools.mdf    |
| dbi_tools_log  | /u02/sqlserverlog/mssqlserver/dbi_tools_log.ldf |
+----------------+-------------------------------------------------+

 

The database file placement meets my flexible architecture requirements. The DMK maintenance tool is also deployed correctly, with the dbi_tools database, as shown below:

master> use dbi_tools;
Commands completed successfully.
Time: 0.253s
dbi_tools> select name as table_name
.......... from sys.tables;
+-----------------------------------+
| table_name                        |
|-----------------------------------|
| dbi_maintenance_task_logs         |
| dbi_maintenance_task_details_logs |
| dbi_maintenance_configuration     |
| __RefactorLog                     |
+-----------------------------------+

 

The sa and dbi_user (the default name in my template) logins are available for sysadmin and regular user connections.

master> select name AS login_name
....... from sys.server_principals
....... where type = 'S' and name not like '##%##';
+--------------+
| login_name   |
|--------------|
| sa           |
| dbi_user     |
+--------------+

 

The deployment completed successfully! This was a brief overview of Helm capabilities with SQL Server; other write-ups will come soon!

Happy deployment!

 

The article Deploying SQL Server on K8s with Helm charts was first published on Blog dbi services.

Red Hat Enterprise Linux 8 – Application Streams

Thu, 2018-12-13 09:05

You may have heard that Red Hat Enterprise Linux 8 has been available for download as a beta version for a few weeks…
You want to download it? Click here.
A significant change coming with this new version is the way the application packages are provided. As you know, up to RHEL 7 packages were downloaded via repositories listed in .repo files located by default under /etc/yum.repos.d/. This is still the same with RHEL 8, but two new major repositories are available in the default redhat.repo file.

In order to get access to them, we must of course register the system with a Red Hat subscription…
[root@rhel8beta1 ~]# subscription-manager register --username xxx.yyy@zzz.com
Registering to: subscription.rhsm.redhat.com:443/subscription
Password:
The system has been registered with ID: e42829a5-8a8e-42d3-a69a-07a1499e9b0e
The registered system name is: rhel8beta1
[root@rhel8beta1 ~]#

…and attach it to a Pool (here the Pool will be chosen automatically) :
[root@rhel8beta1 ~]# subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64 Beta
Status: Subscribed
[root@rhel8beta1 ~]#

As the /etc/yum.repos.d/redhat.repo file is now available, let's check which repositories it contains:
[root@rhel8beta1 ~]# grep -B1 name /etc/yum.repos.d/redhat.repo
[rhel-8-for-x86_64-rt-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Debug RPMs)
--
[rhel-8-for-x86_64-rt-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-source-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-debug-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Debug RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-source-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
--
[rhel-8-for-x86_64-rt-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (RPMs)
--
[rhel-8-for-x86_64-appstream-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-debug-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Debug RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
--
[rhel-8-for-x86_64-nfv-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Source RPMs)
[root@rhel8beta1 ~]#

AppStream? BaseOS? What's that? This is what we will discover in this blog.

First of all, we can see that both are enabled by default:
[root@rhel8beta1 ~]# subscription-manager repos --list-enabled
+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-8-for-x86_64-baseos-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/baseos/os
Enabled: 1


Repo ID: rhel-8-for-x86_64-appstream-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/appstream/os
Enabled: 1
[root@rhel8beta1 ~]#

BaseOS

Content in BaseOS is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations“.
This is how Red Hat defines it. Over a thousand packages are available from the BaseOS repository:
[root@rhel8beta1 ~]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-baseos-beta-rpms" list available | wc -l
1145
[root@rhel8beta1 ~]#

You can get the full list here.
Basically, those packages are system-related and are mainly used to manage and configure the OS and its services (such as NetworkManager, Chrony, Dracut, and so on). In other words, most of them are intended for use by system administrators. So nothing very new here, except that they are all grouped in a single dedicated repository.

AppStream

The second repository contains many more packages (full list here):
[root@rhel8beta1 /]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-appstream-beta-rpms" list available | wc -l
4318
[root@rhel8beta1 /]#

Application Stream provides additional user-space applications, runtime languages and databases. It replaces the “extras” repos and the Software Collections. All the content in AppStream is available in two formats: the well-known RPM format, and a brand new one called “module”, which is an extension to the RPM format.
A module is a set of RPM packages that are linked together. For example, if you want to check which packages belong to the postgresql module, you can use the new “yum module” command:
[root@rhel8beta1 /]# yum module list postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:47:42 ago on Mon Nov 26 15:13:03 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
postgresql 10 [d] client, default [d] postgresql module
postgresql 9.6 client, default [d] postgresql module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

By the way, YUM is no longer the default package manager in RHEL 8. The command is still available, but it is actually an alias for the DNF tool (coming from Fedora):
[root@rhel8beta1 ~]# which yum
/usr/bin/yum
[root@rhel8beta1 ~]# ll /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Oct 15 10:25 /usr/bin/yum -> dnf-3
[root@rhel8beta1 ~]#

If you want to have a look at the main usage differences between YUM and DNF, check the yum2dnf man page:
[root@rhel8beta1 /]# man yum2dnf
YUM2DNF(8)


NAME
yum2dnf - Changes in DNF compared to YUM
[...] [...]

Let’s go back to our modules. Here is how you can check the packages contained in a module:
[root@rhel8beta1 /]# yum module info postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Name : postgresql
Stream : 10 [d] Version : 20180813131250
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-contrib-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-docs-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plperl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plpython3-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-pltcl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-static-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-rpm-macros-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64


Name : postgresql
Stream : 9.6
Version : 20180813131400
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-contrib-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-docs-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plperl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plpython3-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-pltcl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-devel-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-static-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-rpm-macros-0:9.6.10-1.el8+1547+210b7007.x86_64


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

The above output shows that a module can contain several streams. Each stream represents a different version of the application.
Moreover, a module can have several profiles. A profile is a set of RPM packages selected to be installed together for a particular use case (server, client, development, minimal install, and so on).
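
As a quick reference (my own sketch, not taken from the Red Hat documentation), streams and profiles are addressed with a name:stream/profile specification, for example:
# list the streams and profiles a module provides
yum module list postgresql
# show which packages each profile of a given stream would install
yum module info --profile postgresql:10
# install a specific stream with a specific profile
yum module install postgresql:10/client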

To install an application from the default stream and with the default profile, add the ‘@’ character before the application name:
[root@rhel8beta1 /]# yum install @postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:25:55 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
Installing module profiles:
postgresql/default
Enabling module streams:
postgresql 10


Transaction Summary
=======================================================================================================================================================================================
Install 3 Packages


Total download size: 6.7 M
Installed size: 27 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

You can also use the “yum module install postgresql” command.

Quick check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

And if you want to install Postgres from an older stream and with another profile (here the Postgres 9.6 client only):
[root@rhel8beta1 /]# yum install @postgresql:9.6/client
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:33:45 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql x86_64 9.6.10-1.el8+1547+210b7007 rhel-8-for-x86_64-appstream-beta-rpms 1.4 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
Installing module profiles:
postgresql/client
Enabling module streams:
postgresql 9.6


Transaction Summary
=======================================================================================================================================================================================
Install 2 Packages


Total download size: 1.6 M
Installed size: 5.8 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

Check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#


[root@rhel8beta1 ~]# yum module list --enabled
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:04:43 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 9.6 [e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

Hmmm… only the 9.6 stream is enabled? Let’s try to enable version 10:
[root@rhel8beta1 ~]# yum module enable postgresql:10
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:08:06 ago on Thu Dec 13 07:52:30 2018.
Dependencies resolved.
======================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================
Switching module streams:
postgresql 9.6 -> 10


Transaction Summary
======================================================================================================================================


Is this ok [y/N]: y
Complete!


Switching module streams does not alter installed packages (see 'module enable' in dnf(8) for details)
[root@rhel8beta1 ~]#

It’s better now:
[root@rhel8beta1 ~]# yum module list --enabled
Failed to set locale, defaulting to C
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:13:22 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 10 [d][e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

But…
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#

…still using 9.6 :-(
After switching from one stream to another, we must upgrade the corresponding packages:
[root@rhel8beta1 ~]# yum distro-sync
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:18:40 ago on Thu Dec 13 08:34:05 2018.
Dependencies resolved.
==============================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================
Upgrading:
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M


Transaction Summary
==============================================================================================================================================
Upgrade 2 Packages


Total download size: 6.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 1.3 MB/s | 5.1 MB 00:03
(2/2): postgresql-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 371 kB/s | 1.5 MB 00:04
----------------------------------------------------------------------------------------------------------------------------------------------
Total 1.6 MB/s | 6.5 MB 00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/1
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrading : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrading : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Cleanup : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Cleanup : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Verifying : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Verifying : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 2/4
Verifying : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 3/4
Verifying : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 4/4


Upgraded:
postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64


Complete!
[root@rhel8beta1 ~]#

And now it’s fine :
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

So what?

That was only a first quick try with the AppStream functionality. What we can remember here is that with this new way of managing packages we can benefit from the parallel availability of multiple versions of software. This is due to the disassociation between the underlying OS (BaseOS), which is still managed in a traditional way, and the user space (AppStream), which is now delivered in the form of “containerized” application streams.
Up to now, when we wanted to upgrade an application to a given version, we had to think about the inter-dependencies between this application and the other ones that we did not want to update. With RHEL 8, we can now upgrade one while keeping the others at their current version.

The article Red Hat Enterprise Linux 8 – Application Streams appeared first on Blog dbi services.

Understand Oracle Text at a glance

Thu, 2018-12-13 00:19

What is Oracle Text?

Oracle Text provides indexing, word and theme searching, and viewing capabilities for text in query applications and document classification applications.

Oracle Text activation for a user

create user ORATXT identified by oratxt ;
grant ctxapp to ORATXT ;
grant execute on ctxsys.ctx_cls to ORATXT ;
grant execute on ctxsys.ctx_ddl to ORATXT ;
grant execute on ctxsys.ctx_doc to ORATXT ;
grant execute on ctxsys.ctx_output to ORATXT ;
grant execute on ctxsys.ctx_query to ORATXT ;
grant execute on ctxsys.ctx_report to ORATXT ;
grant execute on ctxsys.ctx_thes to ORATXT ;
grant execute on ctxsys.ctx_ulexer to ORATXT ;

Oracle Text configuration and usage

To design an Oracle Text application, first determine the type of queries you expect to run. This enables you to choose the most suitable index for the task. There are 4 use cases with Oracle Text:

  1. Document Collection Applications
    • The collection is typically static with no significant change in content after the initial indexing run. Documents can be of any size and of different formats, such as HTML, PDF, or Microsoft Word. These documents are stored in a document table. Searching is enabled by first indexing the document collection.
    • Queries usually consist of words or phrases. Application users can specify logical combinations of words and phrases using operators such as OR and AND. Other query operations can be used to improve the search results, such as stemming, proximity searching, and wildcarding.
    • An important factor for this type of application is retrieving documents relevant to a query while retrieving as few non-relevant documents as possible. The most relevant documents must be ranked high in the result list.
    • The queries for this type of application are best served with a CONTEXT index on your document table. To query this index, the application uses the SQL CONTAINS operator in the WHERE clause of a SELECT statement (see the index-creation sketch after this list).
    • Example of searching
    • SQL> select score(1), doc_id, html_content from docs where contains(html_content, 'dbi', 1) > 0;
       
      SCORE(1) DOC_ID HTML_CONTENT
      ---------- ---------- -----------------------------------------------------------
      4 1 <HTML>dbi services provide various IT services</HTML>
      4 9 <HTML>You can become expert with dbi services</HTML>
      4 3 <HTML>The company dbi services is in Switzerland.</HTML>

  2. Catalog Information Applications
    • The stored catalog information consists of text information, such as book titles, and related structured information, such as price. The information is usually updated regularly to keep the online catalog up to date with the inventory.
    • Queries are usually a combination of a text component and a structured component. Results are almost always sorted by a structured component, such as date or price. Good response time is always an important factor with this type of query application.
    • Catalog applications are best served by a CTXCAT index. Query this index with the CATSEARCH operator in the WHERE clause of a SELECT statement.
    • Example of searching
    • SQL> select product, price from auction where catsearch(title, 'IT', 'order by price')> 0;
       
      PRODUCT PRICE
      ----------------------------------- ----------
      License IT monitoring 199
      IT Advice 1 hour 499
      IT desk 810
      Course IT management 3999

  3. Document Classification Applications
    • In a document classification application, an incoming stream or a set of documents is compared to a pre-defined set of rules. When a document matches one or more rules, the application performs some action. For example, assume there is an incoming stream of news articles. You can define a rule to represent the category of Finance. The rule is essentially one or more queries that select document about the subject of Finance. The rule might have the form ‘stocks or bonds or earnings’.
    • When a document arrives about a Wall Street earnings forecast and satisfies the rules for this category, the application takes an action, such as tagging the document as Finance or e-mailing one or more users.
    • To create a document classification application, create a table of rules and then create a CTXRULE index. To classify an incoming stream of text, use the MATCHES operator in the WHERE clause of a SELECT statement.
    • SQL> select category_id, category_name
      from categories
      where matches(blog_string, 'Dbi services add value to your IT infrastructure by providing experts in different technologies to cover all your needs.');
       
      CATEGORY_ID CATEGORY_NAME
      ---------- -----------------------------------
      9 Expertise
      2 Advertisement
      6 IT Services

  4. XML Search Applications
    • An XML search application performs searches over XML documents. A regular document search usually searches across a set of documents to return documents that satisfy a text predicate; an XML search often uses the structure of the XML document to restrict the search.
      Typically, only that part of the document that satisfies the search is returned. For example, instead of finding all purchase orders that contain the word electric, the user might need only purchase orders in which the comment field contains electric.
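
As a complement to use cases 1 and 2 above, here is a minimal sketch of how the corresponding indexes could be created (the docs and auction tables match the examples above; the index names and the price sub-index set are made up for illustration):
-- use case 1: CONTEXT index on the document table, queried with CONTAINS
CREATE INDEX docs_ctx_idx ON docs(html_content)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('SYNC (ON COMMIT)');

-- use case 2: CTXCAT index on the catalog table, queried with CATSEARCH,
-- with a sub-index on the structured column used for sorting
EXEC ctx_ddl.create_index_set('auction_iset');
EXEC ctx_ddl.add_index('auction_iset', 'price');
CREATE INDEX auction_cat_idx ON auction(title)
  INDEXTYPE IS CTXSYS.CTXCAT
  PARAMETERS ('INDEX SET auction_iset');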

In conclusion, there are various use cases for which Oracle Text will help you with text indexing. Before implementing it, verify which type of index best suits your needs. It may also be interesting to compare it with an external text indexer like Solr, which is also able to index your database via a JDBC driver.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

The article Understand Oracle Text at a glance appeared first on Blog dbi services.

Transparent Data Encryption – Certificate in master database: is it really a good practice?

Tue, 2018-12-11 02:00

As an encryption solution in SQL Server, Transparent Data Encryption (TDE) is simple and quick to set up. That is why it is a common encryption mechanism.

TDE encrypts data at the page level, using a certificate, before SQL Server writes it to disk. It is supposed to protect your environment from scenarios where SQL Server files (backups or data files) are stolen.
By default, the certificate used for encryption is stored in the master database. But is it really a good practice?

Let’s see with a common scenario:
– One MSSQL instance where TDE is enabled for one database
– One MSSQL instance without any certificate

Configuring TDE for a database

Create a master key:

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@$$w0rd'
GO

Create a certificate to use for TDE:

USE master
GO
CREATE CERTIFICATE CertinMaster
   WITH SUBJECT = 'Self-Signed Certificate in Master',   
   EXPIRY_DATE = '20241231';  
GO  

[Screenshot: the certificate created in the master database]

Create a database encryption key in the database you want to encrypt, protected by the certificate:

USE dummy
GO
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE CertinMaster
GO

Enable encryption on the database:

USE master
GO
ALTER DATABASE dummy SET ENCRYPTION ON
GO

[Screenshot: encryption enabled on the dummy database]

At this step:
– TDE is configured for dummy database
– Certificate used for encryption is located in the master database

Restoring the encrypted database to another instance – first try

After backing up the dummy database and copying the backup to another MSSQL instance, an error is displayed when trying to restore this database without the required certificate:

[Screenshot: the restore fails because the certificate is missing]

This example shows how TDE protects data in a scenario where someone has stolen your backup file.
A similar error appears if a file attach procedure is used.
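
For illustration, a hedged sketch of what the failing restore attempt looks like (the paths and logical file names are assumptions, not taken from the original test):
-- on the target instance, which does not hold the certificate, this fails with an
-- error similar to: Cannot find server certificate with thumbprint '0x...'
RESTORE DATABASE dummy
FROM DISK = 'C:\Mountpoints\Backup\dummy.bak'
WITH MOVE 'dummy' TO 'C:\Mountpoints\Data\dummy.mdf',
     MOVE 'dummy_log' TO 'C:\Mountpoints\Data\dummy_log.ldf',
     REPLACE;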

Restoring the encrypted database to another instance – second try

After backing up the master database, let’s try to restore it on the other MSSQL environment.
Restoring a master database is a bit more complicated than restoring a user database, but not impossible.

First, it is mandatory to set the instance in single-user mode, by adding the ‘-m’ flag to the startup parameters.
PowerShell code executed on target server:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
New-ItemProperty -Path $Path -PropertyType String -Name SQLArg3 -Value "-m"

Note: depending on how many startup parameters you have (the default is 3), adjust the SQLArgX property name accordingly.

A restart of the instance is necessary to apply this parameter. Do not start the Agent service, otherwise it will grab the only connection allowed to the instance in single-user mode.

As only one connection to the instance is authorized in this mode, PowerShell is more suited to do the job for the restore:

$Query= "RESTORE DATABASE master FROM DISK = 'C:\Mountpoints\Backup\master.bak' WITH REPLACE";
Invoke-Sqlcmd -ServerInstance KERRIGAN\ARCTURUS -Query $Query

SQL Server is automatically stopped once the restore of master completes.

First, remove the single-user mode:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
Remove-ItemProperty -Path $Path -Name SQLArg3

After disabling the single-user mode, the SQL Server services (Engine + Agent) can start again.

After querying the new master database, we can see that it is effectively holding the certificate:
[Screenshot: the certificate is now present on the target instance]

Finally, when trying to restore the dummy database on this instance, it works without any trouble: the data is freely accessible.

Conclusion

For complete protection, I do not recommend using the simple form of Transparent Data Encryption. As a best practice, I recommend storing the encryption key on a third-party system using Extensible Key Management (EKM).

The article Transparent Data Encryption – Certificate in master database: is it really a good practice? appeared first on Blog dbi services.

Foglight

Mon, 2018-12-10 11:37

Like many Oracle database administrators, I am a regular user of Oracle Enterprise Manager to monitor or administer my clients’ databases. I am lucky because most of them have the Diagnostics Pack and the Tuning Pack :-).

But we always have to be curious and discover new products. As described by my colleague Nicolas Penot, I installed Foglight:

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-01/

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-02/

The installation was successful and very quick to perform. Remember that when you have to install OEM 13c, even if you know the product well and have a powerful server, you are pretty efficient if you manage to install it in half a day.

I successfully discovered two Oracle databases (versions 12.2.0.1 and 18.3.0.0). As you can see in the picture below, you can also monitor MySQL, Postgres, DB2 and SQL Server.

[Screenshot: database types supported by Foglight]

The first database summary page shows a lot of useful information (process activity, memory activity, logical reads, and so on):

[Screenshots: database summary page]

We can navigate through different menus, for example the Storage Tablespace Summary:

[Screenshot: Storage Tablespace Summary]

We can also display the datafiles I/O summary:

[Screenshot: datafiles I/O summary]

We can display the pluggable databases and have an overview of workload metrics with nice graphs:

[Screenshots: pluggable databases overview and workload metrics graphs]

Foglight also displays the alert.log error messages with four categories (Informational, Critical, Warning or Fatal).

To test this, I created false critical messages in the alert.log file, using dbms_system.ksdwrt() to generate fake ORA-00600 or ORA-04031 error messages, as sketched below:
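
A minimal sketch of how such fake entries can be written (DBMS_SYSTEM is undocumented, so only use it on a test system; the first argument selects the destination, 2 being the alert log):
SQL> exec dbms_system.ksdwrt(2, 'ORA-00600: internal error code, arguments: [test]');
SQL> exec dbms_system.ksdwrt(2, 'ORA-04031: unable to allocate shared memory (fake message for testing)');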

[Screenshot: alert.log error messages categorized in Foglight]

You can visualize, enable or disable the alarms, and edit or modify the collections; the configuration is intuitive:

[Screenshot: alarms and collections configuration]

The report menu is also interesting; many templates for the different databases are defined:

[Screenshot: report templates]

It is very difficult to make a choice between Foglight and OEM. My first opinion (although I did not test Foglight in an active production environment) is that Foglight is a monitoring solution for many types of databases (MySQL, Oracle, Postgres, SQL Server, DB2), whereas OEM is a monitoring and administration tool for Oracle databases.

Admittedly, you can add the SQL Server or MySQL plugins, or the Medora plugin for Postgres, to OEM in order to administer SQL Server, MySQL or Postgres databases, but you will never reach the powerful functionality offered by OEM to monitor or administer Oracle databases.

The big advantage of both OEM and Foglight is their threshold and notification functionality and their reporting facilities. I did not test SQL PI in Foglight very much, and I am wondering whether, as with Oracle, we can get a historical graph of what happened the week before, display the execution plan and, of course, run the SQL Tuning Advisor to propose another execution plan, for example.

The time dedicated to administering Foglight seems to be equivalent to OEM. You cannot install those enterprise products and just let them live: like OEM, Foglight seems to be a complex system and requires significant time to run effectively.

If you do not have a lot of databases or a big budget, you can use simple SQL scripts to keep you alerted, as sketched below. If you have several different types of databases to monitor, it might be a good idea to test and use Foglight. If most of your databases are Oracle based and you have paid for the Tuning and Diagnostics Packs, use OEM to fully benefit from the administration and monitoring console.
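
For example, a minimal sketch of such a home-made alert query (the 90% threshold is an arbitrary assumption; the output would typically be mailed by a scheduled job):
-- tablespaces above 90% used
SELECT tablespace_name, ROUND(used_percent, 1) AS pct_used
FROM   dba_tablespace_usage_metrics
WHERE  used_percent > 90
ORDER  BY used_percent DESC;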

 

The article Foglight appeared first on Blog dbi services.

First immersion in the Docker Conference EU 2018

Sat, 2018-12-08 12:02

In short, a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on-lab sessions and hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.

[Photo: DockerCon EU 2018]

Anyway, the workshops and hands-on labs were very interesting, especially if you like to mix theory and practice. But I have to admit that we sometimes struggled to keep up with the pace of some workshops. During the workshops we ran into what I call the “context switch” issue: following the trainer’s explanations and doing the exercises at the same time :) The Migrating .NET Applications to Docker workshop with Elton Stoneman (Docker) was one that comes to mind in this case :)

As database specialists at dbi services, we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops on this subject, including the Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s session with Don Stewart (Docker) and Mark Church (Docker), as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the different possible solutions to implement file-based, block-based and object-based storage in order to address different workload patterns, including file server, OLTP, Big Data, etc. The container storage landscape is actually large and vendor-specific, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) in order to offer a simple, community-driven approach and a more predictable and functional interface for the most common use cases. Let’s see what happens in the future, but my guess (speculation mode) is that for “specific” applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of the concerns.

[Slide: container storage solutions landscape]

Even if containers are formally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructures are now mature enough to handle database workloads, and these sessions reinforced my strong belief that Docker Swarm or K8s are production ready for databases, at least from a storage perspective.

We also got the opportunity to attend workshops and sessions around container orchestrator topics, mainly Docker Swarm and K8s orchestration. It was interesting to see the same question often raised by attendees during these sessions: should we use Swarm over K8s, or vice versa? This is also a question we are going to ask ourselves for a dbi services internal project, by the way, and obviously there is no black-or-white answer. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday 4th December 2018. We got feedback from customer stories like Citizens Bank showing that the orchestrator choice depends on different factors, and in the context of this customer they are using both through Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – Features and Workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker). Finally, probably one of my favorite workshops was Troubleshooting with Sysdig by Michael Ducy. Sysdig is one of the well-known monitoring / troubleshooting tools for containers on the market. Let’s say that it was a subtle combination of deep-dive immersion into Linux kernel principles and practical scenarios about using the sysdig tools to fix container issues, as confirmed by my working desktop below:

[Screenshot: working desktop during the sysdig troubleshooting workshop]
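
For readers who have never played with it, a minimal sketch of the kind of sysdig commands we used during the scenarios (the container name is a made-up assumption):
# busiest processes inside a given container, with container context added by -pc
sudo sysdig -pc -c topprocs_cpu container.name=webapp
# stream the file descriptor activity (files, sockets) of that container
sudo sysdig -pc -c echo_fds container.name=webapp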

In addition to the sessions, workshops and hands-on labs, new announcements were made at DockerCon EU 2018 during the general sessions, with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as main speakers.

[Photo: DockerCon EU 2018 general session]

The first general session announcements included new tools such as docker-app and docker-assemble, the enhancement of docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most expected one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers was a real success, but the product was not really designed to scale to Enterprise-class environments, and this is basically what the Docker Desktop Enterprise product is supposed to address.

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited, developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is no longer just a visionary developer whim and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let’s finish with the second general session, which was more Docker community-oriented, and I know how important a community is to drive contributions and bring people together. Like Microsoft’s MVPs, Docker Captains are the Docker counterpart and were thanked for their wonderful contributions during this event. But obviously contribution goes beyond MVPs, ACEs or Docker Captains, and Kal De (CTO, EVP, Product Development) explained how to contribute to the different Docker projects and then showed contribution figures from the community on GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the Docker conference world was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions set by the company. I also appreciated the discussions and feedback from other attendees during the networking track, which will help us prepare our future challenges on this topic.

The article First immersion in the Docker Conference EU 2018 appeared first on Blog dbi services.

PostgreSQL : Get my database server name

Fri, 2018-12-07 16:32

I was looking for a built-in function to get the hostname of the server hosting my PostgreSQL cluster.
But it seems that there is no such built-in function. Looking at the available extensions, I found the hostname extension, which allows retrieving the database server host name.
In this blog I explain how to install and use it. The installation is very easy. The first step is to download it here.
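
For reference, a hedged sketch of the download and build prerequisites (the GitHub location is my assumption, not taken from the original post; the extension builds with PGXS, so pg_config of the target installation must be in the PATH):
# assumption: the extension lives in the petere/pg-hostname repository on GitHub
wget https://github.com/petere/pg-hostname/archive/master.zip -O pg-hostname-master.zip
unzip pg-hostname-master.zip
cd pg-hostname-master
which pg_config    # must point to the PostgreSQL installation you build for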

Then let’s go to the directory where the archive was decompressed and run the make command:

14:46:26 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:48 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I. -I./ -I/u01/app/postgres/product/95/db_0/include/server -I/u01/app/postgres/product/95/db_0/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o src/hostname.o src/hostname.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -L/u01/app/postgres/product/95/db_0/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_0/lib',--enable-new-dtags  -shared -o src/hostname.so src/hostname.o
cp sql/hostname.sql sql/hostname--1.0.0.sql

Once done, let’s run the command make install:

pg-hostname-master/ [PG1] make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/doc/extension'
/bin/install -c -m 644 .//hostname.control '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 644 .//sql/hostname--1.0.0.sql .//sql/hostname--unpackaged--1.0.0.sql  '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 755  src/hostname.so '/u01/app/postgres/product/95/db_0/lib/'
/bin/install -c -m 644 .//doc/hostname.mmd '/u01/app/postgres/product/95/db_0/share/doc/extension/'
14:47:29 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

If everything is OK, we should now have the extension files in our $PGHOME/share/extension directory:

15:02:39 postgres@dbi-pg-essentials:[PG1] ls -ltra *hostname*
-rw-r--r--. 1 postgres postgres 140 Dec  5 14:47 hostname.control
-rw-r--r--. 1 postgres postgres  96 Dec  5 14:47 hostname--1.0.0.sql
-rw-r--r--. 1 postgres 

And that’s all. We now just have to create the extension in the database:

postgres=# CREATE EXTENSION hostname;
CREATE EXTENSION

And now we can get the hostname of our server:

postgres=# SELECT hostname();
     hostname
-------------------
 dbi-pg-essentials
(1 row)

postgres=#

The article PostgreSQL : Get my database server name appeared first on Blog dbi services.

PostgreSQL 12 : New option --socketdir for pg_upgrade

Fri, 2018-12-07 16:30

PostgreSQL 12 is under development, but tests can already be done. Steps to install this version can be found in this dbi blog. Many thanks to Daniel.
While reading the documentation I found that there is a new option for pg_upgrade. This new option is --socketdir.
Why this option?
In fact, the path name of a Unix-domain socket is limited to roughly 107 characters (the 108-byte sun_path buffer minus the terminating null byte). Before PostgreSQL 12, the sockets created for the temporary postmasters started by pg_upgrade were always placed in the current directory. But depending on the current directory, the resulting path might be too long for a socket name. In PostgreSQL 12 the default location is still the current working directory, but the --socketdir option now allows us to specify another location.
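
A quick way to check whether your current directory would hit that limit (a small sketch; 50432 is the default port pg_upgrade uses for its temporary postmasters, as shown in the error output below):
# length of the socket path pg_upgrade would create in the current directory
echo -n "$PWD/.s.PGSQL.50432" | wc -c    # must not exceed 107 bytes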

To better understand I am going to upgrade from PostgreSQL 10 to PostgreSQL 12

20:59:44 postgres@dbi-pg-essentials:/u02/pgdata/PG12TEST/ [PG12TEST] psql -U postgres -d postgres
psql (12devel dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11
(1 row)

postgres=#

Now let’s create a very deeply nested directory

mkdir -p ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd

And let’s cd into this new directory and try to do an upgrade.

$ export PGDATAOLD=/u02/pgdata/PG3
$ export PGBINOLD=/u01/app/postgres/product/10/db_1/bin
$ export PGDATANEW=/u02/pgdata/PG12TEST
$ export PGBINNEW=/u01/app/postgres/product/12dev/db_0/bin

When running pg_upgrade with the check option, we get the following error:

$ pg_upgrade -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: Unix-domain socket path "/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/.s.PGSQL.50432" is too long (maximum 107 bytes)

could not connect to source postmaster started with the command:
"/u01/app/postgres/product/10/db_1/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/u02/pgdata/PG3" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd'" start
Failure, exiting
$

The path name for the socket is indeed too long.
If we use this new option, we can specify another location for the sockets, and we can see that the checks are now successful:

$ pg_upgrade -c --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

And then we can upgrade the cluster using this new option, while still staying in this newly created directory:

$ pg_upgrade  --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$
Conclusion:

In this blog we have seen the new --socketdir option of pg_upgrade. It is good to know that this option exists, but in most cases the current working directory should be fine for an upgrade.

The article PostgreSQL 12 : New option --socketdir for pg_upgrade appeared first on Blog dbi services.
