Wim Coekaerts

Oracle Blogs

Oracle Ksplice patch for CVE-2018-3620 and CVE-2018-3646 for Oracle Linux UEK r4

Wed, 2018-08-15 10:52

Intel disclosed a set of vulnerabilities yesterday known as L1TF (L1 Terminal Fault). You can read a summary here.

As you can see from the blog, we released a number of kernel updates for Oracle Linux along with a Ksplice patch for the same. I wanted to take the opportunity to show off again how awesome Oracle Ksplice is.

The kernel fix for L1TF comprised about 106 individual patches: 54 files changed, 2079 insertions(+), 501 deletions(-), with a Ksplice kernel module of about 1.2 MB. All of this went into a single Ksplice patch!

It applied in a few microseconds. On one server I have in Oracle Cloud, I always run # uptrack-upgrade manually; on another server I have autoinstall=yes.

# uptrack-upgrade
The following steps will be taken:
Install [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Go ahead [y/N]? y
Installing [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.18.1.el7uek

My other machine was updated automatically and I didn't even know it. I had to run # uptrack-show to see that the patch was already applied. No reboot, no impact on anything I run here. Just autonomously done. Patched. Current.

Folks sometimes ask me about live patching capabilities from other vendors. Well, we have had the above for every errata kernel released since the Spectre/Meltdown CVEs (as this is a layer on top of that code), available at the same time as the kernel RPMs, as an integrated service. 'nuff said.

Oh and everyone in Oracle Cloud, remember, the Oracle Ksplice tools (uptrack) are installed in every OL image by default and you can run this without any additional configuration (or additional charges).

Oracle Ksplice for Oracle Linux in Oracle Cloud

Thu, 2018-08-09 11:38

My favorite topic... Ksplice! Just a friendly reminder that every Oracle Linux instance in Oracle Cloud comes with Oracle Ksplice installed and enabled by default, at no additional cost beyond basic compute.

When you run an OL instance, the uptrack tools (uptrack-upgrade, uptrack-uname, etc.) are on the base image. The config file (/etc/uptrack/uptrack.conf) contains an access key that enables any cloud instance to talk to our Ksplice service without registration. So as soon as you log into your system, you can run # uptrack-upgrade or # uptrack-show.

uptrack doesn't run automatically by default; you are expected to run # uptrack-upgrade manually. What this does is the following: it goes to our service, checks which Ksplice patches are available for your running kernel, and asks whether you want to install them. If you add -y, it will just go ahead and install whatever is available without prompting you.
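For reference, the relevant parts of /etc/uptrack/uptrack.conf look roughly like this. This is an illustrative sketch only; the exact layout on your image may differ, but the key names are the ones discussed in this post:

```ini
# /etc/uptrack/uptrack.conf (illustrative sketch)

# Pre-provisioned key that lets a cloud instance talk to the
# Ksplice service without registration.
accesskey = <provided on the Oracle Cloud image>

# Set to yes to have available patches applied automatically.
autoinstall = no

# Uncomment to also fetch new patches when rebooting into the same kernel.
# upgrade_on_reboot = yes
```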

uptrack-show lists the patches that are already applied on your running kernel/system.

uptrack-uname shows the 'effective' kernel version: the kernel version you are effectively updated to with respect to relevant CVEs and critical fixes.

Here's a concrete example of my OCI instance:


# uname -a
Linux devel 4.1.12-124.14.5.el7uek.x86_64 #2 SMP Fri May 4 15:26:53 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

My instance runs UEK R4 (4.1.12-124.14.5); that's the actual RPM installed and the actual kernel the instance booted with.


# uptrack-uname -a
Linux devel 4.1.12-124.15.1.el7uek.x86_64 #2 SMP Tue May 8 16:27:00 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

I already ran uptrack-upgrade earlier, so a number of patches are applied, bringing me up to the same level as 4.1.12-124.15.1. So instead of installing the 4.1.12-124.15.1 kernel-uek RPM and rebooting, running uptrack-upgrade a while back got me right to that level without affecting my availability one bit.

I did not enable auto-install, and since I ran that command a while back, a good number of (some serious) CVEs have been fixed and released. So it's time to update... but I so hate reboots! Luckily, there's no need for one.

What's already installed? Let's see...


# uptrack-show
Installed updates:
[1zkgpvff] KAISER/KPTI enablement for Ksplice.
[1ozdguag] Improve the interface to freeze tasks.
[nw9iml90] CVE-2017-15129: Use-after-free in network namespace when getting namespace ids.
[i9x5u5uf] CVE-2018-5332: Out-of-bounds write when sending messages through Reliable Datagram Sockets.
[dwwke2ym] CVE-2017-7294: Denial-of-service when creating surface using DRM driver for VMware Virtual GPU.
[cxke2gao] CVE-2017-15299: Denial-of-service in uninstantiated key configuration.
[nwtwa8b3] CVE-2017-16994: Information leak when using mincore system call.
[hfehp9m0] CVE-2017-17449: Missing permission check in netlink monitoring.
[7x9spq2j] CVE-2017-17448: Unprivileged access to netlink namespace creation.
[lvyij5z2] NULL pointer dereference when rebuilding caches in Reliable Datagram Sockets protocol.
[s31vmh6q] CVE-2017-17741: Denial-of-service in kvm_mmio tracepoint.
[3x6jix1s] Denial-of-service of KVM L1 nested hypervisor when exiting L2 guest.
[d22dawa6] Improved CPU feature detection on microcode updates.
[fszq2l5k] CVE-2018-3639: Speculative Store Bypass information leak.
[58rtgwo2] Device Mapper encrypted target: Support big-endian plain64 IV.
[oita8o1p] CVE-2017-16939: Denial-of-service in IPSEC transform policy netlink dump.
[qenhqrfo] CVE-2017-1000410: Information leak in Bluetooth L2CAP messages.
[965vypan] CVE-2018-10323: NULL pointer dereference when converting extents-format to B+tree in XFS filesystem.
[drgt70ax] CVE-2018-8781: Integer overflow when mapping memory in USB Display Link video driver.
[fa0wqzlw] CVE-2018-10675: Use-after-free in get_mempolicy due to incorrect reference counting.
[bghp5z31] Denial-of-service in NFS dentry invalidation.
[7n6p7i4h] CVE-2017-18203: Denial-of-service during device mapper destruction.
[okbvjnaf] CVE-2018-6927: Integer overflow when re-queuing a futex.
[pzuay984] CVE-2018-5750: Information leak when registering ACPI Smart Battery System driver.
[j5pxwei9] CVE-2018-5333: NULL pointer dereference when freeing resources in Reliable Datagram Sockets driver.
Effective kernel version is 4.1.12-124.15.1.el7uek

So the above patches were installed last time. Quite a few! All applied without affecting availability.

OK, what else is available? A whole bunch; best apply them!


# uptrack-upgrade
The following steps will be taken:
Install [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Install [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Install [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Install [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Install [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfilter TCPMSS target.
Install [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Install [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Install [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Install [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Install [5g4kpl3f] Denial-of-service when removing USB3 device.
Install [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Install [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Install [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Install [fli7048b] Incorrect failover group parsing in RDS/IP.
Install [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Install [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Install [7ag5j1qq] Device mapper path setup failure on queue limit change.
Install [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Install [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Install [ls429any] Denial-of-service in RDS user copying error.
Install [u79kngd9] Denial of service in RDS TCP socket shutdown.
Go ahead [y/N]? y
Installing [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Installing [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Installing [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Installing [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Installing [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfilter TCPMSS target.
Installing [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Installing [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Installing [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Installing [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Installing [5g4kpl3f] Denial-of-service when removing USB3 device.
Installing [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Installing [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Installing [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Installing [fli7048b] Incorrect failover group parsing in RDS/IP.
Installing [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Installing [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Installing [7ag5j1qq] Device mapper path setup failure on queue limit change.
Installing [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [ls429any] Denial-of-service in RDS user copying error.
Installing [u79kngd9] Denial of service in RDS TCP socket shutdown.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.17.2.el7uek
Done!

I now have a total of 46 Ksplice updates applied on this running kernel.


# uptrack-uname -a
Linux devel 4.1.12-124.17.2.el7uek.x86_64 #2 SMP Tue Jul 17 20:28:07 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

I'm now current with the 'latest' UEK R4 version in terms of CVEs.

Now, we don't provide driver updates or the like in these patches, only critical fixes and security fixes. So the kernel is not identical to 4.1.12-124.17.2 in every sense, but it certainly is where it matters on your running system: everywhere bad things could happen!

Since I don't want to forget to run the update, I am going to enable Ksplice to run through a cron job: just edit /etc/uptrack/uptrack.conf and change autoinstall = no to autoinstall = yes.

A few other things:

When Ksplice patches are installed and you do end up rebooting, the installed patches will be automatically applied again at boot time, provided you reboot into the same kernel. Note: this will not automatically go look for new patches.

If you also want it to check for new updates at boot, uncomment upgrade_on_reboot = yes; that will make it happen.
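The two config changes above can be sketched as follows. To keep this safe to run anywhere, the commands operate on a local sample copy rather than the real /etc/uptrack/uptrack.conf:

```shell
# Create a local sample with the two settings discussed above
# (on a real system you would edit /etc/uptrack/uptrack.conf itself).
cat > uptrack.conf.sample <<'EOF'
autoinstall = no
#upgrade_on_reboot = yes
EOF

# Turn on automatic patch installation.
sed -i 's/^autoinstall = no/autoinstall = yes/' uptrack.conf.sample

# Uncomment upgrade_on_reboot so a reboot also checks for new patches.
sed -i 's/^#upgrade_on_reboot = yes/upgrade_on_reboot = yes/' uptrack.conf.sample

cat uptrack.conf.sample
```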

I removed all installed Ksplice updates (online, using # uptrack-remove --all) and now will time reapplying all 46:


# time uptrack-upgrade -y
...
real 0m11.705s
user 0m4.273s
sys 0m4.807s

So, 11.7 seconds to apply all 46. Patches are applied one after the other; there is no system halt for anywhere near that long. Each individual patch halts the system for only a few microseconds (not noticeable), followed by a short pause before continuing to the next, but that pause is just the uptrack tool, not your server instance.

So: enable autoinstall, enable upgrade_on_reboot = yes, and you have an Oracle Linux system you can just leave running, automatically current with CVEs and critical fixes, without having to worry. Autonomous Oracle Linux patching. Pretty cool!

Some vendors are trying to offer 'live patching', but those offerings don't come close. They validate the importance of this technology and feature set, but they're not anywhere near a viable alternative.

Have fun!


Oracle Linux containers security

Wed, 2018-08-01 13:05

I recently did a short webcast about Oracle Linux and containers, with some suggestions around best practices and some security considerations.

The webcast had just a few slides, and some of the feedback I received was that the talk could have used more textual support, so I promised to write up a few of the things that came up during it. Here it is:

We have been providing Oracle Linux, along with great support, for nearly 12 years. Over those years we have added many features and enhancements: through upstream contributions picked up by the various open source projects distributed as part of Oracle Linux (in particular UEK), and through additional features and services such as Oracle Ksplice or DTrace (released under the GPL).

In terms of virtualization, we’ve been contributing to Xen since 2005. Xen is the hypervisor used in Oracle VM. More recently, we have also been heavily focused on kvm and qemu in Linux. And of course, we have Oracle VM VirtualBox. So a lot of virtualization work has been going on for a very long time, and that will continue to be the case. We have many developers working on this full time (and upstream).

Container work:

We were early adopters of lxc, and were one of the first, if not the first, to certify lxc with enterprise applications such as our database and applications. This was before Docker existed.

Lxc was the initial push toward mainstreaming container support in Linux. It helped drive a lot of Linux kernel projects around resource management, namespace support, and all the cgroups work; lots of isolation support really got its start around this time. Many developers contributed, and a number of OpenVZ concepts were proposed for merging into the mainline kernel.

A few years after lxc, Docker came to the forefront and really made containers popular; talk about mainstream. Again, we provided Docker from the very beginning, and saw a lot of potential in the concept of lightweight small images on Linux for our product set.

Today, everyone talks about Kubernetes, Docker or Docker alternatives such as rkt, and microservices. We provide Oracle Container Services for use with Kubernetes and Oracle Container Runtime for Docker to our customers as part of Oracle Linux subscriptions. Oracle also has various Oracle Cloud services that provide Kubernetes and Docker orchestration and automation. And, of course, we do a lot of testing and support many Oracle products running in these isolation environments.

The word isolation is very important.

For many years I have been using the word isolation when it comes to containers, not virtualization. There is a big distinction.

Running containers in a Linux environment is very different from running Solaris Zones, or running VMs with kvm or Xen. Kvm and Xen provide "real" virtualization: you create a virtual compute environment and boot an entire operating system inside (it has a virtual BIOS, boots a kernel from a virtual disk, etc.). Sure, there are some optimizations and tricks around paravirtualization, but for the most part it’s a virtual machine on a real machine. The way Solaris Zones is implemented is also not virtualization, since you share the same host kernel among all zones. But the Solaris Zones implementation is a full-fledged feature: a top-to-bottom isolation layer inside Oracle Solaris. You create a zone and the kernel does it all for you right then and there: it creates a completely separate OS container, with all the isolation provided across the board. It’s great, has been around for a very long time, is used widely by almost every Oracle Solaris user, and it works well. It provides a very good level of isolation for a complete operating system environment, just like a VM provides a full virtual hardware platform for one.

Linux containers, on the other hand, are implemented very differently. A container is created using a number of different Linux kernel features, and you can provide isolation at different layers. So you can create a Linux container that acts very, very similar to a Solaris Zone, but you can also create a Linux container that shares a tremendous amount with other containers or just other processes. The Linux resource manager and the various namespace implementations let you pick and choose: you can share what you want, and you can isolate what you want. You have a PID namespace, IPC namespace, user namespace, net namespace, and so on; each of these can be used or combined in different ways. So there’s no CONTAINER config option in Linux, no single container feature, but there are tools, libraries, and programs that use these namespaces and cgroups to create something that looks like a completely isolated environment akin to Zones.
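You can see these namespaces directly from userspace: every process exposes its memberships under /proc. A quick look, needing no special privileges:

```shell
# Each entry is a namespace this process belongs to; two processes
# sharing a namespace show the same inode behind these symlinks.
ls /proc/self/ns/

# Print this shell's PID-namespace identity (e.g. pid:[4026531836]).
readlink /proc/self/ns/pid
```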

Tools like Docker and lxc do all the "dirty work" for you, so to speak. They also provide you with options to change that isolation level up and down.

Heck, you can create a container environment using bash! Just echo some values into a bunch of cgroup files and off you go. It’s incredibly flexible.
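For example, you can inspect where the kernel has placed the current shell in the cgroup hierarchy. The write commands that would create a minimal memory-limited group are shown only as comments, since they need root and assume a cgroup v1 mount at /sys/fs/cgroup:

```shell
# Which cgroups is this shell in right now?
cat /proc/self/cgroup

# With root and cgroup v1 mounted, a bare-bones "container" limit is just
# a few echoes (sketch only, not executed here):
#   mkdir /sys/fs/cgroup/memory/demo
#   echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
#   echo $$ > /sys/fs/cgroup/memory/demo/tasks
```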

Having this flexibility is great, as it allows for things like Docker (just isolate a process, not a whole operating environment). You don’t have to start with /bin/init or /bin/systemd and bring up all the services. You can literally just start httpd, and it sees nothing but itself in its process namespace. Or, sure, you can start /bin/init and get a whole environment, like what you get by default with lxc.

I think Docker (and Docker-like projects such as rkt) is the best user of all these namespace enhancements in the Linux kernel. I also think that, because the Linux kernel developers implemented resource and namespace management the way they did, a project like Docker could take shape; otherwise, it would have been very difficult to conceive. It let us enter a new world of: just start an app, distribute the app with only the libraries it needs, isolate an app from everything else, package things as small as possible as a complete standalone unit.

This, in turn, really helped the microservices concept, because it makes micro really... micro. Docker-like images give application developers much more flexibility: you can now have different applications on the same host with different library needs, or different versions of the same application, without having to mess with PATH settings, carve out directories, and see one big mess of things. Sure, you can do that with VMs, but the drawback of a VM is (typically) that you bring in an entire OS (kernel, operating environment) just to start an app, which can cause a lot of overhead. Process isolation along with small portable images gives you an incredible amount of flexibility and... sharing.

With that flexibility also comes responsibility: whereas one would run on the order of 10-20 VMs on a given server, you can run maybe 30-50 containerized OS environments (using lxc), but you could run literally thousands of application containers using Docker. They are, after all, just a bunch of OS processes with some namespaces and isolation. And if all they run is the application itself, without the surrounding OS services, you have much less overhead per app than traditional containers.

If you run very big applications that need 100% performance and power and the best ‘isolation’... you run a single app on a single physical server.

If you have a lot of smaller apps and you’re not worried about isolation, you can just run those apps together on a single physical server. Best performance, but harder to manage.

If you have a lot of smaller environments to host with different OSs or different OS levels, you typically just run lots of VMs on a physical server. Each VM boots its own kernel, has its own virtual disk, memory, etc., and you can scale: 4-16 VMs is typical.

If you want the best performance and you don’t need the strong isolation of separate kernels and independent OS releases down to the kernel version (or even mixing Windows and Linux, or Oracle Linux and Ubuntu, etc.), then you can consider containers. Super lightweight, super scalable, and portable.

The image can range from a full OS image (all binaries and libraries installed, like a VM or physical OS install) to just an app binary, or an app binary plus the libraries it needs. If you create a statically linked binary, you can have a container that is exactly one file. Isn't that awesome?
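The one-file container idea can be sketched with a two-stage Dockerfile; the Go toolchain, file names, and image tags here are illustrative assumptions:

```dockerfile
# Stage 1: build a statically linked binary (CGO disabled).
FROM golang:1.10 AS build
COPY main.go /src/main.go
RUN CGO_ENABLED=0 go build -o /app /src/main.go

# Stage 2: start from an empty image; the result contains exactly one file.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```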

Working on operating systems at a company that is also a major cloud provider is really great. It gives us direct access to scale, very large scale, and also a direct requirement around security. As a cloud provider, we have to work very, very hard to ensure security in a multi-tenant environment and protect customers' data from one another. Deploying systems in an enterprise can also be at reasonable scale, and of course security is (or should be) very important there too, but the single-tenancy aspect reduces the complexity to a certain extent.

Oracle Linux is used throughout Oracle Cloud as the host for running VMs, as the host for running container services and other services, in our PaaS and SaaS stacks, etc. We work very closely with the cloud development teams to provide the fastest, most scalable solutions without compromising security. We want VMs to run as fast as possible, and we want to provide container services, but we also make sure that a container running for tenant A doesn’t, in any way, expose any data to a container running for tenant B.

So let’s talk a little bit about security around all this. Security breaches are up: a significant increase in data breaches every month, constant hacking attempts. Just start a server or a VM with a public IP on the internet and watch your log files; within a few minutes you'll see login attempts and probes. It’s really frightening.

Enterprises used to have hundreds, maybe thousands, of servers, and you have to keep the OS and applications current with security fixes. While reasonably large, that's still manageable. Then add virtualization and you increase the number of instances by a factor (10,000+), drastically increasing your exposure. Then go up another factor or two with microservices and containers deployed across huge numbers of servers (100,000+), and security becomes ever more important and more difficult. Do you even know where they run, what they run, who owns them?

On top of all that, in the last eight or so months: Spectre and Meltdown, removing years of assumptions and optimizations everyone had relied upon. We suddenly couldn't trust VMs on the same host to be isolated well enough, or processes not to snoop on other processes, without applying code changes on the OS side, or in some cases even in the applications, to prevent exposure.

Patches get introduced; performance drops. And it’s not always clear what the potential exposure is, where you really have to worry, and where you might not have to worry too much.

When it comes to container security, there are different layers:

Getting images / content from external (or even internal) sites

There are various places where developers can download third-party container images. Whereas in the past one would download source code for a project or a specific application, these container images (let’s call them Docker images) are somewhat magical black boxes: you download a filesystem layer, or a set of layers, with tons of files inside, but you don’t typically look around. You pull an image and start it, not quite knowing what’s inside. These things get downloaded onto a laptop and executed... Do you know what’s inside? Do you know what it’s doing? Has it been validated? Scanned?

Never trust what you just download from random sites. Make sure you download things that are signed or checksummed and come from reputable places. Good companies run vulnerability scanners such as Clair or Qualys as part of the build process and make sure developers have good secure-coding practices in place. When you download an image published on Oracle Container Registry, it contains code that we built, compiled, tested, scanned, and put together. When you download something from a random site, that might not be the case.
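The checksum side of this can be sketched with sha256sum. The file here is a stand-in for a downloaded image archive; in a real workflow, the digest would come from the publisher over a trusted channel rather than being generated locally:

```shell
# Stand-in for a downloaded image archive.
echo "pretend image layer" > image-layer.tar

# The publisher would ship this digest alongside the download.
sha256sum image-layer.tar > image-layer.tar.sha256

# Verify before use; any tampering with the file makes this fail.
sha256sum -c image-layer.tar.sha256
```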

One problem: it is very easy to get things from the outside world. # docker pull, by default, goes to Docker Hub, and companies can’t easily put development environments in place that prevent that. One thing we are working on with Oracle Container Runtime for Docker is adding support for access control on Docker image repos: you can lock down which repos are accessible and which aren’t. For instance, your Docker repo list can be an internal site only, not Docker Hub.

When building container images you should always run some form of image scanner.

We are experimenting with Notary: using it to digitally sign content so that you can verify the images you pull down. We are looking at providing a Notary service and the tools for you to build your own.

Building images

Aside from using Clair or Qualys in your own CI/CD environment, you also have to make sure that you update the various layers (OS, library, application) with the latest patches. Security errata are released on a regular basis. With normal OSs, whether bare metal or VMs, sysadmins run management software that updates packages regularly and keeps things current; it’s relatively easy, and it is easy to see what is installed on a given server. There might be an availability impact for kernel updates, but for the most part it is a known problem. Updating containers, while technically easy, you can argue (just rebuild your images), means you have to go to all servers running those containers and bring them down and back up. You can’t just update a running image; the ability to change anything at runtime is much more limited than with an OS instance running an application. From a security point of view, you have to consider that. Before you start deploying containers at scale, you have to decide on your patch strategy: how often you update your images, how you distribute them, how you know which containers are running, and which versions and layers they run. Sorting this out after a critical vulnerability hits will introduce delays, have a negative impact, and potentially create large exposure.
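A patch-strategy pass over a fleet of images might look like this sketch. The registry name, image list, and scan-image hook are all hypothetical, and the commands are printed rather than executed, so it is safe to try anywhere:

```shell
registry="registry.example.com"

# Rebuild each image against freshly patched base layers, scan it,
# then push the result (commands are echoed only).
for img in base-os app-api app-worker; do
  echo "docker build --pull -t $registry/$img:latest $img/"
  echo "scan-image $registry/$img:latest"    # e.g. a Clair or Qualys hook
  echo "docker push $registry/$img:latest"
done
```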

So - have a strategy in place to update your OS and application layers with security fixes, have a strategy in place on how to distribute these new image updates and refresh your container farm.

Lock down

If you are a sophisticated user or developer, you have the ability to add very fine-grained controls. Docker has options like privileged containers, which give extra access to devices and resources; always verify that anything started privileged has been reviewed by a few people. Docker also provides control over Linux capabilities such as mknod, setgid, chroot, or nice. Look at the default capabilities that are defined and, where possible, remove any and all that are not absolutely needed.
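The drop-everything-then-add-back pattern looks like this; the image name is hypothetical, and the command is only printed here, not executed:

```shell
# Start from zero capabilities and add back only what the app needs
# (here: binding to a port below 1024), and forbid privilege escalation.
echo docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  myorg/webapp:1.0
```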

Look into the use of SELinux policies. While SELinux operates at the host level only, it provides an additional security blanket. Create policies to restrict access to files or operations.

There is no SELinux namespace support yet. This is an important project to work on, and we have started investigating it, so that you can use SELinux within a container in its own namespace, with its own local container policies.

Something we also use a lot inside Oracle: seccomp. Seccomp lets you filter syscalls (an allow list). Now, when you really lock down your syscalls with a large filter list, there can be a bit of a performance penalty, so we’re doing development work to improve seccomp’s filter handling in the kernel. This will show up in future versions of upstream Linux and also in our UEK kernel.

What’s nice about seccomp is that if you have an app and you know exactly which few syscalls it requires, you can enforce that only those syscalls will ever be allowed to execute; nothing else gets through, even if a rogue library were somehow loaded and tried to do something.
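A Docker seccomp profile that enforces such an allow list is a small JSON file. This sketch is deliberately minimal, and the syscall list is illustrative only; a real application needs considerably more:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "fstat", "brk", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

A profile like this is applied with docker run --security-opt seccomp=profile.json; any syscall not on the list fails with an error instead of executing.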

So if you really need the highest level of lockdown, a combination of these three is ideal: use seccomp to restrict the system calls exposed to your container, use SELinux policies and labels to control what running processes can access and do, and use capabilities alongside seccomp to prevent privileged commands and run everything non-privileged.

The third major part is the host OS.

You can lock down your container images and such, but remember that these instances all (typically) run on a Linux server. That server runs an OS kernel and OS libraries (glibc), and security vulnerability fixes need to be applied there too. Always ensure that you apply errata on the host OS. I would always recommend that customers use Oracle Ksplice with Oracle Linux.

Oracle Ksplice is a service that provides the ability for users to apply critical fixes (whether bugs or vulnerabilities) while the system is up and running with no impact to the applications (or containers).

While not every update can be provided as an online patch, we’ve had a very, very high success rate; even very complex code changes have been fixed or changed using Ksplice.

We have two areas we can address: the kernel, the original functionality since 2009, and, for a number of years now, a handful of userspace libraries. We are particularly focused on libraries in the critical path, glibc being the most obvious one along with openssl.

While some aspects of security are the ability to lock down systems and reduce the attack surface, implement best practices, protect the source of truth, and prevent unauthorized access as much as possible, if applying security fixes is difficult and has a high impact on availability, most companies and admins will take their time applying them, potentially waiting weeks, months, or even longer to schedule downtime. Keep in mind that with Ksplice, your host OS (whether running kvm or just containers) can be patched while all your VMs and/or containers continue to run without any impact whatsoever. We have a unique ability to significantly reduce the service impact of staying current with security fixes.

Some people will be quick to say that live migration can help with upgrading VM hosts: migrate the guests off to another server and reboot the freed-up host. While that’s definitely a possibility, it’s not always possible to offer live migration at scale, and it’s certainly difficult in a huge cloud infrastructure.

In the world of containers, where we are talking about a 10-100 fold (or greater) increase in the number of instances per server, this is even more critical. Also, there is no live migration for containers yet; there is some experimental work, but nothing production quality for migrating a container, Docker instance, or Kubernetes pod from one server to another.

As we look further into the future with Ksplice, we are investigating more userspace library patching and how to make that scale at the container level: the ability to apply, for instance, glibc fixes within container instances directly, without downtime. This is a very difficult problem to solve because there can be hundreds of different versions of glibc running, and we also have to ensure images are updated on the fly so that a new instance starts up already 'patched'. It is a very dynamic environment.

This brings me to a final project we are working on in the container world:

Project Kata is a hybrid model of deploying applications, with the flexibility and ease of use (small, low overhead) of containers and the security level of VMs. The scalability of Kata containers is somewhere in between VMs and native containers – on the order of low 1000s, not high 1000s. Startup time is incredibly fast: starting a VM typically takes 20-30 seconds, while starting a Docker instance takes on the order of a few milliseconds. Starting a Kata container takes between half a second and 3 seconds, depending on the task you run. A Kata container effectively creates a hardware virtualization context (like KVM uses) and boots a very, very optimized Linux kernel that can start up in a fraction of a second, with a tiny ramdisk image that can execute the binaries in your container image. It provides enough sharing on the host to scale, but it also provides a nice clean virtualization context that helps isolation between processes.

Most, if not all, cloud vendors run container services inside VMs for a given tenant, so the containers are isolated from other tenants through a VM context. But that adds a bit more overhead than is ideal; we would like to provide containers that run as natively, and with as little overhead, as possible. We are looking into providing a preview for developers and users to play with this, on Oracle Linux with UEKR5. We have a Kata container kernel built that boots in a fraction of a second, and we created a tiny package that executes a Docker instance on an Oracle Linux host. It's experimental; we are evaluating the advantages and disadvantages (how secure is the kernel memory sharing, how good is performance at scale, how transparent is it to run normal Docker images in these Kata containers, are they totally compatible, etc.).
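For illustration only – this is the generic upstream Kata wiring for Docker, not necessarily how the Oracle preview package will be configured, and the runtime name and binary path here are assumptions – a Docker engine can be pointed at a Kata runtime via /etc/docker/daemon.json:

```json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

After restarting the Docker daemon, something like # docker run --runtime=kata-runtime -it oraclelinux:7 would start the container inside its own minimal VM rather than a plain namespace.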

Lots of exciting technology work happening.

bbcp and rclone for Oracle Linux

Fri, 2018-07-13 10:20

Last week we packaged up a few more RPMs for Oracle Linux 7 that will help make life easier for Cloud users.

bbcp  in ol7_developer:

# yum install bbcp

bbcp is what I would call scp on steroids. If you want to copy files from a local node to a remote node (say, in Oracle Cloud), this is a great tool. It might require some tuning, but the idea is that it can open up parallel TCP streams. For large file transfers this should give you a bit of a performance boost. I would also recommend using UEK5 and enabling BBR as the congestion control algorithm (see an old blog entry). The combination of enabling BBR (which only has to be done on one of the 2 nodes, src or dest) and using bbcp to copy large files over parallel streams should give you the best throughput. Making this into an RPM for OL makes it easily available for everyone to use.
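A minimal sketch of a transfer (the host, file name, and stream count below are made up; bbcp's -s flag sets the number of parallel streams and -P prints progress at an interval):

```shell
# number of parallel TCP streams to open (tune for your link)
STREAMS=8

# copy a large file to a remote Oracle Cloud instance over $STREAMS streams,
# printing progress every 2 seconds (host and paths are hypothetical)
bbcp -P 2 -s "$STREAMS" bigfile.tar opc@203.0.113.10:/data/
```

Start with a modest stream count and increase it while watching throughput; past a point, more streams stop helping.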

rclone 1.42 in ol7_developer

# yum install rclone

rclone is a very cool command line tool to move files around from/to local storage and cloud object storage. This works very well with Oracle Cloud Infrastructure's Object Storage. Now that it's packaged as an RPM with OL you can just install it directly from the command line instead of having to go download a file from a website. rclone works like scp.

An example: # rclone copy localdir ocistorage:remotedir

In order to configure rclone for Oracle Cloud Infrastructure's Object Storage, you have to create an "Amazon S3 Compatible API Key". This generates a secret key that you use during rclone config, along with the access key (which in Object Storage looks like an OCID: ocid1.credential.oc1.<string>).

Configuration example:

# sudo yum install -y rclone

-> In the OCI console you go to Identity -> Users -> User Details -> Amazon S3 Compatible API Key and generate a new Secret Key.

-> copy the secret key because you need that to configure rclone, and you will also need the  Access Key (which is an OCID)

-> configure rclone on your OL7 client.

Example :

# rclone config

-> type n (new remote) and give it a name

name> ocistorage

Type of storage to configure.

-> type 3  (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio))

Choose your S3 provider.

-> type 8 (Any other S3 compatible provider)

-> Next, type 1 (Enter AWS credentials in the next step)

For access key provide the ocid

-> access_key_id> ocid1.credential.....

For the secret access key use your secret key that was just generated.

secret_access_key> tyjXhM7eUuB2v........

Region to connect to.

-> hit enter

For the endpoint (for example, Phoenix), enter an https URL

example :  https://orclwim.compat.objectstorage.us-phoenix-1.oraclecloud.com

my tenant name is orclwim  so replace it with your tenant name.

The endpoint URLs for the other regions follow the same pattern: https://<your tenant name>.compat.objectstorage.<region>.oraclecloud.com
For Location Constraint, hit enter.

For ACL, hit enter as well.

Type y to store the settings.

you should get something like

Current remotes:

Name                 Type
====                 ====
ocistorage           s3
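For reference, the answers above end up in rclone's config file; a minimal ~/.config/rclone/rclone.conf for this remote might look like the following (keys and tenant name are placeholders):

```ini
[ocistorage]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1.<string>
secret_access_key = <your generated secret key>
endpoint = https://<tenant>.compat.objectstorage.us-phoenix-1.oraclecloud.com
```

Editing this file directly is equivalent to re-running rclone config.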


That's it - we have some code changes pending that will add Oracle and these endpoints to rclone itself, but those are still being reviewed.


Oracle Linux 7 for ARM is now Generally Available

Sun, 2018-06-24 13:01

We released Oracle Linux 7 for ARM a few days ago - General Availability. We have been making previews available for a few months now, but the time has come to put support behind it and make clear to customers and partners that this is a real product, not just a preview.

A few specific things:

- This is a 64-bit-only version. We do not intend to support ILP32. Our focus is on making sure we can provide a high-quality server product to run serious applications now and in the future, and I think it's fair to say that ILP32 would just be more work with little added value toward that goal. So OL7 for ARM is a very clean 64-bit-only distribution.

- Oracle Linux 7 Update 5 is the base level of OL7 for ARM. We have done a lot of work to ensure that it's very close to x86(x64). Our ARM packages are built from the same source RPMs as the x86 version, which allows us to keep deviation between the 2 architectures as small as possible. We want it to be as seamless as possible to go from one architecture to the other. We will make the same errata available across the architectures and, where it makes sense, have the same repo names and structure.

- Our ARM port uses UEK5 only. The other distribution kernels are still a bit in flux on ARM because their x86 kernel is a bit older and ARM is still undergoing a decent amount of churn. For us, with the UEK model, it was a lot easier to align the 2 architectures and it worked out perfectly fine timing wise. UEK5 is 4.14.x mainline Linux based. So we have the same kernel, same source-base on x86 as well as arm. That means dtrace is there, ksplice support is there, etc...  Errata for one architecture, when relevant on the other will be released at same time. Again - streamline it as much as possible so that our customers and partners that have both x86 and arm architectures won't really notice any difference at all. 

Also, UEK5 on x86 is built with the default gcc version that comes with OL7 (gcc 4.8); on ARM, however, we decided to build with gcc 7.3. And UEK5 on ARM is built with a 64k page size.

- As with x86, Oracle Linux for ARM is freely downloadable. We have installable ISO images. Errata will also be freely available. It can be used in test, dev or production, we have no restrictions on that. If you want support, you get a support subscription, just like on x86, otherwise you can use it as much as you want. No auth keys, no private repos. Just simple public https://yum.oracle.com for errata. Of course the source code as well.

- Since a lot of enhancements have gone into the toolchain (compiler, glibc, ...), we decided to provide a gcc 7.3 environment with OL7/ARM. The Software Collections 3.0 repo on ARM contains the 'Oracle ARM toolset', which is basically gcc 7.3 and related items. The toolchain team is doing a lot of work on ARM optimizations (as is the kernel team, for that matter).

- Hardware partners: right now we have validated on, and work closely with, our partners Ampere Computing and Cavium. The majority of our testing and validation happens on these platforms and chips.

- ISVs: in order to build out a viable server/cloud platform for ARM, we (like everyone else) need our ISV partner ecosystem to follow us. This is one reason we decided to go GA: we want to show that we are serious about this platform, which helps partners move forward as well. Internally we have already worked with the MySQL team to provide MySQL 8.0 for ARM. We are also doing work on Java optimizations and looking at other products.

- Cloud-'native': Docker for Oracle Linux/ARM is there - we have Oracle Linux images on Docker Hub (in case you didn't know...). You will see k8s show up, etc.

- Basics/beginnings of EPEL. A lot of our users on x86 use a lot of EPEL packages. As many of you already know, we started rebuilding (not modifying) the EPEL packages so that they are (1) signed by us, (2) come from the same repo source as the base OL (easier to have a single download location) and (3) easy to make available, like all our RPMs, to Oracle Cloud users on the 'internal' cloud network. We are going to expand this to ARM as well, slowly growing the ARM/EPEL repo. This will take some time.

- We have a Raspberry Pi 3B and 3B+ image that is still pre-GA, with UEK5 and grub. Expect an update to the GA code-base in the near future. The RPi3 is more of a fun and easy way to play with OL7/ARM; we don't see it (sorry) as a production target.

Go download it, play with it, have fun...

and thanks to my team at Oracle for making this happen and also a shout out to our partners for their contributions (Ampere Computing folks! and Cavium folks!)





Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7

Thu, 2018-06-21 10:08

Yesterday we released the 5th version of our "UEK" package for Oracle Linux 7 (UEKR5). This kernel version is based on a 4.14.x mainline Linux kernel. One of the nice things is that 4.14 is an upstream Long Term Stable (LTS) kernel version, maintained by gregkh.

UEKR5 is a 64-bit only kernel. We released it on x86(-64) and ARM64 (aarch64) and it is supported starting with Oracle Linux 7.

Updating to UEK5 is easy - just add the UEKR5 yum repo and update. We have some release notes posted here and a more detailed blog here.
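A sketch of those update steps (the repo id below matches the one published on yum.oracle.com; yum-config-manager comes from the yum-utils package):

```shell
# UEKR5 repo id on Oracle Linux 7, as published on yum.oracle.com
REPO=ol7_UEKR5

# enable the repo and pull in the new kernel, then reboot into it
sudo yum install -y yum-utils
sudo yum-config-manager --enable "$REPO"
sudo yum update -y
```

After the update, reboot and confirm with uname -r that you are running a 4.14.x UEK kernel.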

A lot of new stuff went into UEKR5. We also put a few extra tools in the yum repo that let you make use of the newer features where tool updates are needed: xfsprogs, btrfs-progs, the ixpdimm libraries, pmemsdk, updated dtrace utils, updated bcache tools, updated iproute, etc.

For those who don't remember: we launched the first version of our kernel for Oracle Linux back in 2010, when we launched the 8-socket Exadata system, and we have been releasing a new Linux kernel for Oracle Linux on a regular basis ever since. Every Exadata system - in fact, every Oracle Engineered System that runs Linux - uses Oracle Linux with one of the versions of UEK inside. So for customers it's the most tested kernel out there: you can run the exact same OS software stack as we run on our biggest and fastest database servers, on-premises or in the cloud - in fact, the exact same OS software stack as we run inside Oracle Cloud in general. That's pretty unique compared to other vendors, where the underlying stack is a black box. Not here.

10/2010 - 2.6.32 [UEK]   OL5/OL6
03/2012 - 2.6.39 [UEKR2] OL5/OL6
10/2013 - 3.8    [UEKR3] OL6/OL7
01/2016 - 4.1    [UEKR4] OL6/OL7
06/2018 - 4.14   [UEKR5] OL7

The source code for UEKR5 (as has been the case since day 0) is fully available publicly: the entire git repo is there, with all the patches and all the changelog history - not just some tar file with patch files on top of tar files to obfuscate things for some reason. It's all just -right there-. In fact, we recently even moved our kernel git repo to github.

Have at it.



oci-utils 0.6 for Oracle Linux 7

Mon, 2018-06-04 20:20

I will write up some examples on this later but for now... here's the changelog:

The oci-utils package is used to manage block volumes and VNICs and is available for use with Oracle Linux 7 images in Oracle Cloud (excludes support for OCI-C). The latest release (oci-utils-0.6-34.el7) is available in the Oracle Linux 7 developer channel on YUM. The following changes/additions have been made in this release (0.6):

- Support added for API access through Instance Principals
- Support added for root using a designated user's OCI config files and keys
- oci_utils API automatically detects the authentication method to be used
- ocid can discover secondary IP addresses and CHAP user/password using OCI API calls, if the Python SDK is configured or if Instance Principals is used
- Network proxy support for making SDK calls
- Configuration files for ocid: /etc/oci-utils.d/*
- Support for configuring the various functions of ocid individually, including refresh frequency or turning them off completely
- ocid saves state and restores all volumes and VNIC configuration after reboot
- oci-network-config: new option: --detach-vnic
- oci-iscsi-config: new option: --destroy-volume
- oci-utils APIs are now thread safe
- NEW tool: oci-image-cleanup - a script that runs a set of cleanup steps to prepare the instance for a custom image
- The oci-kvm utility rejects attempts to create guests if the required virtualization support is not enabled in the image it is being executed on




Some tips for using Oracle Linux in Oracle Cloud

Mon, 2018-05-28 11:44

Creating an Oracle Linux instance in Oracle Cloud Infrastructure is easy. For the most part it is the same as creating your own image from the install media but we have done a few extra things that are very useful and you should know about :)

- with recent images, the yum repo file points to a local OCI mirror of yum.oracle.com (plus a few repos that are otherwise only available on linux.oracle.com for subscribers - but all OCI instances are technically subscribers; remember, Oracle Linux support is included with OCI instances at no additional cost, with no extra button to click or anything)

So downloading RPMs or using yum on an OCI instance is very, very fast and it does not incur any network traffic to the outside world.

- a number of repos are enabled by default: ol7_UEKR4, ol7_developer, ol7_developer_EPEL, ol7_ksplice, ol7_latest, ol7_optional_latest, ol7_addons and the software collections repo. This gives you direct access to a ton of Oracle Linux related packages out of the box. But consider looking at a number of other repos that we have not enabled by default. All you have to do is change enabled=0 to enabled=1 in /etc/yum.repos.d/public-yum-ol7.repo (example: ol7_preview). Alternatively, you can enable a repo from the yum command line: yum --enablerepo=ol7_preview <option>

The reason we don't enable these by default is that some of the packages in these channels are newer but, in some cases, pre-releases or developer versions, and we want to default to the "GA" versions - but you are more than welcome to add these other packages, of course. For instance, by default docker-engine gets you 17.06, but if you want 17.12, that's in the ol7_preview channel. So if you're looking for something new, don't forget to look there before manually downloading stuff from a random 3rd-party site. We might already have it available.

Other channels include nodejs8, gluster312, php72, MySQL8, developer_UEKR5, etc. Take a look at the repo file. You can always browse the repo content on https://yum.oracle.com, and if you want to see what's added on a regular basis, check out the yum.oracle.com what's new page. Anyway, having EPEL and Software Collections gives you quick access to a very wide range of packages. Again, no need to download a yum repo RPM or fetch packages with wget or whatnot. It's easy to create a development environment and a deployment environment.

- some tools are installed by default. For instance an OCI OL instance comes with oci-utils pre-installed. oci-utils contains a number of command lines tools that make it very easy to work with attached block volumes, handle instance metadata, find your public-ip easily, configure your secondary VNICs. I wrote a blog entry about this a few months ago.

- easy access to OCI toolkits:

Want to use Terraform? No problem, no need to download anything, just get it from our yum repo: # yum install terraform terraform-provider-oci. We are typically just a few days behind the tagged releases of both Terraform and the OCI provider.

Want to use the OCI SDK and OCI CLI? # yum install python-oci-cli python-oci-sdk - done. As with Terraform, these packages are updated at most a few days after the github projects get release tags. No need to mess with updates or adding dependency RPMs; we take care of it and update them for you.
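Once the Terraform packages are installed, a minimal provider block for the OCI Terraform provider might look like this sketch (every value below is a placeholder you fill in from your own tenancy):

```hcl
# all OCIDs, the fingerprint and the key path below are placeholders
provider "oci" {
  tenancy_ocid     = "ocid1.tenancy.oc1..<your-tenancy>"
  user_ocid        = "ocid1.user.oc1..<your-user>"
  fingerprint      = "12:34:56:..."
  private_key_path = "~/.oci/oci_api_key.pem"
  region           = "us-phoenix-1"
}
```

With that in place, terraform init pulls in the locally installed provider and terraform plan can talk to your tenancy.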

Using Oracle Ksplice for CVE-2018-8897 and CVE-2018-1087

Thu, 2018-05-10 17:15
Just the other day I was talking about using Ksplice again, and right after that these 2 new CVEs hit that are pretty significant. So, another quick # uptrack-upgrade and I don't have to worry about these CVEs any more. Sure beats all that rebooting of 'other' Linux OS servers.

[root@vm1-phx opc]# uname -a
Linux vm1-phx 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-uname -a
Linux vm1-phx 4.1.12-124.14.3.el7uek.x86_64 #2 SMP Mon Apr 30 18:03:45 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-upgrade
The following steps will be taken:
Install [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Install [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Go ahead [y/N]? y
Installing [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Installing [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.14.5.el7uek

Oracle Ksplice and Oracle Linux reminder

Tue, 2018-05-08 22:37

For those of you that keep up with my blog and twitter musings... you know how much I love Ksplice. This morning I was connecting to one of my cloud VMs and did an uptrack-upgrade as it had been a while and I hadn't turned on automatic ksplice updates on this node. I was pleasantly reminded of the awesomeness that is Ksplice. 

Here's the output: a kernel from 2-MAR-2018, no reboot, just a quick # uptrack-upgrade, and look at all the stuff I am now protected against. A few seconds, no impact on apps, done. Now, I know there are some other projects out there that talk about being able to patch something here or there, but nothing comes even close to this - not in terms of service, not in terms of patch complexity, not in terms of ease of use, etc.

Remember, everyone using Oracle Linux in Oracle Cloud has full use of ksplice included at no extra cost and no extra configuration, every Oracle Linux instance is configured out of the box to use this. 

No other cloud provider has this service for their OSes. No other OS vendor provides this as a service for their own product at this level of sophistication, and certainly not in any cloud environment. Best place to run Linux, best place to run Oracle Linux, all integrated and inclusive... in Oracle Cloud Infrastructure. Yes, this sounds like marketing, but the fact is: it works and it's there.

[root@vm1-phx opc]# uname -a
Linux vm1-phx 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-upgrade
The following steps will be taken:
Install [q0j0yb6c] KAISER/KPTI enablement for Ksplice.
Install [afoeymft] Improve the interface to freeze tasks.
Install [bohqh05m] CVE-2017-17052: Denial-of-service due to incorrect reference counting in fork.
Install [eo2kqthd] Weakness when checking the keys in the XTS crypto algorithm.
Install [nq1xhhj5] CVE-2018-7492: Denial-of-service when setting options for RDS over Infiniband socket.
Install [b1gg8wsq] CVE-2017-7518: Privilege escalation in KVM emulation subsystem.
Install [lzckru19] Information leak when setting crypto key using RNG algorithm.
Install [npbx6wcr] Deadlock while queuing messages before remote node is up using RDS protocol.
Install [4fmvm11y] NULL pointer dereference when using bind system call on RDS over Infiniband socket.
Install [3eilpxc9] CVE-2017-14051: Denial-of-service in qla2xxx sysfs handler.
Install [385b9ve0] Denial-of-service in SCSI Lower Level Drivers (LLD) infrastructure.
Install [aaaqchtz] Denial-of-service when creating session in QLogic HBA Driver.
Install [d0apeo6x] CVE-2017-16646: Denial-of-service when using DiBcom DiB0700 USB DVB devices.
Install [5vzbq8ct] CVE-2017-15537: Information disclosure in FPU restoration after signal.
Install [6qv3bfyi] Kernel panic in HyperV guest-to-host transport.
Install [35rms9ga] Memory leak when closing VMware VMXNET3 ethernet device.
Install [5gdk22so] Memory corruption in IP packet redirection.
Install [6m4jnrwq] NULL pointer dereference in Hyper-V transport driver on allocation failure.
Install [owihyva9] CVE-2018-1068: Privilege escalation in bridging interface.
Install [buc7tc4q] Data-loss when writing to XFS filesystem.
Install [kef372kx] Denial-of-service when following symlink in ext4 filesystem.
Install [hb1vibbw] Denial-of-service during NFS server migration.
Install [4cqic4y6] Denial-of-service during RDS socket operation.
Install [4av6l7rd] Denial-of-service when querying ethernet statistics.
Install [8irqvffd] Denial-of-service in Hyper-V utilities driver.
Install [5ey3jcat] Denial-of-service in Broadcom NetXtreme-C/E network adapter.
Install [npapntll] Denial-of-service when configuring SR-IOV virtual function.
Install [s9mkcqwb] NULL pointer dereference during hardware reconfiguration in Cisco VIC Ethernet NIC driver.
Install [470l2f6x] Kernel panic during asynchronous event registration in LSI Logic MegaRAID SAS driver.
Install [cb7q8ihy] Kernel crash during PCI hotplug of Emulex LightPulse FibreChannel driver.
Install [tztxs6wf] Kernel crash during Emulex LightPulse FibreChannel I/O.
Install [o7drldhw] NULL pointer dereference during Emulex LightPulse FibreChannel removal.
Install [t8a1epky] Hard lockup in Emulex LightPulse FibreChannel driver.
Install [8du7f5q4] Deadlock during abort command in QLogic QLA2XXX driver.
Install [rghn5nkz] Kernel crash when creating RDS-over-IPv6 sockets.
Install [taix4vnz] CVE-2017-12146: Privilege escalation using a sysfs entry from platform driver.
Install [60u6sewd] CVE-2017-17558: Buffer overrun in USB core via integer overflow.
Install [2a1t0wfk] CVE-2017-16643: Out-of-bounds access in GTCO CalComp/InterWrite USB tablet HID parsing.
Install [tcxwzxmf] CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Install [3qhfzsex] CVE-2018-1000199: Denial-of-service in hardware breakpoints.
Go ahead [y/N]? y
Installing [q0j0yb6c] KAISER/KPTI enablement for Ksplice.
Installing [afoeymft] Improve the interface to freeze tasks.
Installing [bohqh05m] CVE-2017-17052: Denial-of-service due to incorrect reference counting in fork.
Installing [eo2kqthd] Weakness when checking the keys in the XTS crypto algorithm.
Installing [nq1xhhj5] CVE-2018-7492: Denial-of-service when setting options for RDS over Infiniband socket.
Installing [b1gg8wsq] CVE-2017-7518: Privilege escalation in KVM emulation subsystem.
Installing [lzckru19] Information leak when setting crypto key using RNG algorithm.
Installing [npbx6wcr] Deadlock while queuing messages before remote node is up using RDS protocol.
Installing [4fmvm11y] NULL pointer dereference when using bind system call on RDS over Infiniband socket.
Installing [3eilpxc9] CVE-2017-14051: Denial-of-service in qla2xxx sysfs handler.
Installing [385b9ve0] Denial-of-service in SCSI Lower Level Drivers (LLD) infrastructure.
Installing [aaaqchtz] Denial-of-service when creating session in QLogic HBA Driver.
Installing [d0apeo6x] CVE-2017-16646: Denial-of-service when using DiBcom DiB0700 USB DVB devices.
Installing [5vzbq8ct] CVE-2017-15537: Information disclosure in FPU restoration after signal.
Installing [6qv3bfyi] Kernel panic in HyperV guest-to-host transport.
Installing [35rms9ga] Memory leak when closing VMware VMXNET3 ethernet device.
Installing [5gdk22so] Memory corruption in IP packet redirection.
Installing [6m4jnrwq] NULL pointer dereference in Hyper-V transport driver on allocation failure.
Installing [owihyva9] CVE-2018-1068: Privilege escalation in bridging interface.
Installing [buc7tc4q] Data-loss when writing to XFS filesystem.
Installing [kef372kx] Denial-of-service when following symlink in ext4 filesystem.
Installing [hb1vibbw] Denial-of-service during NFS server migration.
Installing [4cqic4y6] Denial-of-service during RDS socket operation.
Installing [4av6l7rd] Denial-of-service when querying ethernet statistics.
Installing [8irqvffd] Denial-of-service in Hyper-V utilities driver.
Installing [5ey3jcat] Denial-of-service in Broadcom NetXtreme-C/E network adapter.
Installing [npapntll] Denial-of-service when configuring SR-IOV virtual function.
Installing [s9mkcqwb] NULL pointer dereference during hardware reconfiguration in Cisco VIC Ethernet NIC driver.
Installing [470l2f6x] Kernel panic during asynchronous event registration in LSI Logic MegaRAID SAS driver.
Installing [cb7q8ihy] Kernel crash during PCI hotplug of Emulex LightPulse FibreChannel driver.
Installing [tztxs6wf] Kernel crash during Emulex LightPulse FibreChannel I/O.
Installing [o7drldhw] NULL pointer dereference during Emulex LightPulse FibreChannel removal.
Installing [t8a1epky] Hard lockup in Emulex LightPulse FibreChannel driver.
Installing [8du7f5q4] Deadlock during abort command in QLogic QLA2XXX driver.
Installing [rghn5nkz] Kernel crash when creating RDS-over-IPv6 sockets.
Installing [taix4vnz] CVE-2017-12146: Privilege escalation using a sysfs entry from platform driver.
Installing [60u6sewd] CVE-2017-17558: Buffer overrun in USB core via integer overflow.
Installing [2a1t0wfk] CVE-2017-16643: Out-of-bounds access in GTCO CalComp/InterWrite USB tablet HID parsing.
Installing [tcxwzxmf] CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Installing [3qhfzsex] CVE-2018-1000199: Denial-of-service in hardware breakpoints.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.14.3.el7uek

Congestion Control algorithms in UEK5 preview - try out BBR

Sun, 2018-04-08 18:47

One of the new features in UEK5 is a new TCP congestion control management algorithm called BBR (bottleneck bandwidth and round-trip propagation time). You can find very good papers here and here.

Linux supports a large variety of congestion control algorithms: bic, cubic, westwood, hybla, vegas, h-tcp, veno, etc.

Wikipedia has some good information on them : https://en.wikipedia.org/wiki/TCP_congestion_control

Here is a good overview of the important ones, including BBR : https://blog.apnic.net/2017/05/09/bbr-new-kid-tcp-block/

The default algorithm used for quite some time now is cubic (and this remains the default in UEK5), but we now also include support for BBR. BBR was added in mainline Linux 4.9; UEK5 picked it up because we based the UEK5 tree on mainline 4.14. Remember, we have our kernels on github for easy access and reading. We don't do tar files: you get the whole thing with changelog - standard upstream kernel git with backports, fixes, etc.

We have seen very promising performance improvements using BBR when downloading or uploading large files over the WAN. So for cloud computing usage, moving data from on-premises to cloud or the other way around, this might (in some situations) provide a bit of a performance boost. I've measured 10% in some tests; your mileage may vary. It certainly should help when you have packet loss.

One advantage is that you don't need both source and target systems to run this kernel. So to test out BBR you can run OL7 on either side, install UEK5 on it (see here) and just enable it on that system. Try ssh, netperf, or a wget of a large(ish) file.

All you have to do is:

- use an Oracle Linux 7 install on one of the 2 servers.

- install the UEK5 preview kernel and boot into that one

- use sysctl (as root) to modify the settings / enable BBR. You can do this online. No reboot required.

You should also set the queue discipline to fq instead of pfifo_fast (the default).

# sysctl -w net.ipv4.tcp_congestion_control=bbr
# sysctl -w net.core.default_qdisc=fq

if you want to go back to the defaults:

# sysctl -w net.ipv4.tcp_congestion_control=cubic
# sysctl -w net.core.default_qdisc=pfifo_fast

(feel free to experiment with switching pfifo_fast vs fq as well).
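Note that sysctl -w changes do not survive a reboot. To make the settings persistent, you could drop them into a sysctl.d file (the filename here is my own choice; any name ending in .conf works):

```
# /etc/sysctl.d/90-bbr.conf (hypothetical filename)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Apply it immediately with # sysctl -p /etc/sysctl.d/90-bbr.conf, or it will be picked up at the next boot.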

If need be, this can be set on an individual socket level in Linux. If you have a specific application (like a webserver or a data transfer program), use setsockopt(). Something like:

sock = socket(AF_INET, SOCK_STREAM, 0);
sockfd = accept(sock, ...);
strcpy(optval, "bbr");
optlen = strlen(optval);
if (setsockopt(sockfd, IPPROTO_TCP, TCP_CONGESTION, optval, optlen) < 0)
    error("setsockopt(TCP_CONGESTION) failed");

or you can do the same in Python, which exposes TCP_CONGESTION starting with Python 3.6:

sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b'bbr')

Have fun playing with it. Let me know if/when you see advantages as well.

Running VirtualBox inside a VM instance in Oracle Cloud Infrastructure

Tue, 2018-04-03 16:15

OK - So don't ask "Why?"... Because... I can! :) would be the answer for the most part.

Oracle Cloud Infrastructure supports nested virtualization. When you create a VM instance in OCI and you run Oracle Linux 7 with our kernel, you can create KVM or (soon you'll see how...) VirtualBox VMs inside. If you create a BM instance, you can install VirtualBox or use KVM as you normally would on a local server - since, well, it's a bare metal server: full access to the hardware and its features.

VirtualBox has some very interesting built-in features which make it useful to run remotely (even virtualized). One example is the embedded vRDP server: it does great remote audio and video (enable/tune videochannel), it makes it easy to take your local VirtualBox images and run them unmodified remotely, and it lets you create smaller VMs that you constantly start/stop. You can use Vagrant boxes, which opens up the whole Vagrant VirtualBox environment to a remote cloud. So aside from "Because I can"... there are actually good use cases for this!

How do you go about doing this? For the most part it's pretty trivial: installing VirtualBox in a VM in OCI is no different from installing it on your local desktop or server. Configuring a guest VM in VirtualBox is best done using the command line (vboxmanage) instead of installing a full remote desktop and running VNC and such - it's a lot faster from the command line. And if you want to run VirtualBox in bridged mode, so that you have full access to the OCI native cloud network facilities (VCN/subnet/IP addresses, even public IPs - without NAT), there are a few minor things you need to do.

Here are some of the steps to get going: I'm not a big screenshot guy so bear with me in text for the most part.

Step 1: Create an OCI VM and create/assign an extra VNIC to pass through to your VirtualBox VM.

If you don't already have an OCI account, you can go sign up and get a $300 credit trial account here. That should give you enough to get started.

Set up your account, create a Virtual Cloud Network (VCN) with its subnets and create a VM instance in one of the availability domains/regions. To test this out I created a VM.Standard2.2 shape instance with Oracle Linux 7. Once this instance is created, you can log in with user opc and get going.

When you log into your VM instance you will see, from the OCI web console, that you have a primary VNIC attached. This might show up as ens3 or so inside your VM. In the OCI web console the VNIC has a name (typically the primary VNIC's name is the same as your instance name), a private IP and, if you decided to put it on a public network, a public IP address as well. All of this is configured out of the box as part of your instance creation.

Since I want to show how to use a bridged network in VirtualBox, you will need a second VNIC. You can create that at this point, or you can come back later and do it once you are ready to start your VirtualBox VM. Just go to Attached VNICs in the web console (or use the OCI CLI) and create a VNIC on a given VCN/subnet.

[Screenshot: creating a VNIC in the OCI web console]
The important information to jot down is the MAC address and the private IP address of this newly created VNIC (00:00:17:02:EB:EA in the example); this info is needed later.

Step 2: Install and configure VirtualBox

With Oracle Linux 7 this is a very easy process: use yum to install VirtualBox and the dependencies for building the VirtualBox kernel modules, then download and install the Extension Pack, and you're done:

# yum install -y kernel-uek-devel-`uname -r` gcc
# yum install -y VirtualBox-5.2
# wget https://download.virtualbox.org/virtualbox/5.2.8/Oracle_VM_VirtualBox_Extension_Pack-5.2.8.vbox-extpack
# vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.8.vbox-extpack

That's it - you now have a fully functioning VirtualBox hypervisor installed on top of Oracle Linux 7 in an OCI VM instance.

Step 3: Create your first VirtualBox guest VM

The following instructions show you how to create a VM from the command line. The nice thing with using the command line is that you can clearly see what it takes for a VM to be configured and you can easily tweak the values (memory, disk,...).

First, you likely want to create a new VM from an install ISO. So upload your installation media to your OCI VM. I uploaded my Oracle Linux 7.5 preview image which you can get here.

Create your VirtualBox VM

# vboxmanage createvm --name oci-test --ostype oracle_64 --register
# vboxmanage modifyvm oci-test --memory 4096 --vram 128 --ioapic on
# vboxmanage modifyvm oci-test --boot1 dvd --boot2 disk --boot3 none --boot4 none
# vboxmanage modifyvm oci-test --vrde on

Configure the Virtual Disk and Storage controllers (Feel free to attach an OCI Block Volume to your VM and put the VirtualBox virtual disks on that volume, of course). The example below creates a 40G virtual disk image and attaches the OL7.5 ISO as a DVD image.

# vboxmanage createhd --filename oci-test.vdi --size 40960
# vboxmanage storagectl oci-test --name "SATA Controller" --add sata --controller IntelAHCI
# vboxmanage storageattach oci-test --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium oci-test.vdi
# vboxmanage storagectl oci-test --name "IDE Controller" --add ide
# vboxmanage storageattach oci-test --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium /home/opc/OracleLinux-R7-U5-BETA-Server-x86_64-dvd.iso

Configure the Bridged Network Adapter to directly connect to the OCI VNIC

This is a little more involved. You have to find out which network device was created on the VM host for this secondary VNIC.

# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
2: ens3: mtu 9000 qdisc mq state UP qlen 1000
    link/ether 00:00:17:02:3a:29 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic ens3
       valid_lft 73962sec preferred_lft 73962sec
3: ens4: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:00:17:02:eb:ea brd ff:ff:ff:ff:ff:ff
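With several VNICs attached, it can be handy to match the host interface to the MAC you jotted down from the console. A small sketch (find_iface_by_mac is a hypothetical helper, shown here against sample output; in practice pipe the real `ip -o link` output into it):

```shell
# Print the interface whose "ip -o link" line contains the given MAC
# (matched case-insensitively by lowercasing the MAC first).
find_iface_by_mac() {
  awk -v mac="$(echo "$1" | tr 'A-F' 'a-f')" \
      '$0 ~ mac { sub(":$", "", $2); print $2 }'
}

# Example with sample output; normally: ip -o link | find_iface_by_mac <mac>
printf '2: ens3: ... link/ether 00:00:17:02:3a:29 brd ff:ff:ff:ff:ff:ff\n3: ens4: ... link/ether 00:00:17:02:eb:ea brd ff:ff:ff:ff:ff:ff\n' \
  | find_iface_by_mac "00:00:17:02:EB:EA"   # -> ens4
```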

Bring up this network adapter without an IP address and set the MTU to 9000 (the default MTU for VNICs in OCI):

# ip link set dev ens4 up
# ip link set ens4 mtu 9000

Almost there... Now just create the NIC in VirtualBox and assign the MAC address you recorded earlier to it. It is very important to use exactly that MAC address; otherwise the OCI network will not allow traffic from the VM. Note: don't use colons (:) in the MAC address on the command line.

# vboxmanage modifyvm oci-test --nic1 bridged --bridgeadapter1 ens4 --macaddress1 00001702ebea
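If you'd rather not strip the colons by hand, a quick sketch of deriving the vboxmanage-friendly form from the MAC shown in the OCI console:

```shell
# Convert the OCI console's MAC notation (colon-separated, uppercase) into
# the colon-free lowercase form used on the vboxmanage command line.
oci_mac="00:00:17:02:EB:EA"
vbox_mac=$(echo "$oci_mac" | tr -d ':' | tr 'A-F' 'a-f')
echo "$vbox_mac"   # 00001702ebea
```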

That's it. You now have a VirtualBox VM that can be started, will boot from the install media, and is directly connected to the host's network in OCI. There is no DHCP running on this network, so when you install the OS in your VirtualBox VM you have to assign a static IP; use the one that was assigned as the VNIC's private IP address (10.0.0.2 in the example above).
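Inside the guest, that static configuration can live in a standard ifcfg file. A minimal sketch; the IP, netmask and gateway below are assumed example values (10.0.0.2 being the example VNIC's private IP, the gateway a guess at the subnet's first address), so substitute your own VCN subnet's values:

```shell
# Write a minimal static-IP config for the guest's NIC; the address values
# here are assumptions for illustration, not taken from a real subnet.
cat > ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
MTU=9000
EOF

# Show the address line we just configured.
grep '^IPADDR=' ifcfg-eth0
```

Copy the file to /etc/sysconfig/network-scripts/ inside the guest and restart networking for it to take effect.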

Before you start your VM, open up the firewall on the host for remote RDP connections, and do the same in the OCI console: modify the security list for your host's primary VNIC to allow ingress traffic on port 3389 (RDP).

# firewall-cmd --permanent --add-port=3389/tcp
# firewall-cmd --reload

Start your VM in headless mode and use your favorite RDP client on your desktop or laptop to connect to the remote VirtualBox console.

# vboxmanage startvm oci-test --type headless

If you want to experiment with remote video/audio (for instance, play a YouTube video or a movie file inside your VM), enable the vrde video channel. Use the quality parameter to adjust the compression/lossiness (which improves performance) of the MJPEG stream.

# vboxmanage modifyvm oci-test --vrdevideochannel on
# vboxmanage modifyvm oci-test --vrdevideochannelquality 70

Raspberry Pi 3 B Oracle Linux 7.4 ARM64 with UEK5 preview image available for download

Tue, 2018-04-03 10:07

A few weeks ago we released an Oracle Linux 7 Update 4 for ARM64 preview update on OTN. This updated ISO installs on Ampere X-Gene 3 (emag) and Cavium ThunderX/ThunderX2-based systems (and it's also known to work on Qualcomm Centriq 2400-based servers).

Today we added the RPI3 (Raspberry Pi 3 Model B) disk image as well. The previous RPI3 image was still using Oracle Linux 7.3 as a base along with a 4.9 Linux kernel. The newly released image makes it current. It is the same Oracle Linux 7.4 package set as we released on the ISO and it uses the same UEK5 preview kernel (based on 4.14.30 right now).

The current image uses uboot and boots the kernel directly. We will do another update in the near future where we switch to uboot+efi and grub2, so that updating kernels works the same way as on the regular ARM server installs (where we boot with EFI -> grub2).

A few things to point out:

- OL7/ARM64 is a 64-bit only build. That makes binaries pretty large and the RPI3 only has 1GB of RAM so it's a bit of a stretch.

- X/gnome-shell doesn't work in this release. This is a known issue that will be resolved when we move to 7.5, but our focus is mostly server and, per the above, running a heavy GUI stack is hard on a 1GB system.

- We do not yet support the latest RPI3 Model B+, only the RPI3 Model B; we don't have a device tree/dtb file yet for the Model B+.

Since it has all the same packages as the server one, you can run docker on the RPI3:

# cat /etc/oracle-release
Oracle Linux Server release 7.4
# uname -a
Linux rpi3 4.14.30-1.el7uek.aarch64 #1 SMP Mon Mar 26 23:11:30 PDT 2018 aarch64 aarch64 aarch64 GNU/Linux
# yum install docker-engine
# systemctl enable docker
# systemctl start docker
# docker pull oraclelinux:7-slim

And there you go: a small Oracle Linux 7 for ARM image right on your RPI3, directly from Docker Hub.

# docker pull oraclelinux:7-slim
7-slim: Pulling from library/oraclelinux
eefac02db809: Pull complete
Digest: sha256:fc684f5bbd1e46cfa28f56a0340026bca640d6188ee79ef36ab2d58d41636131
Status: Downloaded newer image for oraclelinux:7-slim

Oracle Linux 7 for ARM64 preview images on Docker Hub

Wed, 2018-03-21 14:08

A few days ago, we released the docker packages for OL7/ARM64. If you have an ARM64 server running OL7, you can just install docker as you would normally do on x64.

# yum install docker

Of course, in order to use this you need some images on Docker Hub to get started with. While there are some Linux builds on Docker Hub already, we wanted to make sure you could get OL just like you can for x64. Both architectures will be built at the same time going forward.

So you can do:

# docker pull oraclelinux
# docker pull oraclelinux:7
# docker pull oraclelinux:latest

Or, if you want the smaller version:

# docker pull oraclelinux:7-slim
# docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
oraclelinux   7        b5e0e6470f16   2 hours ago   279MB
oraclelinux   latest   b5e0e6470f16   2 hours ago   279MB
oraclelinux   7-slim   fdaeac435bbd   2 hours ago   146MB

yum-builddep and rpmbuild

Sun, 2018-03-18 13:10

I sometimes try to build an RPM from source (to patch something or try a patch). Since I do these things every now and then, I tend to forget stuff easily and it takes me a while to get back into it.

Anyway - I was trying to build lxc (as an example) earlier today and I wanted to patch the lxc-oracle template. So I logged into my OL7 box and used yumdownloader to download the lxc source:

# yumdownloader --source lxc

Install the source RPM:

# rpm -ivh lxc-1.1.5-2.0.9.el7.src.rpm

So I now have ~/rpmbuild/SPECS/lxc.spec and ~/rpmbuild/SOURCES/ (a bunch of patch files and the lxc-1.1.5.tar.gz).

Install rpmbuild (it wasn't installed yet):

# yum install rpm-build

(I know - the rpm is called rpm-build but the binary is rpmbuild... odd. never figured out why in the world it couldn't just be the same - anyway)

OK. So... my usual step is:

# rpmbuild -bp SPECS/lxc.spec

I don't want to build binaries, just create the whole BUILD/ tree with the patches applied.

Here is where I always waste time. There are a bunch of build dependencies that are not yet installed, and in the past I would (pretty stupid of me, thinking back) just go down the list one by one, doing yum install <rpm needed> until rpmbuild stopped complaining.

Turns out that yum-utils includes a tool called yum-builddep! Aha.

# yum-builddep SPECS/lxc.spec

Look at that! It goes and pulls in all the build dependency packages for you.
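Under the hood it reads the BuildRequires: tags out of the spec file and resolves them through yum. A rough illustration of what it picks up, using a hypothetical little spec fragment (yum-builddep itself does the real dependency resolution):

```shell
# A tiny, made-up spec fragment just to show the tags yum-builddep consumes.
cat > demo.spec <<'EOF'
Name: demo
BuildRequires: gcc
BuildRequires: libcap-devel docbook-utils
EOF

# List every build dependency named in the spec, one per line.
sed -n 's/^BuildRequires:[[:space:]]*//p' demo.spec | tr ' ' '\n'
```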

OK, back to # rpmbuild -bp SPECS/lxc.spec

and all is happy!  This is one I won't forget.

Updated Oracle Linux 7 update 4 ARM64/aarch64 with uek5 4.14.26-2

Sat, 2018-03-17 10:48

We refreshed the installation media for OL7/ARM64 with the latest UEK5 preview build, based on upstream stable 4.14.26, and added perf and tuned.

You can download it from the OTN OL ARM webpage. Ignore the 4.14-14 in the text there; that will get updated. We're also working on updating the Raspberry Pi 3 image to match the same version, hopefully using grub2 there as well, to make it easier to have a single image repo.

The arm64 yum repo on http://yum.oracle.com has also been updated.

A few things to point out :

Oracle Linux 7 for ARM64 is going to be a 64-bit only distribution (aarch64). All binaries are built 64-bit and we have no support in user space libraries nor in the kernel for 32-bit.

Our ARM port shares the same source code base as x64. There are minor architecture changes where required to build, but we have a single source code repository from which we build both architectures. This is important because it keeps things clean and allows us to keep the two architectures in sync without problems.

Our kernel on ARM64 is built using GCC 7.3: Linux version 4.14.26-2.el7uek.aarch64, gcc version 7.3.0 20180125.

We currently test on Ampere Computing and Cavium ThunderX® systems. We plan to add more processor types over time.

Oracle Linux UEK4 (4.1.12-112.16.4) errata kernel update compiled with retpoline support

Thu, 2018-03-15 10:57

Yesterday afternoon, we released a UEK4 update for both Oracle Linux 6 and Oracle Linux 7.

You can find the announcement mail here.

This update includes a number of generic fixes but, most importantly, it adds support for retpoline. In order to build this kernel we also had to release updated versions of gcc, which we did a little while ago. You can find general information about retpoline on various sites; here's an article about a discussion on the kernel mailing list.

Note, our UEK5 preview kernels (based on 4.14 stable) are also built with retpoline support.

You can find more information about our errata publicly here.

As always, keep checking the what's new page for new RPMs released on http://yum.oracle.com.


Oracle Linux 7 UEK5 preview 4.14.26

Wed, 2018-03-14 10:13

We just updated the UEK5 kernel preview to 4.14.26-1. The latest version is based on upstream stable 4.14.26 and can be found in our UEK5 preview channel.

The preview channel also has a number of other packages in it: an updated dtrace, and updated daxctl and ndctl tools for persistent memory.

Another thing I wanted to point out: we have had the source tree for UEK in a git repo on oss.oracle.com for a long time. We've always made sure the changes are public, with full git history, both upstream and our own patches/bugfixes on top, so it's very easy for anyone to see exactly what the source is. Not a tarball with just the end-result source code, not a web-only view that's tedious to dig through, but standard git with all source and all commits. To make that a bit easier, we have moved this to GitHub. Nothing is different on the code side, but this gives a nicer, consolidated, cleaner view.


We use the exact same git repo/tree for Oracle Linux for x64 and for ARM64. This source tree also includes dtrace, etc...

Oracle Linux in Oracle Cloud Infrastructure and on-premises.

Sun, 2018-03-11 12:59

Oracle Cloud Infrastructure is a really great platform to run many types of operating systems, with many compute instance shapes available: larger amounts of NVMe storage, lots of threads or cores, and super fast networking. OCI lets you run pretty much any operating system (Windows, Ubuntu, CentOS, pretty much any Linux... and of course Oracle Linux). With the Emulation Mode VMs you can go way back with old versions; someone even showed OS/2 running!

One really nice thing about OCI is the fact that Oracle Linux support is included at no additional cost. I wrote about this before. You can file SRs, you get OL5 extended support, you can use Oracle Enterprise Manager Cloud Control instances to manage the OS, you can use Spacewalk, Kubernetes, Docker; it's all included. We have local yum repository mirrors inside OCI regions for fast package downloads that don't incur external network traffic. And of course we update the Oracle Linux images very frequently, so that new instances always start with the latest and greatest updates. We have scripts to make life easier (such as oci-utils), and we create RPMs for the OCI CLI, the Python SDK, the Terraform provider, etc., so you don't have to manually download tools and compile or install them; it's all there.

Another reason is that we all work very closely together to support you. The Oracle Cloud Infrastructure development team and the Oracle Linux development team work hand in hand to figure out what went wrong in the rare case that something happens. We're one team towards our customers and partners.

Another nice thing with Oracle Linux in OCI is the on-premises angle. When you run Oracle Linux on your servers on-prem, you have access to the exact same code and packages; with a support subscription you get full Oracle support, and even without one you have access to the errata updates and all the packages I mentioned here, with no need for authorization keys or access codes. It's all right there. If you are an ISV that wants to package an application and embed an OS, OL is perfect: you can distribute it for free, and you can get support subscriptions when you need them without being forced to change the OS underneath. You can then take that exact same code and run it in a cloud environment, and in OCI in particular at no additional cost, including full support. Create a VM image and distribute the entire image; no contract needed. You can provide that VM image on-premises or in the cloud, and you can install it on bare-metal servers; it's not limited to VMs. And of course customers have the flexibility of moving between on-premises and Oracle Cloud without having to worry. Same code, predictable cost. Full support in both places.

Whether you are a developer, a customer with test and development systems or production systems, or an ISV that creates solution bundles with an embedded OS: no difference. You don't have to worry about taking an RPM from your developer platform and installing it on your production system.

Want to play with docker images? They're on Docker Hub and on the Oracle Container Registry, free to use by anyone and everyone, both in our cloud (and any cloud) and on-premises, with regularly updated images. For the exact same OS you can run in production, in test/dev, for developers, ISVs, anywhere. No distinction. And we have an OCI mirror of our Container Registry, again, for fast access and to ensure you don't create external network usage.

Sure, there are other Linux distributions out there. Free ones: great, but if you need help, support, and service levels for production, that's not offered. Commercial ones: well, no such flexibility, not even close. And if something goes wrong, you deal with at least two companies to figure out what happened. With us it's one call, one SR, on-prem or in cloud. Same code everywhere.

Public Oracle Linux yum server

Source code https://oss.oracle.com/sources/

Vagrant boxes

docker hub

ISO images

full public git repo with mainline and our commits, fully transparent (not tarballs that obscure the history)

public service patch breakout for those that don't want to go through patch files for that other kernel 


Oracle Container Services for use with Kubernetes(1.9.1) 1.1.9

Tue, 2018-03-06 11:23

We just released Oracle Container Services for use with Kubernetes 1.1.9. This is based on Kubernetes 1.9.1.

There are also docker images to get going easily. You can download them from the Oracle Container Registry using standard docker commands. Please remember that we have OCR mirrors that provide faster performance (ocr-phx.oracle.com, ocr-ash.oracle.com, ocr-fra.oracle.com); I suggest using one of those mirrors. At some point we will do traffic routing, but right now picking a mirror is still manual. For users trying out our OCSK8s (let me shorten it to that) in Oracle Cloud Infrastructure, do use the mirrors, as they are hosted inside the OCI datacenters.

The individual packages are released in the Oracle Linux 7 add_ons channel on yum.oracle.com.

Documentation can be found here. This release is also formally supported as part of Oracle Linux support.

Also of note, we are a certified platform/distribution in the Kubernetes Conformance program. See here.