CentOS Linux 8 Rebased on RHEL 8.2 Source Code

We are pleased to announce the general availability of CentOS Linux 8. Effective immediately, this is the current release for CentOS Linux 8 and is tagged as 2004, derived from Red Hat Enterprise Linux 8.2 source code. As always, read through the Release Notes at http://wiki.centos.org/Manuals/ReleaseNotes/CentOS8.2004 -- these notes contain important information about the release and details from the CentOS QA team about some of the content inside it. The notes are updated continually to cover known issues and incorporate feedback from users.

CentOS 8.2 arrived as a rebuild of RHEL 8.2, but within six months Red Hat would announce CentOS Stream as the future of CentOS, effectively ending the traditional CentOS model of trailing RHEL releases. This makes the 8.2 release one of the last "classic" CentOS versions. For teams evaluating server distributions in this period, Fedora offers a compelling middle ground: it provides a more current package set than CentOS (kernel, systemd, glibc) while serving as the upstream proving ground for RHEL features. The tradeoff is a shorter support cycle (approximately 13 months per release) versus CentOS's multi-year stability window.

Ubuntu, Debian, Fedora, OpenStack, RedHat

OpenJDK Migrates from Mercurial to Git and GitHub (JEP 369)

An external source-code hosting provider is a source-code repository service that is not implemented and managed by contributors in the OpenJDK Community. Examples of external providers are Bitbucket, GitLab, Phacility, SourceForge, and GitHub. There are three primary reasons for the OpenJDK Community to use an external source-code hosting provider:

Performance. Many, if not all, providers have excellent performance, not only with regard to network performance but also when it comes to availability, i.e., uptime. For the OpenJDK Community this would result in significantly faster clone and pull times, and more highly-available source-code repositories.

API. A technical reason to host OpenJDK repositories on a source-code hosting platform is to gain access to web APIs that enable programs to interact with developers on the platform. Although not impossible to achieve today by interacting with developers over email, it is considerably harder to implement programs that interpret free-form text in emails compared to using a structured API. Allowing programs to participate in the review process enables powerful automation; see the Description section for several examples.

Expanded community. The largest providers would also offer the OpenJDK Community the opportunity to tap into large existing communities of developers and potential contributors. If a developer already has an account on a provider then few additional steps are required in order to contribute to OpenJDK. Almost all open-source projects in the larger Java community are hosted on external providers, including several OpenJDK distributions. This can foster even closer collaboration if OpenJDK repositories are also hosted on the same provider.
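The API argument is easy to illustrate. A minimal sketch of review-bot logic, assuming a simplified, hypothetical event payload (not GitHub's actual webhook schema): with a structured API the bot reads review state as data, instead of parsing free-form email text.

```python
import json

# Hypothetical review-bot check. The payload shape below is an
# illustrative sketch, NOT the real GitHub webhook schema.
def ready_to_integrate(event_json: str) -> bool:
    event = json.loads(event_json)
    reviews = event["pull_request"]["reviews"]
    approvals = [r for r in reviews if r["state"] == "APPROVED"]
    blocking = [r for r in reviews if r["state"] == "CHANGES_REQUESTED"]
    # Integrate only with at least one approval and no blocking review.
    return len(approvals) >= 1 and not blocking

sample = json.dumps({
    "pull_request": {
        "reviews": [
            {"reviewer": "alice", "state": "APPROVED"},
            {"reviewer": "bob", "state": "COMMENTED"},
        ]
    }
})
print(ready_to_integrate(sample))  # True: one approval, no blocking review
```

The equivalent logic over a mailing-list thread would require heuristics to recognize phrases like "looks good to me" in arbitrary prose, which is exactly the fragility the JEP is arguing against.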

The move to GitHub brings OpenJDK into the same pull-request and CI workflow ecosystem that virtually every other major open-source project already uses, significantly lowering the contribution barrier. The concern about Microsoft's ownership of GitHub (acquired for $7.5B in 2018) is valid from a platform-dependency perspective: history shows that corporate acquisitions of developer platforms often lead to monetization pressure. CodePlex was shut down, npm drew controversy after GitHub acquired it in 2020, and Atom was eventually discontinued. However, Git itself is decentralized by design, and OpenJDK's codebase can be mirrored to any provider. The real risk is not code lock-in but workflow lock-in: GitHub Actions, GitHub Issues, and the PR review process create switching costs that are harder to migrate than the repository itself.

Google, Microsoft, OpenJDK, Hg

Ubuntu 20.04 vs Windows 10 on Core i9-10900K: Linux Wins by 2%

Taking the geometric mean of those 101 tests, Ubuntu 20.04 LTS on the Core i9 10900K was faster than Windows 10 May 2020 by just about 2%. That is a much closer result than we have seen in past Intel comparisons running largely similar tests, or in the recent Threadripper tests with largely overlapping workloads, where the difference was 20%. Why the Core i9 10900K runs so competitively between Windows 10 May 2020 and Ubuntu 20.04 LTS is a good question. Whether it is due to recent Intel optimizations for Windows 10 and/or Linux performing suboptimally on the latest-generation Comet Lake processors remains to be determined. Given the aggressive Turbo Boost handling of the Core i9 10900K, which can hit up to 5.3GHz, it would not be surprising if Windows at the moment offers better handling of Turbo Boost Max / Thermal Velocity Boost than Linux. There has also been a lot of recent Intel power-management work on Linux in flux, around migrating P-State to the Schedutil frequency governor and other changes. As time allows, and depending upon interest from premium readers, I may dig deeper into the current Windows/Linux behavior of the i9-10900K and see how Windows 10 May 2020 vs. Ubuntu 20.04 LTS stacks up on older Intel CPUs, to determine whether the more competitive Intel Windows performance is a new phenomenon or indeed limited to newer Comet Lake processors.
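The aggregation method matters here: the geometric mean of per-test ratios weights each test equally regardless of its absolute magnitude, which is why Phoronix uses it across heterogeneous benchmarks. A small sketch with illustrative ratios (not the actual Phoronix data):

```python
import math

def geometric_mean(ratios):
    # exp(mean of logs) == nth root of the product; numerically
    # safer than multiplying 101 ratios together directly.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Illustrative per-test ratios only: >1.0 means the Linux result
# was better in that test, <1.0 means Windows was.
ratios = [1.05, 0.98, 1.01, 1.04, 0.99]
advantage = geometric_mean(ratios)
print(f"Linux ahead by {(advantage - 1) * 100:.1f}% overall")
```

Python 3.8+ also ships `statistics.geometric_mean`, which computes the same quantity.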

The narrowing Linux-vs-Windows performance gap on Comet Lake (2% vs 20% on Threadripper) likely reflects Intel's optimized Windows power-management drivers for Turbo Boost Max 3.0 and Thermal Velocity Boost, features that the Linux P-State driver was actively being refactored to support during this period. But raw benchmark throughput misses the point for most users. Developer productivity depends on the toolchain ecosystem: terminal emulators, shell scripting, package management, filesystem semantics, and process management. For decades Windows shipped only the aging conhost console window hosting cmd.exe (and later PowerShell), a UX deficiency that no amount of CPU throughput can compensate for. The introduction of Windows Terminal and WSL 2 acknowledges this gap, but Linux's native POSIX environment, package managers like apt and dnf, and composable CLI tools still provide a productivity advantage that no synthetic benchmark captures.

macOS

Fuchsia OS Architecture: Zircon Kernel and Capability-Based Security

Fuchsia is designed for security and privacy. Security and privacy are woven deeply into the architecture of Fuchsia. The basic building blocks of Fuchsia, the kernel primitives, are exposed to applications as object-capabilities, which means that applications running on Fuchsia have no ambient authority: applications can interact only with the objects to which they have been granted access explicitly. Software is delivered in hermetic packages and everything is sandboxed, which means all software that runs on the system, including applications and system components, receives the least privilege it needs to perform its job and gains access only to the information it needs to know.

Fuchsia is designed to be updatable. Fuchsia works by combining components delivered in packages. Fuchsia packages are designed to be updated independently or even delivered ephemerally, which means packages are designed to come and go from the device as needed and the software is always up to date, like a web page.

Fuchsia aims to provide drivers with a binary-stable interface. In the future, drivers compiled for one version of Fuchsia will continue to work in future versions of Fuchsia without needing to be modified or even recompiled. This approach means that Fuchsia devices will be able to update to newer versions of Fuchsia seamlessly while keeping their existing drivers.
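"No ambient authority" is the key property. A minimal object-capability sketch in Python (illustrative only, not Zircon's actual API): a component can act only on the handles it was explicitly granted at launch, because there is no global namespace to reach for.

```python
# Illustrative object-capability pattern, not Zircon's real interface.
class Channel:
    """A toy message channel standing in for a kernel object."""
    def __init__(self):
        self._messages = []
    def write(self, msg):
        self._messages.append(msg)
    def read(self):
        return self._messages.pop(0)

def launch_component(entry, *granted_handles):
    # The component receives ONLY these handles. With no globals to
    # import or open(), it cannot reach any other object in the system.
    return entry(*granted_handles)

def logger(log_channel):
    log_channel.write("component started")
    return log_channel

chan = Channel()
launch_component(logger, chan)
print(chan.read())  # component started
```

Contrast this with POSIX, where any process can attempt `open("/etc/passwd")` and rely on access-control checks at use time; in a capability system the unreachable object simply does not exist from the component's point of view.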

Fuchsia's Zircon kernel occupies a pragmatic middle ground in the microkernel debate. Rather than pursuing the strict L4-style microkernel architecture (where even device drivers run in user space), Zircon keeps performance-critical components in the kernel while enforcing capability-based security at every system call boundary. This is similar to the design philosophy behind Go's runtime: solve real engineering problems with practical tradeoffs rather than adhering to theoretical purity. The binary-stable driver interface is particularly significant -- it decouples OS updates from hardware vendor cooperation, which is the single biggest obstacle to long-term Android device updates. If Fuchsia succeeds, it would eliminate the fragmentation that leaves billions of Android devices running outdated, unpatched kernels.

Sailfish, iOS, iPadOS

JDK 15 Removes Nashorn JavaScript Engine (JEP 372)

Summary: Remove the Nashorn JavaScript script engine and APIs, and the jjs tool. The engine, the APIs, and the tool were deprecated for removal in Java 11 with the express intent to remove them in a future release.

Motivation: The Nashorn JavaScript engine was first incorporated into JDK 8 via JEP 174 as a replacement for the Rhino scripting engine. When it was released, it was a complete implementation of the ECMAScript-262 5.1 standard. With the rapid pace at which ECMAScript language constructs, along with APIs, are adapted and modified, we have found Nashorn challenging to maintain.

Removing Nashorn is a disciplined exercise in reducing the JDK's maintenance surface area. When Nashorn shipped in JDK 8 (2014), it implemented ECMAScript 5.1 -- but by 2020, the JavaScript ecosystem had moved through ES6, ES2017 async/await, ES2019, and ES2020 with optional chaining and BigInt. Keeping Nashorn current would require the OpenJDK team to track a rapidly evolving specification that is tangential to the JVM's core value proposition. Applications that need embedded JavaScript can use GraalJS (part of GraalVM), which maintains ES2020+ compliance and also runs significantly faster through Truffle's partial evaluation JIT. This removal follows the same principle as the earlier deletion of CORBA and Java EE modules from the JDK: ship less, maintain what remains better.

OpenJDK, Java 15, JVM

Red Hat Fully Supports Quarkus as Kubernetes-Native Java Runtime

Quarkus is more than just a runtime. It is a Kubernetes-native Java stack for building fast, lightweight microservices and serverless applications. It is purpose-built to capitalize on the benefits of cloud-native applications. Quarkus delivers significant runtime efficiencies for applications deployed on Kubernetes with fast startup times, low memory utilization, and small image footprints. One of the founding principles of the Quarkus project was to bring developer joy to enterprise Java developers. What does that mean, and how does Quarkus bring joy? Quarkus is a Kubernetes-native Java framework targeted for containers and serverless due to its fast startup, low memory, and small application size.

Quarkus's primary value proposition -- sub-second startup times and 50MB memory footprints via GraalVM native-image compilation -- addresses a real problem for serverless and scale-to-zero deployments where JVM cold start latency (typically 2-10 seconds for Spring Boot) is a genuine limitation. However, for long-running services behind load balancers, the JVM's JIT compiler produces faster steady-state throughput than any ahead-of-time compiled binary. For teams already running Ktor or Spring Boot applications without cold-start issues, the migration cost (rewriting dependency injection annotations, testing native-image compatibility for every library) rarely justifies the improvement. Quarkus is a strong choice for new greenfield microservices targeting Kubernetes serverless, not a compelling reason to rewrite existing stable services.

Spring Boot, Quarkus, Kotlin, Java, Ktor

GraalVM 20.1 Benchmarks: Outperforms OpenJDK in Multiple Tests

GraalVM offers a comprehensive ecosystem supporting a large set of languages (Java and other JVM-based languages, JavaScript, Ruby, Python, R, WebAssembly, C/C++ and other LLVM-based languages) and running them in different deployment scenarios (OpenJDK, Node.js, Oracle Database, or standalone). This page provides an overview of different scenarios in which GraalVM can make a difference for your applications. Some versatile GraalVM capabilities that are missing from this page are thoroughly summarized in the Top 10 Things To Do With GraalVM blog post.

For existing Java applications, GraalVM can provide benefits by running them faster, providing extensibility via scripting languages, or creating ahead-of-time compiled native images. GraalVM can run in the context of OpenJDK to make Java applications run faster with a new just-in-time compilation technology: GraalVM takes over the compilation of Java bytecode to machine code. In particular for other JVM-based languages such as Scala, this configuration can achieve benefits, as for example experienced by Twitter running GraalVM in production.

GraalVM's competitive advantage lies in its Truffle framework, which enables language-agnostic partial-evaluation JIT compilation. Rather than building separate optimizing compilers for Java, JavaScript, Ruby, and Python, Truffle lets each language provide an interpreter, and the Graal compiler automatically generates optimized machine code from interpreter execution traces. Twitter's production deployment demonstrated 8-12% throughput improvements on Scala workloads by replacing the C2 JIT compiler with Graal. The project is developed by Oracle Labs (Red Hat maintains Mandrel, a downstream distribution tailored to Quarkus), and its native-image capability, used by Quarkus and Micronaut, has made GraalVM the default AOT compilation target for cloud-native Java. The polyglot interoperability -- calling Rust or Python functions from Java without serialization overhead -- opens architectural patterns that were previously impractical given the complexity of traditional FFI.

OpenJDK, Rust, Kotlin, Java

Chromium Reports 70% of Security Bugs Are Memory Safety Flaws

The problem: Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs. [Pie chart: use-after-free, other memory safety, other security bugs, security asserts.] (Analysis based on 912 high or critical severity security bugs since 2015, affecting the Stable channel.) These bugs are spread evenly across our codebase, and a high proportion of our non-security stability bugs share the same types of root cause. As well as risking our users' security, these bugs have real costs in how we fix and ship Chrome.

The limits of sandboxing: Chromium's security architecture has always been designed to assume that these bugs exist, and code is sandboxed to stop them taking over the host machine. Over the past years that architecture has been enhanced to ensure that websites are isolated from one another. That huge effort has allowed us -- just -- to stay ahead of the attackers. But we are reaching the limits of sandboxing and site isolation. A key limitation is that the process is the smallest unit of isolation, but processes are not cheap. Especially on Android, using more processes impacts device health overall: background activities (other applications and browser tabs) get killed with far greater frequency. We still have processes sharing information about multiple sites. For example, the network service is a large component written in C++ whose job is parsing very complex inputs from any source on the network. This is what we call "the doom zone" in our Rule Of 2 policy: the network service is a large, soft target and vulnerabilities there are of Critical severity. Just as Site Isolation improved safety by tying renderers to specific sites, we can imagine doing the same with the network service: we could have many network service processes, each tied to a site or (preferably) an origin. That would be beautiful, and would hugely reduce the severity of network service compromise.
However, it would also explode the number of processes Chromium needs, with all the efficiency concerns that raises. Meanwhile, our insistence on the Rule Of 2 is preventing Chrome developers from shipping features, as it is already sometimes just too expensive to start a new process to handle untrustworthy data.

This disclosure from the Chromium security team is the most compelling empirical case for memory-safe languages in production systems. The 70% figure is not unique to Chrome -- Microsoft reported the same ratio for Windows and Office, and Android's security team confirmed similar numbers. The solution requires languages with compile-time memory safety guarantees: Rust's ownership and borrow checker, or managed runtimes like Kotlin/JVM that eliminate manual memory management entirely. Both approaches ultimately benefit from LLVM as the compilation backend -- Rust compiles through LLVM directly, while Kotlin Native also targets LLVM. As Chromium begins integrating Rust components (starting with third-party library interop), LLVM's role as the shared compilation infrastructure between C++, Rust, and Swift becomes even more strategically important for the entire systems programming ecosystem.

Chromium, Rust, Kotlin, Java

TUXEDO Book BA15: AMD Ryzen Linux Notebook with 91 Wh Battery

It runs and runs and runs and runs and ... Almost the only way to stop the TUXEDO Book BA15 is to switch it off. The BA15 combines AMD's power-efficient Ryzen 5 3500U mobile processor with a huge 91 Wh battery for groundbreaking runtimes while providing strong performance for all everyday tasks. Thanks to its very large 91.25 Wh battery, the TUXEDO Book BA15 reaches maximum runtimes of up to 25 hours in power-saving idle mode. Even in more practical everyday situations, the 15.6-inch laptop lasts a very long time: daily work, web surfing, mail writing and similar tasks are possible for up to 13 hours, and even 1080p video streaming at 50% display brightness results in up to 10 hours of battery life.
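The quoted runtimes are easy to sanity-check: dividing the battery capacity by each runtime gives the implied average power draw (figures from the text; real draw varies with load, brightness, and battery ageing).

```python
# Implied average power draw from the quoted 91.25 Wh capacity.
CAPACITY_WH = 91.25

scenarios = [
    ("idle, power saving", 25),               # hours
    ("everyday work", 13),
    ("1080p streaming @ 50% brightness", 10),
]

for name, hours in scenarios:
    watts = CAPACITY_WH / hours               # Wh / h = W
    print(f"{name}: ~{watts:.1f} W average draw")
```

Roughly 3.7 W at idle and 7 W under light everyday load are plausible figures for a 15 W TDP-class Zen+ APU with the display dimmed, which lends the marketing numbers some credibility.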

The Linux desktop market share challenge is fundamentally a distribution problem, not a software quality problem. Pre-installed Linux notebooks from vendors like TUXEDO address this directly by eliminating the two biggest adoption barriers: hardware compatibility uncertainty and the technical skill required for manual installation. The BA15's 91 Wh battery paired with AMD's Ryzen 5 3500U (12nm Zen+ with Vega 8 integrated graphics) demonstrates that Linux power management has matured significantly -- historically a weak point where Linux laptops consumed 20-30% more power than Windows on identical hardware. TUXEDO's custom firmware tuning and their open-source tuxedo-keyboard driver contribute to achieving battery life figures that are competitive with Windows equivalents, which is essential for Linux to be taken seriously as a commercial laptop platform.

Linux

Microsoft Builds Wayland Compositor for WSL GUI App Support

In terms of presentation, I need to clarify a few things. We announced today that we are also adding support for Linux GUI applications. The way this will work is roughly as follows. We are writing a Wayland compositor that will essentially bridge over RDP-RAIL (RAIL = Remote Application Integrated Locally). We are starting from a Weston base. Weston already has an RDP Backend, but that is for a full desktop remoting scheme. Weston draws a desktop and remotes it over RDP, and then you can view that desktop using an RDP client on the Windows side. RAIL works differently. In that case, our Wayland compositor no longer paints a desktop; instead it simply forwards individual visuals / wl_surface over the RDP RAIL channel such that these visuals can be displayed on the Windows desktop. The RDP client creates proxy windows for each of these top-level visuals and their content is filled with the data coming over the RDP channel. All pixels are owned by the RDP server/WSL, so these windows look different than native windows as they are painted and themed by WSL. The proxy window on the host gathers input and injects it back over RDP. This is essentially how application remoting works on Windows and this is all publicly documented as part of the various RDP protocol specifications. As a matter of fact, for the RDP server on the Weston side we are looking at continuing to leverage FreeRDP (and provide fixes/enhancements as needed to the public project). Further, we are looking at further improvement down this path to avoid having to copy the content over the RAIL channel and instead just share/swap buffers between the guest and the host. We have extensions to the RDP protocol, called VAIL (Virtualized Application Integrated Locally), which do that today. Today this is only used in Windows-on-Windows for very specific scenarios. 
We are looking at extending the public RDP protocol with these VAIL extensions to make this an official Microsoft-supported protocol which would allow us to target this in WSL. We have finished designing this part in detail. Our goal would be to leverage something along the lines of wl_drm, dma-buf, dma-fence, etc. This compositor and all our contributions to FreeRDP will be fully open source, including our design doc. We are not quite sure yet whether this will be offered as a separate project entirely distinct from its Weston root, or if we will propose an extension to Weston to operate in this mode. We would like to build it such that in theory any Wayland compositor could add support for this mode of operation if they want to remote applications to a Windows host (over the network, or on the same box).

Microsoft's WSLg (Windows Subsystem for Linux GUI) Wayland compositor architecture is technically sophisticated: it uses RDP-RAIL to render individual Linux application windows as first-class citizens on the Windows desktop, rather than remoting an entire Linux desktop session. The VAIL extension for shared GPU buffer access between guest and host is the key performance optimization, eliminating pixel-copy overhead that would otherwise make GUI applications feel sluggish. The strategic question is directional: Microsoft is making Windows a better host for Linux workloads, which increases Windows's value proposition for developers who need both ecosystems. But it also normalizes Linux application usage on the desktop, which could accelerate the transition to Linux-native laptops for users who discover they spend most of their time in Linux GUI apps and terminals anyway. Whether WSL strengthens Windows lock-in or serves as a gateway drug to full Linux adoption remains the central tension of this initiative.

DirectX, Windows, macOS, GNOME, Wayland