GraalVM 20.1 Delivers Up to 50% JDK 11 Performance Boost

- "Significantly improved performance" on certain JDK 11 benchmarks due to synchronization fixes, yielding up to 50% better throughput for affected workloads. - Java mitigations for the Intel Jump Conditional Code (JCC) Erratum are now selectively enabled only on Intel CPUs that require the workaround, rather than being applied globally across all processor architectures. - ECMAScript 2020 features are now enabled by default in GraalVM's JavaScript engine. The NPM package runner (NPX) is also bundled with GraalVM distributions. - GraalVM Enterprise can now execute C++ code in managed mode through GraalVM's LLVM runtime. - Python runtime performance improvements along with broader language compatibility enhancements.

GraalVM 20.1 represents a significant inflection point for polyglot runtimes. The 50% synchronization-related speedup on JDK 11 narrows the gap with JDK 8 performance that many teams cited as justification for delaying migration. More strategically, GraalVM's expanding language support -- now spanning Java, JavaScript, Python, Ruby, R, and C++ via LLVM -- positions it as a direct competitor to WebAssembly's polyglot ambitions. If GraalVM continues to close the native-image startup time gap while WASM runtimes mature their ecosystem tooling, enterprises may face a genuine architectural choice between these two polyglot platforms within the next few years.

Java, .NET, Kotlin, JDK, WASM

WSL2 Gains GPU Acceleration and Linux GUI App Support

Microsoft is planning to dramatically improve its Windows Subsystem for Linux (WSL) with GUI application support and GPU hardware acceleration. With a full Linux kernel shipping in Windows 10 via WSL version 2, Microsoft now intends to support Linux GUI apps running alongside regular Windows applications. This will work without requiring X11 forwarding, and is primarily designed for developers to run Linux integrated development environments (IDEs) alongside their regular Windows applications.

The fundamental challenge with WSL remains its 99% compatibility ceiling. That remaining 1% carries outsized consequences for real-world development workflows. A critical example: Windows enforces mandatory file locking -- you cannot delete a file held by an active process -- while Linux allows unlinking open files without restriction. This architectural difference means build tools like Gradle that expect POSIX file semantics will fail unpredictably when both Windows and Linux processes touch the same filesystem. GPU acceleration and GUI support are welcome additions, but they do not address the fundamental OS-level semantic mismatches that force developers back to Windows-native toolchains once they move beyond trivial workloads.
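The file-semantics mismatch is easy to demonstrate. The sketch below, a minimal Node example assuming a POSIX filesystem, unlinks a file while a descriptor is still open, which succeeds on Linux; a Windows process that opened the same file without delete sharing would block the deletion instead:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const file = path.join(os.tmpdir(), "unlink-demo.txt");
fs.writeFileSync(file, "build artifact");

// Hold the file open, as a long-lived build daemon would.
const fd = fs.openSync(file, "r");

try {
  // POSIX: unlink succeeds even though fd is still open; the inode
  // survives until the last descriptor is closed.
  fs.unlinkSync(file);
  console.log("unlinked while open (POSIX semantics)");

  // The open descriptor can still read the now-unlinked data.
  const buf = Buffer.alloc(5);
  fs.readSync(fd, buf, 0, 5, 0);
  console.log(buf.toString()); // "build"
} catch (err) {
  // A Windows program holding the file without delete sharing
  // would force this path instead.
  console.log("unlink refused:", (err as Error).message);
} finally {
  fs.closeSync(fd);
}
```

A Gradle clean task hitting this difference mid-build is exactly the kind of unpredictable failure the paragraph above describes.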

Linux, Microsoft, Windows, WSL

Lenovo Ships Fedora 32 Pre-Installed on ThinkPad Laptops

This summer, Lenovo will launch a trio of new ThinkPad laptops powered by Fedora 32. This represents a substantial boost for Linux's visibility, and Lenovo's vocal endorsement is a step toward establishing desktop Linux as a viable alternative to Windows for creators, developers, and general users. But beyond the headline, what matters are the steps Lenovo and Fedora -- and by extension Red Hat -- are taking to treat Linux as a first-class citizen when these systems launch.

This is arguably one of the most significant OEM commitments to desktop Linux in recent years. Lenovo shipping Fedora on ThinkPads -- workstation-class hardware with certified driver support -- removes the single biggest barrier to Linux adoption: hardware compatibility uncertainty. The move could push desktop Linux meaningfully beyond its roughly 1% market share if other OEMs follow suit. However, long-term success depends on whether Lenovo treats this as a sustained product line rather than a one-off pilot, and whether the Linux community can deliver the kind of polished, zero-configuration experience that business users expect after years on macOS and Windows.

Linux, macOS, Notebook, Dell

Linux Kernel Gains 20% Power Savings via latency_nice Scheduler

IBM engineers have been working on improvements to the Linux kernel's power consumption while running latency-sensitive tasks, achieving comparable performance with up to approximately 20% power savings in their testing. The new patch series builds on earlier work providing a per-task "latency_nice" knob for scheduler hints. latency_nice indicates the latency requirements of a given task so the scheduler can make better decisions. Among the use cases discussed are improved turbo/boost frequency decisions based on grouping tasks with similar latency requirements. Additionally, the scheduler could avoid assigning low-latency tasks to a CPU core running AVX-512 workloads, where core frequencies become significantly constrained.

IBM's latency_nice patch series addresses a long-standing gap in the Linux scheduler: the inability to express per-task latency constraints separate from priority. Current schedulers treat all tasks as latency-equal, forcing a global trade-off between power and responsiveness. With latency_nice, the kernel can group latency-tolerant background tasks onto shared cores at lower frequencies while reserving turbo boost headroom for interactive workloads. The AVX-512 avoidance logic is particularly valuable for mixed workloads on Intel Xeon servers, where heavy vector operations throttle the entire core and penalize co-located latency-sensitive services. If merged upstream, this could substantially reduce power consumption in data center and laptop scenarios alike.

Linux, macOS, Notebook

Valve Drops macOS SteamVR Support, Focuses on Windows and Linux

The writing is on the wall for macOS as a general-purpose operating system, since Apple will most likely use the transition to ARM processors in Macs to further lock down macOS, making it more like iOS. While macOS might be more popular than Linux in absolute user numbers, the Linux userbase has a far larger community of skilled developers, programmers, and tinkerers willing to put in the effort to make non-native games work on Linux and to improve VR device support. These are exactly the kind of people Apple seems to have a deep-rooted disdain for. Expect more announcements like this over the coming years, as game companies and other developers decide whether to support an isolated and locked-down platform like macOS on ARM -- a platform without first-party OpenGL or Vulkan support, with a steward actively pushing a proprietary API that cannot be used anywhere else.

Valve's decision to drop macOS SteamVR support is a direct consequence of Apple's graphics API strategy. By deprecating OpenGL in 2018 and refusing to adopt Vulkan, Apple forced every cross-platform graphics vendor into maintaining a Metal-specific code path that provides zero reuse on Windows or Linux. For VR specifically, the economics are stark: Metal development costs must be justified against a macOS VR user base that barely registers in Steam's hardware surveys. This sets a precedent for other graphics-intensive applications -- any software that requires low-level GPU access now faces a growing tax for macOS support. The irony is that Apple's ARM transition could have been a competitive advantage for GPU compute, but their insistence on proprietary APIs is accelerating developer abandonment instead.

Gaming, Linux, macOS, Vulkan, OpenGL

Lenovo Launches Fedora Linux Community Series on ThinkPad P1, P53, X1

Today, I'm excited to share some big news with you -- Fedora Workstation will be available on Lenovo ThinkPad laptops! Yes, many of us already run a Fedora operating system on a Lenovo system, but this is different. You'll soon be able to get Fedora pre-installed by selecting it as you customize your purchase. This is a pilot of Lenovo's Linux Community Series -- Fedora Edition, beginning with ThinkPad P1 Gen2, ThinkPad P53, and ThinkPad X1 Gen8 laptops, with possible expansion to other models in the future. The Lenovo team has been working with Red Hat engineers on Fedora desktop technologies to ensure that the upcoming Fedora 32 Workstation is ready to go on their laptops. The best part about this is that Lenovo follows existing trademark guidelines and respects open source principles. These laptops ship with software exclusively from the official Fedora repositories. When they ship, you'll see Fedora 32 Workstation. (Models which can benefit from the NVIDIA binary driver can install it in the normal way after the fact, by opting in to proprietary software sources.)

The success of Lenovo's Fedora pilot hinges on whether the Linux community can bridge the gap between technical excellence and mainstream usability. Historically, Linux desktop initiatives from OEMs -- Dell's Ubuntu line, System76's Pop!_OS -- have remained niche offerings serving a self-selecting technical audience. The challenge is not hardware compatibility (Red Hat's engineering collaboration largely solves that) but rather the out-of-box experience for the 90% of ThinkPad buyers who expect macOS-level polish: seamless WiFi roaming, reliable display scaling on external monitors, and zero-configuration printing. If Lenovo and Red Hat invest in that last-mile UX polish rather than treating this as a checkbox for developer market share, it could genuinely expand Linux's addressable desktop market.

Fedora, Linux, Ubuntu, Mac, macOS

WebGPU Enables Browser-Native ML and GPU Compute via WASM

WebGPU is an emerging API that provides access to the graphics and computing capabilities of GPU hardware on the web. It's designed from the ground up within the W3C GPU for the Web group by all major browser vendors, as well as Intel and others, guided by the following principles: [...] We are excited to bring WebGPU support to Firefox because it will allow richer and more complex graphics applications to run portably on the web. It will also make the web platform more accessible to teams who primarily target modern native platforms today, thanks to the use of modern GPU concepts and first-class WASM (WebAssembly) support.

WebGPU is designed to work on top of modern graphics APIs: Vulkan, D3D12, and Metal. The constructs exposed to users reflect the basic primitives of these low-level APIs. Let's walk through the main constructs of WebGPU and explain them in the context of WebGL -- the only baseline we have today on the Web.

WebGPU combined with WebAssembly fundamentally changes what is possible in the browser. Where WebGL was limited to graphics rendering with an OpenGL ES 2.0-era API, WebGPU exposes general-purpose compute shaders that can run ML inference, physics simulations, and data-parallel workloads directly on the GPU. This is particularly significant because Java has struggled for years to provide meaningful GPU access -- CUDA requires native bindings, and OpenCL support remains fragmented across JVM implementations. WebGPU + WASM sidesteps this entirely by providing a hardware-abstracted compute layer accessible from any language that compiles to WebAssembly. If browser-based ML inference reaches acceptable latency, it could shift substantial compute from cloud APIs to client-side execution, reducing both costs and privacy concerns.
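To make the compute-shader point concrete, here is a sketch of a minimal WebGPU dispatch that doubles an array on the GPU. It is written against the current spec shape (names like `dispatchWorkgroups` and WGSL itself postdate the 2020 origin trials), and it exits gracefully where `navigator.gpu` is absent:

```typescript
// Minimal WebGPU compute sketch: double each element of a Float32Array
// on the GPU. API names follow the current WebGPU spec; the surface has
// evolved since the early origin trials, so treat this as a sketch.
const WGSL = `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  data[id.x] = data[id.x] * 2.0;
}`;

async function run(): Promise<string> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (!gpu) {
    console.log("WebGPU unavailable in this runtime; skipping dispatch");
    return "skipped";
  }
  const { GPUBufferUsage, GPUMapMode } = globalThis as any;
  const adapter = await gpu.requestAdapter();
  const device = await adapter.requestDevice();

  const input = new Float32Array([1, 2, 3, 4]);
  // Storage buffer the shader reads and writes in place.
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });
  device.queue.writeBuffer(buffer, 0, input);

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: device.createShaderModule({ code: WGSL }), entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Record the dispatch, then copy the result to a CPU-mappable buffer.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(1); // 64 invocations cover the 4-element array
  pass.end();
  encoder.copyBufferToBuffer(buffer, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  console.log(Array.from(new Float32Array(readback.getMappedRange()))); // expected: [2, 4, 6, 8]
  return "ok";
}

run();
```

Note how little of this is graphics-specific: the same storage-buffer and bind-group machinery carries ML inference kernels just as well as pixel work.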

HTML5, Vulkan, OpenCL, AI/ML

Java 14 G1 GC Cuts Pause Times Dramatically Since JDK 8

In the six years since JDK 8 was released, the Java platform has evolved substantially. One major change in JDK 9 was making G1 the default garbage collector. Parallel GC, the previous default, focused on raw throughput, and this change shifted the platform toward a more balanced model in which latency matters as much as throughput. G1 is designed to avoid the long, occasional full collections that eventually occur with a stop-the-world collector like Parallel GC. To achieve this, G1 performs parts of the collection work concurrently with the Java application. This comes with a slight throughput cost that is especially visible in benchmarks measuring only throughput. For applications that observed a performance drop when migrating from Java 8 to a later version, this shift in GC strategy was usually the main reason. Applications that want maximum throughput can switch back to Parallel GC (using the JVM option -XX:+UseParallelGC) and still take advantage of all the other performance improvements in JDK 14.

The improvements to G1 GC in JDK 14 address what has historically been Java's most cited disadvantage against C/C++ in latency-sensitive systems: unpredictable pause times. In practice, a well-written Java application often outperforms an equivalent C/C++ application because Java's runtime optimizations -- JIT compilation, escape analysis, and memory layout optimizations -- are applied automatically, while C/C++ requires manual tuning to achieve similar results. The real penalty was always GC-induced pauses disrupting tail latencies. With G1 in JDK 14 demonstrating sub-millisecond pauses for many workload profiles, and ZGC offering sub-10ms maximums even for multi-terabyte heaps, Java's GC story has shifted from a competitive liability to a genuine engineering advantage over manual memory management for the vast majority of server workloads.

Java 14, Java 15, Performance, Benchmark

Google Replaces Android Apps with PWAs on Chrome OS

Google is replacing some Android apps for Chromebooks with Progressive Web Apps (PWAs). A PWA is essentially a web application that looks and feels like a native app. This is good news for Chromebook owners. In many cases, PWAs are faster and more functional than their Android counterparts. PWAs also consume less storage and require fewer system resources to run. It's well known that some Android apps on Chrome OS perform poorly. Google has struggled for years to optimize Android apps for tablet-sized screens. While the selection has improved since the Pixelbook era, many programs remain notoriously incompatible. Even though PWAs have been available for some time, many users didn't know how to install them or understand why they were a better alternative. Some users also simply prefer getting all of their apps through the same process.

Google's pivot from Android apps to PWAs on Chrome OS reflects a broader architectural convergence between web and native platforms. The move is overdue -- Android apps on Chromebooks have always been a compatibility shim running inside ARC (Android Runtime for Chrome), adding overhead and introducing touch-first UX assumptions that conflict with keyboard-and-trackpad interaction models. PWAs eliminate this impedance mismatch entirely. For the Kotlin ecosystem specifically, this does not diminish Kotlin's relevance but rather shifts the target: Kotlin/JS and Kotlin/Wasm become increasingly viable paths for building these web-first applications. The broader implication is that the web platform, powered by Service Workers, WebAssembly, and now WebGPU, is steadily absorbing capabilities that previously required native runtimes.
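The install story behind a PWA rests on two pieces: a web app manifest and a service worker. A minimal sketch of both, assuming hypothetical file names (`/sw.js`, the icon path) and run from a page's entry script:

```typescript
// Sketch of the two pieces behind an installable PWA. File names,
// icon paths, and the cache strategy mentioned below are assumptions.

// 1. A web app manifest, served as JSON and linked from the page via
//    <link rel="manifest" href="/manifest.webmanifest">.
const manifest = {
  name: "Example PWA",
  start_url: "/",
  display: "standalone", // launches in its own window, like a native app
  icons: [{ src: "/icon-192.png", sizes: "192x192", type: "image/png" }],
};

// 2. A service worker registration from the page's entry script.
//    The worker at /sw.js would implement offline support, e.g. a
//    cache-first fetch handler over a named Cache Storage bucket.
async function enablePwa(): Promise<string> {
  const sw = (globalThis as any).navigator?.serviceWorker;
  if (!sw) {
    console.log("Service workers unavailable in this runtime");
    return "unsupported";
  }
  const registration = await sw.register("/sw.js", { scope: "/" });
  console.log("service worker registered, scope:", registration.scope);
  return "registered";
}

enablePwa();
console.log(JSON.stringify(manifest));
```

Once both pieces are in place, Chrome OS surfaces the app in the launcher like any installed program, which is what makes the Android-app replacement viable.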

PWA, Web, WASM, WebAssembly

Microsoft Acquires ThreadX RTOS, Rebrands as Azure RTOS

Microsoft's 2019 acquisition of Express Logic brought its ThreadX real-time operating system into the Azure ecosystem. Now branded as Azure RTOS, it is an industrial-grade, real-time operating system for devices whose complexity falls between an Arduino and a Raspberry Pi: hardware that needs more than bare-metal firmware but less than the full Linux stack underpinning Azure Sphere. ThreadX, already running on more than two billion devices, extends Microsoft's edge compute capabilities. At the heart of Azure RTOS is the ThreadX picokernel, designed to scale across a range of hardware with a customizable deployment image that bundles only the services your code requires. Those services are implemented as a C library, simplifying the building and delivery of runtime code. The kernel is distributed as C source code, making it possible (though not recommended) to modify it for specific hardware or application requirements. Services run side by side as threads rather than in layers, allowing ThreadX to be optimized for speed and for fast switching between services. Performance is essential: real-time operating systems must respond quickly to event-driven inputs because they often operate in safety-critical roles.

Microsoft's acquisition of Express Logic and its ThreadX picokernel fills a strategic gap in Azure's IoT stack. Azure Sphere targets higher-end MCUs with full Linux, but the vast majority of deployed IoT devices -- industrial sensors, medical monitors, automotive ECUs -- run on resource-constrained microcontrollers with kilobytes of RAM, not megabytes. ThreadX's picokernel architecture, with a footprint as small as 2KB, addresses this tier directly. With over two billion ThreadX deployments already in the field, Microsoft effectively acquired an installed base that rivals the entire Azure cloud user count. The strategic play is clear: connect these billions of edge devices to Azure IoT Hub for telemetry and management, creating a hardware-to-cloud pipeline that AWS and Google currently lack at this scale.

Windows, RTOS, Azure