Linux 5.14 Btrfs Gains 12% Faster Truncation and 17% xattr Boost

 
Key Btrfs improvements in Linux 5.14 include:

- Skipping the full sync when truncation does not touch extents, reducing run-times by up to 12%.
- Eliminating unnecessary extended attribute logging during fast fsyncs, yielding approximately 17% higher throughput and 17% lower run-time on xattr-intensive workloads.
- A sysfs control to limit scrub I/O bandwidth per device.
- Exporting device statistics via sysfs at /sys/fs/btrfs/FSID/devinfo/DEVID/error_stats.
- New ioctls for cancellable resize and device delete.
- Preemptive flushing improvements.
- Preparations for sub-page blocksize handling.

The cumulative impact of these Btrfs optimizations in Linux 5.14 is significant for production workloads. The 12% truncation speedup comes from skipping the full log sync when the operation does not touch any extents, which directly benefits database and log-rotation patterns. The 17% xattr throughput gain matters especially for container runtimes and SELinux-heavy environments that attach security labels to every file. Combined with the new per-device scrub bandwidth throttling via sysfs, administrators can now run background integrity checks without saturating storage I/O on multi-disk arrays. The sub-page blocksize groundwork also points toward mounting 4K-block Btrfs filesystems on architectures with larger page sizes, such as 64K-page AArch64 and POWER systems.
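
As a sketch of how the new sysfs surface might be consumed, the helper below parses the per-device counter file; the "name value" line layout and the five counter names are assumptions based on the statistics btrfs tracks, so treat the paths and format as illustrative rather than authoritative:

```python
# Sketch: reading the per-device error counters Btrfs 5.14 exposes in sysfs.
# Assumption: the file contains one "counter value" pair per line, covering
# the five error classes btrfs tracks (write/read/flush/corruption/generation).
from pathlib import Path

def parse_error_stats(text):
    """Parse 'counter value' lines into a dict of integer counters."""
    stats = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, value = line.split()
        stats[name] = int(value)
    return stats

def read_device_error_stats(fsid, devid):
    """Read /sys/fs/btrfs/<FSID>/devinfo/<DEVID>/error_stats (kernel >= 5.14)."""
    path = Path(f"/sys/fs/btrfs/{fsid}/devinfo/{devid}/error_stats")
    return parse_error_stats(path.read_text())

# Example with a canned sample, since a live Btrfs mount may not be present:
sample = "write_errs 0\nread_errs 3\nflush_errs 0\ncorruption_errs 1\ngeneration_errs 0\n"
print(parse_error_stats(sample)["read_errs"])  # 3
```

A monitoring agent could poll this file and alert when any counter increases between samples, which is cheaper than shelling out to `btrfs device stats`.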

Google Launches Open Source Vulnerability Schema for Automated CVE Triage

 
In recent months, Google has launched several efforts to strengthen open-source security across multiple fronts. One key focus is improving how known security vulnerabilities are identified and addressed without extensive manual effort. A precise common data format for triaging and remediating security vulnerabilities is essential, particularly when communicating risks across affected dependencies -- it enables easier automation and empowers consumers of open-source software to determine exposure and apply security fixes as quickly as possible. Google released the Open Source Vulnerabilities (OSV) database in February with the goal of automating and improving vulnerability triage for developers and users of open source software. This initial effort was bootstrapped with a dataset of several thousand vulnerabilities from the OSS-Fuzz project. Deploying OSV to communicate precise vulnerability data for hundreds of critical open-source projects validated the format's utility, and community feedback drove improvements to the project; for example, the Cloud API key requirement was dropped, making the database easier to access for a broader user base. The community response also demonstrated broad interest in extending the effort further.

The OSV schema addresses a long-standing fragmentation problem in vulnerability tracking. Before OSV, each ecosystem (npm, PyPI, crates.io, Maven Central) maintained its own advisory format, forcing security tools to implement bespoke parsers for every source. By defining a unified JSON schema with precise affected-version ranges and package ecosystem identifiers, OSV enables downstream tooling -- dependency scanners, CI/CD gates, and SBOM generators -- to ingest vulnerability data from multiple sources through a single parser. The decision to drop the API key requirement was strategically important: it lowered the barrier for integration into open-source CI pipelines where managing secrets adds friction. As more advisory databases adopt this schema, the compounding benefit is that a single vulnerability report can propagate automatically across every affected package manager without manual re-entry.
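
To illustrate what "a single parser" buys downstream tooling, here is a minimal, hedged sketch of consuming an OSV-style advisory. The record is invented for illustration; the field names (id, affected, package, ranges, events) follow the published OSV schema, but the version comparison is deliberately simplistic (plain dotted integers, not full SemVer):

```python
# Sketch: checking whether an installed package version falls inside the
# affected range of an OSV-format advisory. The advisory below is a made-up
# example; real ranges may use SEMVER or GIT types with richer event lists.
import json

advisory_json = """
{
  "id": "OSV-2021-EXAMPLE",
  "summary": "Hypothetical heap overflow",
  "affected": [{
    "package": {"ecosystem": "PyPI", "name": "examplepkg"},
    "ranges": [{"type": "ECOSYSTEM",
                "events": [{"introduced": "1.0.0"}, {"fixed": "1.4.2"}]}]
  }]
}
"""

def vtuple(version):
    """Naive dotted-numeric version key (assumption: no pre-release tags)."""
    return tuple(int(part) for part in version.split("."))

def is_affected(advisory, ecosystem, name, version):
    """True if (ecosystem, name, version) falls in any affected range."""
    for aff in advisory["affected"]:
        pkg = aff["package"]
        if (pkg["ecosystem"], pkg["name"]) != (ecosystem, name):
            continue
        for rng in aff["ranges"]:
            introduced, fixed = None, None
            for event in rng["events"]:
                introduced = event.get("introduced", introduced)
                fixed = event.get("fixed", fixed)
            in_range = introduced is not None and vtuple(version) >= vtuple(introduced)
            if in_range and (fixed is None or vtuple(version) < vtuple(fixed)):
                return True
    return False

adv = json.loads(advisory_json)
print(is_affected(adv, "PyPI", "examplepkg", "1.2.0"))  # True
print(is_affected(adv, "PyPI", "examplepkg", "1.4.2"))  # False (fixed version)
```

Because every ecosystem's advisories share these fields, the same loop works unchanged for npm, crates.io, or Go module reports.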

Intel Open-Sources LLVM-Based Graphics Compiler for OpenCL

 
The Intel Graphics Compiler for OpenCL is an LLVM-based compiler targeting Intel Gen graphics hardware architecture. Refer to http://01.org/compute-runtime for additional details regarding Intel's motivation and approach to OpenCL support in the open source ecosystem.

Intel's open-source GPU compiler stack fits a broader strategic pattern: Intel consistently publishes driver and compiler source for its hardware, spanning wired networking (ice/i40e), wireless (iwlwifi), and now graphics. AMD's fully open-source GPU stack arrived later, and since both vendors still distribute binary firmware, the practical distinction lies in how much of the driver and compiler logic is open and upstreamed. For developers, Intel's approach means that GPU compute workloads via OpenCL can be debugged, profiled, and patched at every layer of the stack, and the LLVM foundation lets Intel's compiler pick up upstream LLVM optimizations automatically. For developer workstations in particular, this depth of open-source toolchain integration remains a compelling differentiator.

LibreOffice WebAssembly Port Targets Full Browser Execution

 
This page describes the port of LibreOffice to WebAssembly (WASM) using the Emscripten toolchain, currently targeting the Qt5 VCL backend. The goal is to cross-compile LibreOffice to run in the browser, potentially with some native UI via LibreOfficeKit. Future targets may include a WASI runtime or Node.js.

The LibreOffice WASM port is technically ambitious because it requires compiling millions of lines of C++ through Emscripten while preserving the Qt5 VCL rendering pipeline. The key challenge is not just compilation but performance: browser-side WASM threading depends on SharedArrayBuffer and Atomics, which are not universally enabled, so LibreOffice's multi-threaded layout and rendering engine must be adapted for a largely single-threaded execution model. If successful, this port would deliver a fully offline-capable, privacy-respecting office suite that runs without any server-side processing, unlike cloud suites such as Google Docs that depend on a remote backend. The WASI runtime target is also notable because it would allow LibreOffice to function as a headless document-processing engine in serverless environments, enabling server-side document conversion without installing native packages.

Google Funds Rust for Linux Kernel to Strengthen Memory Safety

 
Google wants to see Rust programming language support within the Linux kernel and is publicly announcing its formal backing for the "Rust for Linux" effort as it aims for mainline inclusion. To improve memory safety, the company has contracted developer Miguel Ojeda to continue his work on Rust for the Linux kernel and related security efforts, under a contract extending through at least the next year.

Rust's entry into the Linux kernel addresses a concrete problem: roughly two-thirds of kernel CVEs stem from memory safety bugs in C code, including use-after-free, buffer overflows, and uninitialized memory reads. Rust's ownership model eliminates these categories at compile time. The trade-off is real, though: the borrow checker imposes a cognitive overhead that slows initial development velocity, which matters less in kernel code, where correctness outweighs iteration speed. For application-level code such as web services and microservices, garbage-collected languages like Kotlin or Go typically offer a better balance of safety and productivity. The strategic significance of Google funding this work is that it signals enterprise willingness to invest in systemic security improvements rather than patching individual CVEs after exploitation.

Fedora Cloud 35 Adopts Btrfs as Default Filesystem

 
Last month plans were published for Fedora Cloud 35 to use the Btrfs filesystem by default, following Fedora Workstation, which has used Btrfs by default for several releases. That plan has now been approved by FESCo, allowing the change to proceed. Fedora developers along with engineers from Amazon, Facebook, and other organizations have advocated for using Btrfs by default with Fedora Cloud. Key Btrfs features of interest for cloud deployments include transparent filesystem compression, copy-on-write (CoW) semantics, reflinks and snapshots, stronger data integrity, and online shrink and grow, among other capabilities.

Fedora Cloud adopting Btrfs by default is a strong signal of production readiness for a filesystem that was long considered experimental. The cloud context makes this particularly meaningful: Amazon and Facebook engineers backing the change means Btrfs is being validated at hyperscaler scale. For cloud workloads, the most impactful Btrfs features are transparent zstd compression (which reduces EBS storage costs and I/O latency simultaneously), instant snapshots for rapid VM cloning, and reflinks that enable copy-on-write file duplication without consuming additional disk space. The migration path from ext4 is straightforward with btrfs-convert, but for server workloads previously running XFS, the switch requires evaluating whether Btrfs's metadata overhead on very small file operations is acceptable for the specific workload profile.
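
Reflinks can be exercised directly from userspace via the FICLONE ioctl (the mechanism behind `cp --reflink`). As a hedged sketch, the helper below hardcodes FICLONE's value for the common Linux ABI and adds a plain-copy fallback of my own for filesystems without reflink support; it is an illustration, not a btrfs tool:

```python
# Sketch: copy-on-write file cloning via the Linux FICLONE ioctl.
# Assumption: FICLONE = _IOW(0x94, 9, int) = 0x40049409 on common Linux ABIs.
# Filesystems without reflink support (e.g. ext4, tmpfs) reject the ioctl,
# in which case we fall back to an ordinary byte copy.
import errno
import fcntl
import os
import shutil
import tempfile

FICLONE = 0x40049409  # _IOW(0x94, 9, int)

def clone_or_copy(src, dst):
    """Reflink src to dst when the filesystem supports it, else copy bytes."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        try:
            fcntl.ioctl(d.fileno(), FICLONE, s.fileno())
            return "reflink"  # CoW clone: no data blocks duplicated
        except OSError as e:
            if e.errno not in (errno.EOPNOTSUPP, errno.ENOTTY,
                               errno.EINVAL, errno.EXDEV):
                raise
    shutil.copyfile(src, dst)  # fallback on non-CoW filesystems
    return "copy"

# Demo in a temporary directory (which filesystem backs it determines the path taken):
with tempfile.TemporaryDirectory() as tmp:
    src, dst = os.path.join(tmp, "a"), os.path.join(tmp, "b")
    with open(src, "wb") as f:
        f.write(b"snapshot me")
    method = clone_or_copy(src, dst)
    with open(dst, "rb") as f:
        assert f.read() == b"snapshot me"
    print(method)  # "reflink" on Btrfs/XFS, "copy" elsewhere
```

On Btrfs the clone completes in constant time regardless of file size, which is what makes reflink-based VM image duplication attractive.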

XFS Hits 1.7M Transactions Per Second in Linux 5.14

 
The scalability work for the XFS filesystem shows strong performance improvements. Transaction rates climb from approximately 700k to 1.7M commits per second, with flush operations reduced by two orders of magnitude for metadata-heavy workloads that do not enforce fsync.

A 2.4x increase in transaction throughput (700k to 1.7M commits/second) combined with a roughly 100x reduction in flush operations is a transformative improvement for XFS on high-IOPS NVMe storage. These gains primarily benefit metadata-intensive workloads such as container image layer management, mail servers, and build systems that create and delete thousands of small files per second. The optimization targets the log subsystem, where lock contention around the Committed Item List (CIL), the in-memory list of modified items awaiting checkpoint, previously serialized concurrent metadata updates. XFS remains the strongest choice for large-file sequential I/O workloads (video editing, database tablespaces) where its allocation group parallelism and delayed allocation strategy minimize fragmentation. Btrfs offers richer features (snapshots, compression, checksumming), but XFS continues to lead in raw throughput for workloads that prioritize speed over data management flexibility.

Hyper-V DRM Display Driver Lands in Linux 5.14 Kernel

 
Last summer Microsoft engineers submitted a DRM kernel display driver for their Hyper-V synthetic video device. One year later, after several rounds of code review, this Hyper-V DRM driver will be going mainline with the upcoming Linux 5.14 kernel cycle. This open-source Direct Rendering Manager driver supports Microsoft's Hyper-V synthetic video device for display output within their virtualized environment. It builds on the company's existing framebuffer (hyperv_fb) driver but is now a full DRM driver compatible with Wayland compositors and modern display stacks.

The upgrade from a simple framebuffer driver (hyperv_fb) to a proper DRM/KMS driver is technically significant for Linux-on-Hyper-V deployments. The framebuffer driver was limited to basic display output with no hardware acceleration, no Wayland support, and fixed resolution handling. A DRM driver enables KMS mode-setting, multi-monitor configurations, and integration with modern compositors like Sway and GNOME Wayland. This directly benefits Azure VM users running Linux desktop workloads and WSL2 GUI applications (WSLg). Microsoft's strategy here serves a clear business interest: the better Linux runs on Hyper-V and Azure, the more Linux workloads migrate to Microsoft's cloud rather than competing hypervisors. The year-long code review process also demonstrates that the upstream kernel community holds corporate contributors to the same review standards regardless of company size.

JEP 391: Native Java 17 on Apple M1 via macOS AArch64 Port

 
Motivation
Apple has announced a long-term plan to transition their line of Macintosh computers from x64 to AArch64. We therefore expect to see broad demand for a macOS/AArch64 port of the JDK. Although it will be possible to run a macOS/x64 build of the JDK on AArch64-based systems via macOS's built-in Rosetta 2 translator, the translation will almost certainly introduce a significant performance penalty.

Description
An AArch64 port already exists for Linux (JEP 237), and work is underway on an AArch64 port for Windows (JEP 388). We expect to reuse existing AArch64 code from these ports by employing conditional compilation — as is usual in ports of the JDK — to accommodate differences in low-level conventions such as the application binary interface (ABI) and the set of reserved processor registers. macOS/AArch64 forbids memory segments from being executable and writable at the same time, a policy known as write-xor-execute (W^X). The HotSpot VM routinely creates and modifies executable code, so this JEP will implement W^X support in HotSpot for macOS/AArch64.

JEP 391 resolves a measurable performance gap: Java applications running under Rosetta 2 translation on Apple M1 incur roughly 20-30% overhead due to instruction-level x86-to-ARM translation, JIT code invalidation, and memory-mapping differences. The native AArch64 port eliminates this entirely. The most technically interesting aspect is the W^X (write-xor-execute) requirement: Apple Silicon enforces that memory pages cannot be simultaneously writable and executable, which fundamentally conflicts with how HotSpot's JIT compiler works -- it generates machine code into memory and then executes it. The port handles this by allocating JIT code regions with Apple's MAP_JIT flag and toggling the calling thread between write and execute modes (via pthread_jit_write_protect_np) around code generation, so a page is never writable and executable at the same time. Enforcing this discipline also hardens the JIT against code-injection attacks. The Linux AArch64 port (JEP 237) has been stable since JDK 9, so the macOS port inherits years of ARM-specific JIT optimizations.

Wasmer 2.0: Cross-Platform WebAssembly Runtime with OS Access

 
Wasmer allows you to run WebAssembly modules either standalone or embedded within other languages such as C/C++, Rust, Python, Go, PHP, Ruby, and more. By design, the environment within which a WebAssembly module runs is completely isolated (sandboxed) from the native functionality of the underlying host system. This means that by default, WASM modules are designed to perform nothing more than pure computation. Consequently, access to OS-level resources such as file descriptors, network sockets, the system clock, and random numbers is not normally available from WASM. However, there are many cases in which a WASM module needs to interact beyond pure computation and must access native OS functionality.

Wasmer 2.0 represents a maturation point for WebAssembly as a universal binary format beyond the browser. Unlike the JVM, which carries a large runtime and language-specific garbage collector, WASM modules are lightweight, language-agnostic (compilable from Rust, C, Go, AssemblyScript, and more), and execute in a capability-based sandbox where OS access is explicitly granted rather than implicitly available. The WASI (WebAssembly System Interface) layer that Wasmer implements is the key differentiator: it provides POSIX-like abstractions for filesystem, networking, and clocks while maintaining sandbox guarantees. For server-side applications, this means untrusted code can be executed with fine-grained permission control -- a module can be granted read access to a specific directory without any ability to access the network. This capability model arguably provides stronger isolation than containers for multi-tenant code execution, which is why edge compute platforms such as Fastly and Cloudflare support WASM-based execution.