XFS Online Repair Foundation Lands in Linux Kernel 5.7

Please pull this first batch of new changes for 5.7. There's a lot going on this cycle with cleanups in the log code, the btree code, and the xattr code. We're tightening metadata validation and online fsck checking, and introducing a common btree rebuilding library so that we can refactor xfs_repair and introduce online repair in a future cycle. We also fixed a few visible bugs -- most notably there's one in getdents that we introduced in 5.6; and a fix for hangs when disabling quotas. This series has been running fstests & other QA in the background for over a week and looks good so far. I just did a test merge and it seems to go in cleanly, so please let me know if you encounter any surprises. I anticipate sending you a second pull request next week. That batch will change how xfs interacts with memory reclaim; how the log batches and throttles log items; how hard writes near ENOSPC will try to squeeze more space out of the filesystem; and hopefully fix the last of the umount hangs after a catastrophic failure. That should ease a lot of problems when running at the limits, but for now I'm leaving that in for-next for another week to make sure we got all the subtleties right.

The introduction of a common btree rebuilding library in XFS is the critical prerequisite for live filesystem repair -- the ability to fix metadata corruption on a mounted, active filesystem without downtime. This is a capability that no mainstream Linux filesystem currently offers. For production servers running multi-terabyte XFS volumes, the current alternative is unmounting and running xfs_repair offline, which can mean hours of downtime. The complementary changes to ENOSPC handling and memory reclaim interaction also address pain points that surface specifically under high-load conditions. Combined with XFS's superior scalability for parallel I/O workloads compared to ext4, these improvements strengthen XFS's position as the preferred filesystem for enterprise Linux deployments and large-scale storage systems.

XFS, Linux, Filesystem

Google Adds rel=ugc and rel=sponsored Link Attributes for SEO

Nearly 15 years ago, the nofollow attribute was introduced as a means to help fight comment spam. It also quickly became one of Google's recommended methods for flagging advertising-related or sponsored links. The web has evolved since nofollow was introduced in 2005 and it's time for nofollow to evolve as well. Today, we're announcing two new link attributes that provide webmasters with additional ways to identify to Google Search the nature of particular links. These, along with nofollow, are summarized below:

rel="sponsored": Use the sponsored attribute to identify links on your site that were created as part of advertisements, sponsorships or other compensation agreements.

rel="ugc": UGC stands for User Generated Content, and the ugc attribute value is recommended for links within user generated content, such as comments and forum posts.

rel="nofollow": Use this attribute for cases where you want to link to a page but don't want to imply any type of endorsement, including passing along ranking credit to another page.

Google's introduction of rel="ugc" and rel="sponsored" represents a meaningful step toward richer link-level semantics on the web. Previously, nofollow served as a catch-all signal that conflated spam prevention, advertising disclosure, and editorial discretion into a single attribute. The new granularity lets Google's ranking algorithms differentiate between a forum post linking to a useful tool (ugc) and a paid placement (sponsored) -- two fundamentally different link intents. The practical adoption challenge is significant, however: most CMS platforms and commenting systems will need explicit updates to emit these attributes, and site operators have limited incentive to implement them unless Google signals clear ranking benefits for correct usage versus ranking penalties for non-adoption.
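To make the markup concrete, here is a minimal sketch of how a server-side renderer might annotate user-submitted links with the new attribute; `UgcLinks`, `renderUserLink`, and `escape` are hypothetical names invented for this example, not part of any announced API, and real code should use a vetted HTML sanitizer.

```java
// Hypothetical helper: render outbound links found in user-generated
// content with rel="ugc", plus nofollow as a conservative fallback for
// crawlers that do not understand the newer attribute value.
public class UgcLinks {

    /** Render an anchor tag for a user-submitted URL with UGC link attributes. */
    static String renderUserLink(String href, String text) {
        return "<a href=\"" + escape(href) + "\" rel=\"ugc nofollow\">"
                + escape(text) + "</a>";
    }

    // Minimal HTML escaping for the example only; production code should
    // rely on a well-tested sanitizer library instead.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        System.out.println(renderUserLink("https://example.com/tool", "a useful tool"));
    }
}
```

Multiple rel values are space-separated in a single attribute, which is how a link can be flagged as both user-generated and untrusted at once.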

Web, Google, SEO

Shenandoah GC Reaches Production Status in Java 15 (JEP 379)

In JDK 12 and later, Shenandoah is enabled via the -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC options. Making Shenandoah a product feature means that -XX:+UnlockExperimentalVMOptions would no longer be needed. A number of related Shenandoah options would transition from "experimental" to "product" status, subject to review. The default values for the options would not change, making this a largely cosmetic change in flag classifications. At the time of its integration into JDK 12, Shenandoah had already been shipping in Red Hat 8u and 11u downstream releases as a supported garbage collector, used by RHEL and RHEL downstream users. Because of this, Shenandoah 8u and Shenandoah 11u are already non-experimental and thus not affected by this change. Since there are only a few users running something other than 8u and 11u, the actual impact of this change is expected to be minuscule.

G1 remains the default general-purpose garbage collector in Java 15, but Shenandoah's promotion to production status gives JVM operators a compelling alternative for latency-critical workloads. Shenandoah performs concurrent compaction -- evacuating live objects while the application continues running -- achieving pause times that are largely independent of heap size. This makes it particularly suited for financial trading systems, real-time bidding platforms, and interactive services where P99 latency budgets are measured in single-digit milliseconds. The trade-off is approximately 10-15% throughput overhead compared to G1 due to the read barriers required for concurrent relocation. Engineering is fundamentally about making these trade-offs explicit, and having Shenandoah as a supported production option rather than an experimental flag significantly lowers the barrier to adopting it in production JVM deployments.
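One minimal way to verify which collector a given JVM run actually selected is to inspect the platform's garbage-collector MXBeans. The class name `GcCheck` is invented for this sketch, and the exact Shenandoah bean names can vary by JDK build.

```java
// Sketch: confirm at runtime which collector the JVM selected.
// On JDK 15+:   java -XX:+UseShenandoahGC GcCheck          (no unlock flag needed)
// On JDK 12-14: java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC GcCheck
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // With Shenandoah enabled, the bean names include "Shenandoah";
            // under the default G1 they report names like "G1 Young Generation".
            System.out.println(gc.getName());
        }
    }
}
```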

Java, Garbage Collector, Realtime, Java15

AMD Ryzen 4000 Beats Intel in Laptop Benchmarks at 35W TDP

To put AMD's Ryzen 4000 in perspective, you have to understand that in AMD's 50-year history, it has never beaten Intel in laptops. It has won a few battles in desktop: Athlon, Athlon 64, and the current desktop Ryzen chips. AMD's fortunes change dramatically with the Ryzen 4000 chips, which are clearly the new leader in performance laptops. Even more astonishing is AMD's ability to deliver so much performance within tight thermal and power constraints. While a comparably thin Intel-based laptop would have to crank up fans to annoying levels, the Ryzen 9 4900HS achieves this with relatively moderate fan noise. Worse news for Intel: AMD's Ryzen 4000 can compete with laptops that weigh two to even three times as much. This is something we frankly didn't expect. Ryzen 4000 is without a doubt the most game-changing performance laptop CPU we've seen in years.

AMD's Ryzen 9 4900HS represents a decisive shift in the laptop CPU landscape. With 8 cores and 16 threads at a 35W TDP, it outperforms Intel's flagship mobile i9 processors that consume 45W or more -- a remarkable power efficiency advantage enabled by TSMC's 7nm process node versus Intel's 14nm+++. For software developers specifically, this translates to faster compilation times and smoother IDE performance in thermally constrained ultrabook form factors. However, the broader industry question is whether x86-64's dominance in laptops is itself approaching an end. Apple's rumored ARM transition and Qualcomm's Snapdragon 8cx suggest that within a few years, the relevant benchmark comparison may not be AMD versus Intel but rather x86-64 versus ARM -- a contest where per-watt performance favors the RISC architecture.

Intel, CPU, AMD

MLIR Joins LLVM 10: A Multi-Level IR for ML and Beyond

MLIR is intended to be a hybrid IR which can support multiple different requirements in a unified infrastructure. MLIR is a powerful representation, but it also has non-goals. We do not try to support low level machine code generation algorithms (like register allocation and instruction scheduling). They are a better fit for lower level optimizers (such as LLVM). Also, we do not intend MLIR to be a source language that end-users would themselves write kernels in (analogous to CUDA C++). On the other hand, MLIR provides the backbone for representing any such DSL and integrating it in the ecosystem.

MLIR is arguably the most consequential addition to LLVM in years because it addresses the fragmentation problem that has plagued ML compiler toolchains. Before MLIR, every framework -- TensorFlow, PyTorch, JAX -- maintained its own intermediate representation and optimization pipeline, duplicating effort and limiting cross-framework optimization. MLIR provides a shared infrastructure where high-level graph optimizations, loop transformations, and hardware-specific lowering can be composed as reusable passes. This is analogous to how WebAssembly opened browser runtimes to languages beyond JavaScript: MLIR opens the ML compilation pipeline to languages and frameworks beyond Python and TensorFlow. For languages that target LLVM -- including Rust, Swift, and Kotlin/Native -- MLIR creates a path to first-class ML accelerator support without requiring Python as an intermediary.

TensorFlow, LLVM, MLIR

GraalVM vs OpenJDK vs Amazon Corretto: Java 8, 11, 14 Benchmarks

When taking the geometric mean of the 32 tests carried out, OpenJDK 8 upstream actually came out best overall, followed closely by GraalVM 20.0 Java 8. Meanwhile the Java 11 version of GraalVM 20.0 was by far the slowest. On the Amazon Corretto front, version 11 was quite similar to OpenJDK 11 upstream but its Java 8 implementation performed similarly to that slower Java 11 milestone.
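For readers unfamiliar with how such cross-suite scores are combined, the geometric mean mentioned above can be sketched as follows; the class name and the input scores are made-up illustrative values, not Phoronix's actual data.

```java
// Sketch: aggregating per-test benchmark ratios with a geometric mean.
// The geometric mean is used instead of the arithmetic mean so that no
// single test with large absolute numbers dominates the overall score.
public class GeoMean {

    static double geometricMean(double[] scores) {
        // Sum logarithms rather than multiplying raw values, to avoid
        // overflow when aggregating across many tests.
        double logSum = 0.0;
        for (double s : scores) {
            logSum += Math.log(s);
        }
        return Math.exp(logSum / scores.length);
    }

    public static void main(String[] args) {
        // Hypothetical per-test ratios relative to a baseline runtime.
        double[] relativeScores = {1.00, 0.97, 1.12, 0.88};
        System.out.printf("geometric mean: %.3f%n", geometricMean(relativeScores));
    }
}
```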

The benchmark results confirm what JVM engineers have long observed: Java 8 runtimes have benefited from years of profile-guided optimization and JIT compiler tuning that newer JDK versions have not yet fully recaptured. GraalVM 20.0 on Java 8 performing nearly at parity with upstream OpenJDK 8 validates its JIT compiler maturity, while the significantly slower GraalVM Java 11 performance suggests that the Graal compiler's optimization passes have not yet been tuned for the module system and other JDK 11 runtime changes. For teams evaluating migration from Java 8, these results indicate that OpenJDK 14 offers a reasonable performance baseline -- not yet matching Java 8's peak throughput, but close enough that the language and API improvements justify the transition for most workloads.

Java, GraalVM, Benchmark

Java 15 Removes Nashorn JavaScript Engine (JEP 372)

Remove the Nashorn JavaScript script engine and APIs, and the jjs tool. The engine, the APIs, and the tool were deprecated for removal in Java 11 with the express intent to remove them in a future release. The Nashorn JavaScript engine was first incorporated into JDK 8 via JEP 174 as a replacement for the Rhino scripting engine. When it was released, it was a complete implementation of the ECMA-262 5.1 standard. With the rapid pace at which ECMAScript language constructs, along with APIs, are adapted and modified, we have found Nashorn challenging to maintain.

Nashorn's removal from the JDK standard distribution does not eliminate the ability to run JavaScript on the JVM -- it shifts Nashorn to a standalone dependency that applications must explicitly include. The deeper significance is what motivated the removal: the ECMAScript specification now evolves annually with substantial additions (optional chaining, nullish coalescing, top-level await), and maintaining a compliant engine inside the JDK's release cycle proved unsustainable. For applications that embedded Nashorn for scripting or template evaluation, the migration path is either the standalone Nashorn project on GitHub or GraalVM's JavaScript engine, which offers significantly better performance through Graal JIT compilation and tracks the ECMAScript specification more closely.
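The migration decision can be probed programmatically through the standard javax.script lookup, which makes the fallback explicit. The class name `JsEngineCheck` is invented for this sketch; whether any engine is found depends entirely on what is on the class path.

```java
// Sketch: discovering a JavaScript engine via javax.script (JSR 223).
// On JDK 15 the "nashorn" lookup returns null unless the standalone
// Nashorn artifact or GraalVM's JavaScript engine is on the class path.
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class JsEngineCheck {
    public static void main(String[] args) throws ScriptException {
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("nashorn");
        if (engine == null) {
            // GraalVM's JavaScript engine registers under "graal.js".
            engine = manager.getEngineByName("graal.js");
        }
        if (engine == null) {
            System.out.println("No JavaScript engine available on this JDK");
            return;
        }
        System.out.println(engine.eval("6 * 7")); // prints 42 when an engine is present
    }
}
```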

Java, JavaScript

YouTrack vs Jira: Why Feature Bloat Undermines Jira's Usability

Reports generated by YouTrack are more flexible too. For example, you can create a matrix report for multiple projects using any of their issues' fields for the X and Y axes. In Jira you can do it too -- if you buy an appropriate plugin. However, YouTrack's reports sometimes lack basic information about time tracking and time management. In Jira we have used a plugin called "Tempo" which has given us -- both developers and managers -- a nice view of time spent on work. Unfortunately, YouTrack's reports for that are less clear. But there is an open issue for that, awaited by many people, so hopefully JetBrains will upgrade this part of report-making in the future.

Jira's trajectory exemplifies the classic product management anti-pattern of scope creep driven by market dominance. By attempting to serve as project tracker, service desk, portfolio manager, and agile board simultaneously, Jira has accumulated layers of configuration complexity that slow down the exact workflows it was originally designed to accelerate. YouTrack takes the opposite approach: a focused, opinionated tool that prioritizes keyboard-driven workflows, inline command syntax, and smart search over plugin ecosystems. For software teams specifically, YouTrack's native integration with the JetBrains IDE suite provides a tighter feedback loop between issue tracking and code changes than Jira's plugin-based IDE connectors can match.

Issue Tracker, Comparison

Redox OS Introduces pkgar Package Manager with Atomic Updates

It has been a while since the last Redox OS news, and I think it is good to provide an update on how things are progressing. The dynamic linking support in relibc got to the point where rustc could be loaded, but hangs occur after loading the LLVM codegen library. Debugging this issue has been difficult, so I am taking some time to consider other aspects of Redox OS. Recently, I have been working on a new package format, called pkgar.

Redox OS faces an existential challenge common to alternative operating systems: building a viable software ecosystem from scratch. The pkgar package manager with atomic update support is technically sound, but the critical differentiator that gives Redox any realistic chance of long-term viability is its POSIX compatibility layer, relibc, which lets existing Unix software be ported by recompilation rather than rewritten from scratch. Without this compatibility layer, Redox would need to reimplement every application individually -- an impossible task for a small team. With it, Redox can potentially leverage the vast Unix software ecosystem while offering its own microkernel architecture advantages: better fault isolation, memory safety through Rust, and a cleaner driver model. The question is whether these architectural benefits are compelling enough to justify the compatibility tax.

Linux, Rust, OS

Clear Linux Narrows Focus to Servers and Developers Only

We, Intel, work with many Linux distros pretty intensely on hardware support and performance and many other things. Many of our customers nowadays have Linux distros of their own rather than using a "standard" distro as is. For many reasons, we also build Clear Linux. By knowing what it takes to get features into a Linux distro (our own) it's easier for us to work with others who are/have a distro. We also want to make sure we can do the best performance etc etc... and sometimes that means doing experiments that are only possible if you have your own distro in house. Now on Desktop... based on a lot of history (Moblin/Meego/...) we know that it is very hard to do a "general consumer desktop", and we tried something different, aim JUST at software developers (e.g. advanced technical users not afraid of a command line who write code but also generally have more modern, higher quality hardware) and do a very narrow thing that was hopefully more tractable. Turns out that there is no such thing really, people expect, almost demand, that any obscure piece of hardware "just works" (often stuff we can't even buy anymore to test it etc) and... well we got asked for 15+ different desktop environments etc etc... an infinity of "weird stuff" that has nothing to do with "developer". We have been trying to accommodate those as much as we can, but there are clear limits because we also do not want to just throw junk over the wall. It also means we are likely going to change a bit how we work, rather than "everything" we need to make sure that what we do ship is usable, with a bias to servers and what developers use rather than "random stuff". With the 3rd party repo stuff getting more ready, there's ways where others can provide their own repositories for "weird stuff" without us being a bottleneck.

Clear Linux presents two fundamental risks for potential adopters. First, depending on a Linux distribution maintained by a CPU vendor that has no incentive to optimize for competitor hardware creates an inherent conflict of interest -- AMD and ARM workloads will always be secondary priorities. Second, Clear Linux uses its own swupd package manager, which sits outside both the RPM (Fedora/RHEL/SUSE) and DEB (Debian/Ubuntu) ecosystems. While swupd offers genuinely innovative features like OS-level delta updates and bundle-based installation, it means that any software not packaged for Clear Linux requires manual compilation or container workarounds. For benchmarking and Intel hardware validation, Clear Linux serves a valuable purpose. As a production development environment, the ecosystem limitations and vendor lock-in risks outweigh the performance gains that are typically in the single-digit percentage range.

Linux, Software Development