Google Ends Chrome Apps: Full Deprecation Timeline Through 2022

The progress of modern browsers puts the Web in a strong position to address the vast majority of use cases -- evident in the success of companies like Figma and Google's own products like Google Earth. We are confident that the Web can deliver first-class experiences on an open platform. With this continued progress, we are expanding upon our earlier announcement and will begin phasing out support for Chrome Apps across all operating systems.

Google's phased Chrome Apps deprecation -- concluding in June 2022 for Enterprise and Education users -- was a necessary correction to the platform fragmentation that Chrome Apps had introduced. Chrome Apps used proprietary APIs (chrome.app.window, chrome.fileSystem, chrome.socket) that created Chrome-only dependencies, undermining the web's core principle of platform independence. Chrome Extensions, which operate within standard browser extension APIs, remain fully supported and continue to provide cross-site capabilities for desktop users. The timing aligned with PWA maturity: by 2020, the combination of Service Workers, Web App Manifests, and the File System Access API covered most use cases that previously required Chrome Apps. This deprecation effectively forced developers to migrate toward standards-based web APIs, which benefits users across all browsers rather than locking them into a single vendor's runtime.
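
The standards-based replacement for a `chrome.app.window`-style installed app is a PWA declared through a Web App Manifest. A minimal sketch (names, URLs, and icon paths are illustrative):

```json
{
  "name": "Example Editor",
  "short_name": "Editor",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a73e8",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Paired with a registered Service Worker for offline support, this is what lets any standards-compliant browser offer the installable, windowed experience Chrome Apps previously delivered through proprietary APIs.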

Chrome, Web

NoOps Explained: Full IT Automation Beyond DevOps

NoOps (no operations) is the concept that an IT environment can become so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage software in-house. Traditionally in the enterprise, an application development team is in charge of gathering business requirements for a software program and writing code. The development team tests its program in an isolated development environment for quality assurance (QA) and -- if requirements are met -- releases the code to an operations team, which deploys and maintains the program from that point on. In a NoOps scenario, maintenance and other tasks performed by the operations team would be automated.

NoOps represents the logical endpoint of DevOps automation: a fully self-service platform where developers deploy, scale, and monitor without dedicated operations staff. In practice, NoOps does not eliminate operations work -- it embeds it into the platform layer. Products like Cloud Foundry (Pivotal), Heroku, and later Vercel and Fly.io embody this model by abstracting away infrastructure provisioning, certificate management, scaling policies, and log aggregation. The critical distinction is that NoOps shifts operational complexity from teams to tooling, which works well for stateless web applications but breaks down for workloads requiring custom networking, GPU scheduling, or compliance-driven infrastructure controls. Organizations pursuing NoOps should evaluate whether their workloads genuinely fit a platform-as-a-service model or whether they are simply renaming their operations team as "platform engineering."
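
The "complexity shifted into tooling" point is concrete in how little deployment configuration a PaaS asks for. On Heroku, for example, an application's entire process topology can be a one-line Procfile (the gunicorn command and module name here are illustrative):

```
web: gunicorn app:app
```

Certificates, load balancing, log routing, and restart policy are all decided by the platform rather than declared by the developer -- which is exactly why the model strains once a workload needs controls the platform does not expose.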

DevOps, Cloud, Software Development

Mozilla Lays Off 70 Employees as Firefox Revenue Model Falters

Mozilla laid off about 70 employees today, TechCrunch has learned. In an internal memo, Mozilla chairwoman and interim CEO Mitchell Baker specifically cites the slow rollout of the organization's new revenue-generating products as the reason it needed to take this action. The overall number may yet be higher, as Mozilla is still looking into how the decision will affect workers in the U.K. and France. In 2018, Mozilla Corporation (as opposed to the much smaller Mozilla Foundation) said it had about 1,000 employees worldwide.

Mozilla's layoffs exposed a structural vulnerability: roughly 90% of Mozilla Corporation's revenue came from a single Google search deal, making the organization financially dependent on the very company whose browser dominance it exists to counterbalance. The 7% workforce reduction (70 of ~1,000 employees) signaled that diversification attempts -- Mozilla VPN, Firefox Monitor, Pocket Premium -- were not generating meaningful revenue quickly enough. For the broader web ecosystem, Firefox's survival matters because it is the last major browser engine independent of Chromium. Without Gecko/SpiderMonkey, Google would effectively control web standards through its Blink rendering engine (used by Chrome, Edge, Opera, Brave, and Vivaldi). The layoff's long-term impact depends on whether Mozilla can protect its core Gecko engineering team while trimming organizational overhead, or whether critical rendering engine expertise is lost in the cuts.

Mozilla, Firefox, Google

HTTP/3 Over QUIC: How UDP Replaces TCP for Faster Web Connections

HTTP/3 promises to make Internet connections faster, more reliable, and more secure. Born as "HTTP over QUIC," an effort to adapt the HTTP protocol to run on top of Google's own transport layer protocol, QUIC, it was later proposed as an IETF standard and is currently an Internet Draft. [...] QUIC is a key element of HTTP/3, since it provides the foundations for its main features. Built on top of UDP, QUIC attempts to solve the major issues experienced when using the TCP protocol, namely connection-establishment latency and multi-stream handling in the presence of packet loss. TCP's latency issue stems from its congestion control algorithm, which mandates a slow start to assess how much traffic can be sent before congestion occurs. In HTTP/1.0 this is compounded by the fact that each request/response exchange opens a new TCP connection, incurring the slow-start penalty every time.

HTTP/3's shift from TCP to QUIC (which runs over UDP) represents the most fundamental transport-layer change in the web's history. Unlike HTTP/2, which still suffered from TCP head-of-line blocking (where a single lost packet stalls all multiplexed streams), HTTP/3 achieves true independent stream multiplexing because QUIC handles per-stream loss recovery. The architectural implications extend beyond performance: because QUIC operates in userspace rather than the kernel's TCP stack, protocol updates can be deployed via application updates without waiting for OS kernel patches. This is a deliberate design choice -- the only kernel primitive required is UDP. QUIC also integrates TLS 1.3 directly into the handshake, reducing connection establishment from TCP's 2-3 round trips (TCP handshake + TLS handshake) to a single round trip, or zero round trips for resumed connections. Like HTTP/2, HTTP/3 uses binary framing rather than HTTP/1.1's text-based encoding, maintaining parsing efficiency while gaining the transport-layer improvements.
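
The round-trip savings can be put in a few lines. This is a back-of-envelope sketch, not a protocol implementation; the counts simply restate the handshake math from the paragraph above:

```python
def setup_round_trips(stack: str, resumed: bool = False) -> int:
    """Network round trips before the first HTTP request can be sent.

    Counts restate the text: TCP's 3-way handshake costs 1 RTT, a TLS 1.2
    handshake adds 2 more, TLS 1.3 adds 1; QUIC folds TLS 1.3 into its
    transport handshake, and a resumed QUIC connection can send in 0-RTT.
    """
    if stack == "tcp+tls1.2":
        return 3  # TCP handshake + 2-RTT TLS 1.2 handshake
    if stack == "tcp+tls1.3":
        return 2  # TCP handshake + 1-RTT TLS 1.3 handshake
    if stack == "quic":
        return 0 if resumed else 1  # TLS 1.3 rides inside the QUIC handshake
    raise ValueError(f"unknown stack: {stack}")

# On a 50 ms RTT path, QUIC shaves 50-100 ms off every cold connection.
for stack in ("tcp+tls1.2", "tcp+tls1.3", "quic"):
    print(stack, setup_round_trips(stack) * 50, "ms")
```

The 0-RTT resumption path trades that last round trip for replay-attack considerations, which is why servers typically restrict it to idempotent requests.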

Web, IETF, Internet

Quarkus 1.2 and GraalVM 19.3: Native Java Microservices Integration

One of the highly anticipated features of Quarkus 1.1.0.CR1 was GraalVM 19.3 support. GraalVM 19.3 changes quite a lot of things (JDK 11 preview, etc.) and due to the deep integration between Quarkus and GraalVM, it was an all-or-nothing update for us as we couldn't support both GraalVM 19.2 and 19.3 at the same time. While porting Quarkus to GraalVM 19.3, we hit several regressions, some due to Quarkus not doing the right thing (we fixed them), some due to GraalVM regressions (we are working hand in hand with the GraalVM team to fix them). The next micro bugfix release of GraalVM is planned for mid-January, so any workaround had to come from Quarkus and by writing substitutions.

GraalVM serves as a polyglot runtime capable of executing not only JVM bytecode languages (Java, Kotlin, Scala) but also JavaScript, Python, Ruby, R, and LLVM-based languages (C, C++, Rust) through its Truffle framework. Quarkus, Red Hat's Kubernetes-native Java framework, differentiates itself through deep GraalVM native-image integration that enables ahead-of-time compilation to standalone binaries with sub-100ms startup times and ~50MB RSS memory footprint -- compared to several hundred milliseconds and 200MB+ for traditional JVM deployments. The tight coupling described here (unable to support two GraalVM versions simultaneously) illustrates the fragility of native-image compilation: reflection metadata, serialization configurations, and class initialization ordering must be precisely aligned between framework and runtime. Quarkus is not a prerequisite for using GraalVM -- Spring Boot, Micronaut, and Helidon all offer native-image support, though Quarkus pioneered the build-time metadata processing approach that makes native compilation practical for complex applications.
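
The reflection-metadata alignment problem is easiest to see in the configuration that native-image consumes. Quarkus generates this metadata at build time; written by hand it looks like the following (the class name is illustrative), passed to native-image via `-H:ReflectionConfigurationFiles`:

```json
[
  {
    "name": "com.example.OrderResource",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

Any class reached reflectively at runtime but absent from this metadata simply does not exist in the compiled binary -- the kind of framework/runtime mismatch that made supporting GraalVM 19.2 and 19.3 simultaneously an all-or-nothing proposition.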

Java, GraalVM, Software Development

Apple's Daisy Robot: iPhone Recycling to Reduce Rare Earth Mining

Apple is trying to change the way electronics are recycled with a robot that disassembles its iPhone so that minerals can be recovered and reused, while acknowledging rising global demand for electronics means new mines will still be needed. The Cupertino, California-based company says the robot Daisy is part of its plan to become a "closed-loop" manufacturer that does not rely on the mining industry, an aggressive goal that some industry analysts have said is impossible.

Apple's Daisy robot can disassemble 200 iPhones per hour, recovering 14 minerals -- including tungsten, cobalt, and lithium -- concentrated in geopolitically sensitive supply chains (Congo for cobalt, China for rare earths). The strategic calculus goes beyond environmental marketing: building recycling capabilities doubles as a supply chain resilience strategy that reduces dependence on foreign mining operations and positions Apple to absorb tighter environmental regulations that competitors cannot easily match. However, the "closed-loop" aspiration faces a fundamental volume problem: Apple sells over 200 million iPhones annually, but Daisy processes a tiny fraction of returned devices. The environmental impact of annual upgrade cycles -- manufacturing emissions, packaging, shipping -- vastly exceeds what material recovery can offset. A genuinely sustainability-first approach would prioritize device longevity through longer software support windows, modular repairability, and reduced planned obsolescence.
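
The volume problem is stark even under generous assumptions. A back-of-envelope sketch using the figures above (and assuming, optimistically, a single line running around the clock):

```python
# Throughput and sales figures are from the text;
# 24/7 uptime is an optimistic assumption for illustration.
PHONES_PER_HOUR = 200
ANNUAL_IPHONE_SALES = 200_000_000

annual_capacity = PHONES_PER_HOUR * 24 * 365  # devices one line can process per year
share_of_sales = annual_capacity / ANNUAL_IPHONE_SALES

print(f"{annual_capacity:,} devices/year")      # 1,752,000 devices/year
print(f"{share_of_sales:.2%} of annual sales")  # 0.88% of annual sales
```

Even dozens of Daisy lines would recover material from only a few percent of the devices Apple ships each year.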

Apple, iPhone, Hardware

F2FS Root Filesystem Support Lands in Debian GRUB EFI

For those like me who want to change their root filesystem to F2FS, I have enabled support for adding the F2FS module in the EFI signed image of GRUB in Debian. So the GRUB EFI image can load configuration, kernel images, and initrd from a /boot that is formatted in F2FS (upstream GRUB supports the filesystem since 2.04).

Debian's enabling of F2FS support in its signed GRUB EFI image marks a significant convergence between mobile and desktop Linux storage stacks. F2FS (Flash-Friendly File System), originally developed by Samsung for NAND-based storage, is already the default filesystem on a number of Android devices, including Huawei's EMUI builds and Google's Pixel line, where its log-structured design reduces write amplification on flash media. With SSDs now standard on desktop and server hardware, F2FS's flash-optimized design becomes relevant beyond mobile: its multi-head logging, adaptive garbage collection, and hot/cold data separation can extend SSD lifespan while maintaining competitive throughput. This GRUB integration means Debian users can now run F2FS as their root filesystem under Secure Boot -- previously a blocker for production adoption.

Android, Debian, Linux

Conway's Law and Meatware: Why Culture Outweighs Technology

The problem is their organization's unhelpful processes, behavior, and culture. You can think of these as "thought technologies," but I like to call them "meatware." Much of what it takes -- to transform IT organizations into centers for innovation -- is about changing the IT culture from a command and control, big planning up-front, organized by function (or "silo'd") mind-set.

The "meatware" concept directly echoes Conway's Law: "Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." In practice, this means that adopting Kubernetes, microservices, or any cloud-native technology stack will fail to deliver agility if the organizational structure remains siloed into separate development, QA, and operations teams with handoff-based workflows. The people executing processes -- the meatware -- determine whether a technology investment produces architectural improvement or merely adds complexity to existing dysfunction. The inverse Conway maneuver, where organizations deliberately restructure teams to mirror the desired system architecture, has proven more effective than technology-first transformations. Cross-functional teams owning full-stack slices of a product (from database to deployment) consistently outperform functionally organized teams adopting the same tooling.

Organization, Corporate Culture

Linus Torvalds on ZFS Linux Licensing: Oracle's CDDL vs GPL Conflict

Note that "we don't break users" is literally about user-space applications, and about the kernel I maintain. If somebody adds a kernel module like ZFS, they are on their own. I can't maintain it, and I can not be bound by other people's kernel changes. And honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it's ok to do so and treat the end result as GPL'd. Other people think it can be ok to merge ZFS code into the kernel and that the module interface makes it ok, and that's their decision. But considering Oracle's litigious nature, and the questions over licensing, there's no way I can feel safe in ever doing so. And I'm not at all interested in some "ZFS shim layer" thing either that some people seem to think would isolate the two projects. That adds no value to our side, and given Oracle's interface copyright suits (see Java), I don't think it's any real licensing win either. Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me. The benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?

Torvalds's position on ZFS reflects a pragmatic legal risk assessment rather than a purely technical evaluation. The core issue is license incompatibility: ZFS is released under Sun's CDDL (Common Development and Distribution License), which is incompatible with the Linux kernel's GPLv2. Distributions like Ubuntu ship ZFS as a DKMS kernel module, arguing the module boundary creates legal separation -- a theory untested in court. Given Oracle's track record with the Google v. Oracle Java API copyright case (which reached the Supreme Court), any organization deploying ZFS on Linux assumes litigation risk that Oracle could choose to enforce at any time. From a technical standpoint, ZFS's integrated volume management and checksumming remain compelling for storage servers, but Btrfs (GPLv2-licensed) and the combination of XFS with LVM and dm-integrity now provide comparable data integrity features without the licensing uncertainty. The OpenZFS project continues active development, but Torvalds's refusal to merge it upstream means ZFS on Linux will always require out-of-tree kernel module maintenance.

Linux, ZFS

Java 14 ZGC: Sub-Millisecond GC Pauses Arrive on Windows and macOS

Most of the ZGC code base is platform independent and requires no Windows-specific changes. The existing load barrier support for x64 is operating-system agnostic and can also be used on Windows. The platform-specific code that needs to be ported relates to how address space is reserved and how physical memory is mapped into a reserved address space. The Windows API for memory management differs from the POSIX API and is less flexible in some ways.

JEP 365 (the Windows port) and its companion JEP 364 (the macOS port) bring ZGC beyond Linux, removing a major barrier to ZGC adoption in the development environments where most Java developers work daily. ZGC's defining characteristic is sub-millisecond pause times regardless of heap size -- whether the heap is 8 MB or 16 TB. It achieves this through colored pointers and load barriers that allow concurrent relocation of objects while application threads continue executing. This directly solves the traditional JVM trade-off where increasing heap memory to reduce GC frequency paradoxically increases worst-case pause durations with collectors like G1 or Parallel GC. For latency-sensitive server applications -- financial trading systems, real-time APIs, game servers -- ZGC eliminates the GC-induced tail latency spikes that previously required workarounds like off-heap memory or GC tuning expertise. The Windows port required adapting ZGC's multi-mapping memory technique from POSIX mmap/mremap to the Windows VirtualAlloc/MapViewOfFile memory-management APIs, which proved more restrictive but workable.
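
In JDK 14, ZGC is still experimental on every platform, so enabling it requires unlocking experimental options first. A typical invocation (heap size and main class are illustrative):

```shell
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -Xlog:gc MyServer
```

ZGC graduated to a production feature (no unlock flag required) in JDK 15 under JEP 377.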

Java, Windows, macOS