Product & Design Pulse v91

Under the Hood ⚙️

Welcome to this week’s edition of Product & Design Pulse, where we explore the latest in tech, product, design, and innovation! Last week was about building the plumbing that makes everything else possible.

Mozilla published a detailed blueprint for how it scaled Claude Mythos Preview into an agentic security pipeline, part of an effort that fixed 423 security bugs in a single month, including sandbox escapes and vulnerabilities that survived two decades of fuzzing, establishing the clearest proof yet that AI-powered defense is no longer experimental but operational. Anthropic, meanwhile, struck its most unlikely partnership to date, signing a deal to take over all of SpaceXAI's Colossus 1 compute after Elon Musk went from calling the company "misanthropic" to saying he was "impressed," effectively turning his AI subsidiary into a neocloud provider for a direct competitor. OpenAI contributed its own infrastructure play by open-sourcing MRC, a networking protocol that lets 100,000+ GPU supercomputers run on two tiers of switches instead of three or four, while also quietly shelving plans to spin off its robotics and hardware divisions after concluding the restructuring wouldn't actually clean up its balance sheet before an IPO. On the legal front, five major publishers and Scott Turow filed what may be the most consequential AI copyright suit yet, personally naming Mark Zuckerberg for allegedly ordering Meta to stop licensing books and "lean into the fair use strategy."

The week's common thread: the AI race is increasingly being won not by the best model but by the best infrastructure, the best partnerships, and the fewest legal liabilities.

🎧 Audio Overview [BETA]

For those who don’t have time to read 😁

Last week…

  1. Mozilla Details How It Built an AI-Powered Security Pipeline Around Claude Mythos

    Mozilla published a technical deep dive on how it built an agentic security harness that scaled Claude Mythos Preview across parallel VMs to find, reproduce, and triage hundreds of Firefox vulnerabilities, including 15-to-20-year-old bugs, sandbox escapes, and race conditions that survived decades of human review and fuzzing. Of the 271 security bugs fixed in Firefox 150, 180 were rated sec-high, and the team fixed 423 total security bugs in April across multiple models and techniques. For engineering teams, this is both a blueprint and a warning: the harness infrastructure matters as much as the model, and any project not building one now is falling behind defenders who already have.
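Mozilla's post describes a pattern rather than publishing code: fan a model-driven agent out across isolated VMs, then reproduce and triage the resulting crashes centrally. A minimal sketch of that fan-out-and-triage loop, with every function and field name hypothetical (this is not Mozilla's harness):

```python
import concurrent.futures

# Hypothetical sketch of a parallel triage harness, not Mozilla's code.
# Each "VM" is simulated by a worker that returns candidate crash reports.

def run_agent_in_vm(vm_id: int) -> list[dict]:
    """Stand-in for launching the model agent inside an isolated VM.

    A real harness would boot a VM, run the agent against a Firefox
    build, and collect crash artifacts; here we fabricate one report.
    """
    return [{"vm": vm_id, "crash_sig": f"sig-{vm_id}", "reproducible": vm_id % 2 == 0}]

def triage(report: dict) -> str:
    """Promote crashes that reproduce; queue the rest for human review."""
    return "sec-high" if report["reproducible"] else "needs-repro"

def harvest(num_vms: int) -> dict[str, int]:
    """Fan out across VMs in parallel and tally triage outcomes."""
    counts: dict[str, int] = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_vms) as pool:
        for reports in pool.map(run_agent_in_vm, range(num_vms)):
            for report in reports:
                rating = triage(report)
                counts[rating] = counts.get(rating, 0) + 1
    return counts
```

The key design point the write-up stresses is exactly the part this sketch fakes: the VM isolation, reproduction, and dedup plumbing around the model is where most of the engineering lives.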

  2. The Verge: Neuroscience Research Suggests LLMs May Never Achieve True Intelligence

    Benjamin Riley argues in The Verge that neuroscience research shows human thinking is largely independent of language, meaning LLMs, which model language rather than cognition, face a hard ceiling on what they can achieve no matter how much compute is applied. He cites fMRI studies showing distinct brain regions for language vs. reasoning, and research on people who lost language ability but retained their capacity to solve problems, follow instructions, and read emotions. For the AI industry, the essay is a useful counterweight to AGI hype, though it was published in late 2025 before the latest wave of agentic capabilities that complicate the "language isn't thought" framing.

  3. Anthropic Signs Deal to Take Over All of SpaceXAI's Colossus 1 Compute

    SpaceXAI signed an agreement giving Anthropic access to the full capacity of Colossus 1, its Memphis data center housing over 220,000 NVIDIA GPUs across more than 300 MW, with the compute going directly toward improving Claude Pro and Claude Max subscriber capacity. The deal also includes an expressed interest in partnering on multiple gigawatts of orbital AI compute, and comes after Elon Musk, who previously called Anthropic "misanthropic," said he was "impressed" after meeting with the company's senior team. For the compute market, this is Anthropic's most surprising partnership yet: its fiercest critic is now its landlord, and SpaceXAI is effectively becoming a neocloud rather than a frontier model competitor.

  4. OpenAI Open-Sources MRC, a New Networking Protocol for AI Supercomputers

    OpenAI partnered with AMD, Broadcom, Intel, Microsoft, and NVIDIA to develop MRC (Multipath Reliable Connection), a networking protocol that spreads GPU data transfers across hundreds of parallel paths, enabling supercomputers with 100,000+ GPUs to run on just two tiers of Ethernet switches instead of three or four. MRC is already deployed across OpenAI's largest GB200 supercomputers at Oracle's Abilene site and Microsoft's Fairwater facilities, and has been released as an open specification through the Open Compute Project. For infrastructure teams, this is OpenAI contributing real plumbing to the industry while reinforcing its position at the center of the Stargate buildout.
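The two-tier claim follows from simple fabric arithmetic: in a non-blocking two-tier leaf-spine fabric built from radix-k switches, each leaf splits its ports between hosts and uplinks, capping the fabric at roughly k²/2 hosts, so reaching 100,000+ GPUs in two tiers requires very high-radix switches plus multipath spraying to keep all the links busy. A back-of-the-envelope check (the radix values are illustrative, not from the MRC spec):

```python
def two_tier_capacity(radix: int) -> int:
    """Max hosts in a non-blocking two-tier leaf-spine fabric.

    Each leaf switch uses half its ports for hosts and half for
    uplinks to spines, giving radix/2 hosts per leaf and radix
    leaves per spine plane: capacity = (radix / 2) * radix.
    """
    return (radix // 2) * radix

# Illustrative: a 512-port radix reaches 131,072 hosts in two tiers,
# while a 64-port radix tops out at 2,048 and would need more tiers.
```

This is why the tier reduction and the multipath design go together: fewer tiers concentrate traffic onto fewer, fatter layers, and spreading each transfer across hundreds of paths is what keeps that flatter fabric from hot-spotting.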

  5. Five Publishers and Author Scott Turow Sue Meta and Zuckerberg for "Massive" Copyright Infringement

    Hachette, Macmillan, McGraw Hill, Elsevier, and Cengage, along with author Scott Turow, filed a class action alleging Meta torrented 267 terabytes of pirated books and articles to train Llama, with Zuckerberg personally named for authorizing the infringement and ordering his team to stop licensing negotiations so the company could "lean into the fair use strategy." The suit claims Meta briefly considered a $200 million licensing budget in early 2023 before Zuckerberg intervened. For the AI industry, this is the most significant copyright case yet filed against a model developer, and the personal naming of a CEO raises the legal stakes well beyond what OpenAI and others have faced.

  6. OpenAI Explored Spinning Off Robotics and Hardware Before IPO, Then Dropped the Idea

    The Wall Street Journal reports that Sam Altman discussed spinning out OpenAI's robotics and consumer hardware divisions (including the Jony Ive/io Products unit) into separate entities late last year, but abandoned the plan after determining the units would still need to be consolidated on OpenAI's balance sheet, eliminating the financial clarity the restructuring was meant to achieve. The discussions reflect broader pressure to sharpen focus ahead of a potential IPO at a valuation of up to $1 trillion, as the company has missed internal revenue targets and lost ground to Anthropic in coding and enterprise markets. For product leaders, this is further evidence that OpenAI's sprawl is becoming a liability: even the company itself tried and failed to organize its way out of it.

  7. Ben Thompson: Amazon Lagged in the Training Era But Is Well-Positioned for Inference

    Thompson argues that Amazon's long-term infrastructure investments, including its bet on custom Trainium chips and its newly launched Amazon Supply Chain Services (consolidating freight, trucking, and last-mile delivery into one offering), reflect the same strategic patience that built AWS. He contends that while Amazon lagged in the GPU-intensive training era because NVIDIA deprioritized chips for a customer it knew would shift to in-house silicon, the shift to inference and agentic workloads plays to Amazon's strengths in cost optimization and commodity-scale operations. The essay frames Amazon as the most durable of the hyperscalers precisely because it has always invested upstream in infrastructure others treat as a cost center.

🗓️ Upcoming Events

📱 Product & Feature Highlights

🧠 For the Nerds