Claude Mythos Leak, Instant Checkout Retires, Gemini Imports Your ChatGPT History

March 27, 2026

Executive Summary

Anthropic's biggest story this week was unplanned: an internal data leak on March 26 exposed draft materials for an unreleased model called Claude Mythos, which the company describes as its most capable model to date, with meaningful advances in reasoning, coding, and cybersecurity. The leak also surfaced a new model tier called Capybara, positioning Mythos above the existing Opus line. Separately, OpenAI pivoted its ChatGPT commerce strategy on March 24, dropping Instant Checkout in favor of a visual product discovery layer with Walmart, Target, and other major retailers. Google moved to lower the switching cost for AI users, releasing a chat history import tool on March 26 that lets people migrate conversations and memory profiles from ChatGPT and Claude directly into Gemini. NVIDIA had already released Nemotron 3 Super on March 11, an open 120-billion-parameter model built for agentic systems, which continued gaining enterprise adoption through the week.

Top Stories

1. Anthropic's Claude Mythos Surfaces in Accidental Data Leak
Anthropic confirmed on March 27 that it is testing a new AI model called Claude Mythos after nearly 3,000 internal assets, including draft blog posts and unpublished documentation, became publicly accessible through a misconfigured content management system. Anthropic attributed the leak to human error in CMS configuration. The exposure was discovered by Cambridge University researcher Alexandre Pauwels and LayerX Security's Roy Paz.

The leaked materials describe Mythos as "the most capable we've built to date," with notable advances in reasoning, coding, and cybersecurity. Cybersecurity benchmarks drew the most attention: Anthropic's own draft materials warn the model is "currently far ahead of any other AI model in cyber capabilities" and that it could find and exploit software vulnerabilities "in ways that far outpace the efforts of defenders." The company is currently testing Mythos with a limited group of early access customers.

The leak also revealed that Mythos sits in a new model tier called Capybara, positioned above the existing Opus line. This is a structural addition to Anthropic's model hierarchy rather than a version increment within an existing tier. The Capybara tier is expected to be more expensive than Opus when it launches publicly.

Market reaction was immediate. Cybersecurity stocks fell on March 27, with CrowdStrike dropping 7%, Palo Alto Networks down 6%, Zscaler losing 4.5%, and SentinelOne and Fortinet each declining around 3%.

Business Impact: The cybersecurity capability claims are the most consequential part of this story for enterprise teams. If Mythos performs as described in internal documents, organizations building security tooling on top of AI models will need to recalibrate their threat models. Legal and compliance teams at companies with AI-assisted vulnerability disclosure programs should monitor the Mythos early access program for specifics before this model reaches general availability.

2. OpenAI Drops Instant Checkout, Pivots ChatGPT to Visual Product Discovery
OpenAI announced a significant change to its commerce strategy on March 24, acknowledging that its earlier Instant Checkout feature, which let users complete purchases directly inside ChatGPT, "did not offer the level of flexibility" it aimed for. The company retired Instant Checkout and shifted focus to product discovery: a visual shopping layer that lets users browse products, upload images to find similar items, compare options side-by-side, and refine results conversationally.

The update is powered by the Agentic Commerce Protocol (ACP), OpenAI's open standard for connecting retailers to ChatGPT's shopping layer. Merchants now control their own checkout experiences while ACP handles product data, pricing, and reviews. Retailers already integrated include Walmart, Target, Sephora, Nordstrom, Lowe's, Best Buy, and Wayfair. Walmart is introducing an in-ChatGPT app experience that bridges discovery in ChatGPT into a Walmart-branded environment with account linking and loyalty payments.
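The division of labor described above, where ACP carries product data, pricing, and reviews while the merchant keeps control of checkout, can be sketched with a hypothetical feed entry. Every field name below is an illustrative assumption for this sketch, not the published ACP schema:

```python
from dataclasses import dataclass

# Hypothetical ACP-style product entry: the protocol layer carries
# discovery data (title, price, reviews), while the transaction stays
# behind a merchant-controlled checkout URL, per OpenAI's pivot.
# Field names are assumptions for illustration, not the real spec.
@dataclass
class ProductEntry:
    merchant: str
    title: str
    price_usd: float
    rating: float        # aggregate review score, 0-5
    checkout_url: str    # merchant-owned checkout, off-platform

    def discovery_card(self) -> dict:
        """Shape the entry for a visual discovery surface (assumed format)."""
        return {
            "merchant": self.merchant,
            "title": self.title,
            "price": f"${self.price_usd:.2f}",
            "rating": round(self.rating, 1),
            "buy_at": self.checkout_url,  # hand-off point to the merchant
        }

entry = ProductEntry("Walmart", "Stand Mixer", 199.00, 4.6,
                     "https://walmart.example/checkout/123")
card = entry.discovery_card()
```

The key design point is the `buy_at` hand-off: the AI surface ranks and presents, but never completes the purchase, which is exactly the boundary the Instant Checkout retirement redraws.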

The new discovery experience is available to all ChatGPT users on the Free, Go, Plus, and Pro plans.

Business Impact: The retreat from Instant Checkout signals that AI-native purchasing flows face friction that product discovery does not. For retailers evaluating ACP integration, the opportunity is now in driving qualified intent, putting the right product in front of a user who is ready to buy, rather than completing the transaction inside the AI interface. Teams building e-commerce on top of LLM platforms should review ACP's open specification before implementing proprietary discovery integrations.

3. Google Releases Chat History Import Tool to Attract ChatGPT and Claude Users
Google released a migration toolset for Gemini on March 26 that lets users import conversation history and memory profiles from ChatGPT and Claude. The tool has two components. The first is a chat history import: users export a .zip from ChatGPT or Claude and upload it to gemini.google.com/import, where imported conversations appear in a separate panel. The second is a memory transfer: Gemini generates a prompt the user pastes into their existing AI assistant, which then produces a summary of learned preferences and context. The user copies that summary back into Gemini, giving it an immediate profile without weeks of conversational calibration.

The tool supports up to five .zip uploads per day at 5 GB each and is available to all consumer Gemini accounts. It is not available in the European Economic Area, UK, or Switzerland at launch, likely due to GDPR constraints around data transfer.
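Given the hard limits above (five .zip uploads per day, 5 GB each), a local pre-flight check can catch a rejected upload before it burns one of the day's five slots. This is a hedged sketch: the `conversations.json` filename inside the archive is an assumption about the export layout, not something Google or the assistants document in the story above.

```python
import json
import os
import tempfile
import zipfile

MAX_BYTES = 5 * 1024**3  # Gemini's stated 5 GB per-upload limit


def check_export(path: str) -> list[str]:
    """Return a list of problems with an assistant export .zip; empty = OK."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append("archive exceeds the 5 GB upload limit")
    if not zipfile.is_zipfile(path):
        problems.append("file is not a valid .zip archive")
        return problems
    with zipfile.ZipFile(path) as zf:
        # Assumed export layout: a conversations.json somewhere in the archive.
        if not any(n.endswith("conversations.json") for n in zf.namelist()):
            problems.append("no conversations.json found (assumed layout)")
    return problems


# Demo: build a minimal mock export and validate it.
tmp = tempfile.NamedTemporaryFile(suffix=".zip", delete=False)
tmp.close()
with zipfile.ZipFile(tmp.name, "w") as zf:
    zf.writestr("conversations.json", json.dumps([{"title": "hello"}]))
issues = check_export(tmp.name)
os.unlink(tmp.name)
```

A clean result (an empty `issues` list) only means the archive is plausible; Gemini's importer remains the authority on what it accepts.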

Business Impact: The practical barrier to switching AI assistants has always been context loss: the hours of conversation history, preferences, and personalization built up over months. By addressing that barrier directly, Google is making Gemini a more credible alternative to ChatGPT for users with established AI workflows. For enterprise IT teams managing AI tool standardization, this tool changes the calculus on migration costs.

4. NVIDIA Launches Nemotron 3 Super: Open 120B Model for Agentic AI

NVIDIA Nemotron 3 Super

NVIDIA released Nemotron 3 Super on March 11, a 120-billion-parameter open model with 12 billion active parameters, designed specifically for multi-agent AI systems that require reasoning, coding, and long-context task handling at scale. The model uses a hybrid Mamba-Transformer mixture-of-experts (MoE) architecture that delivers 5x the throughput of comparable dense models, with a native one-million-token context window.
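The efficiency argument behind those numbers follows from per-token compute scaling with active rather than total parameters. A back-of-the-envelope sketch of the stated sizes (illustrative arithmetic only; real throughput depends on routing overhead, memory bandwidth, and the Mamba layers, which is why NVIDIA's observed figure is 5x rather than the raw ratio):

```python
# Illustrative MoE arithmetic for Nemotron 3 Super's stated sizes.
# Per-token compute in a transformer is roughly proportional to the
# parameters actually touched, so a sparse MoE pays for its routed
# active subset, not the full parameter count.
TOTAL_PARAMS = 120e9   # all experts, resident in memory
ACTIVE_PARAMS = 12e9   # parameters routed per token

compute_ratio = TOTAL_PARAMS / ACTIVE_PARAMS  # dense-equivalent savings

# Memory is the flip side: all 120B parameters must still be held,
# e.g. roughly 240 GB at 16-bit precision before any KV cache.
weight_bytes_fp16 = TOTAL_PARAMS * 2
```

So the architecture trades a dense model's compute bill for a larger memory footprint, which is why the Business Impact note below frames on-premise cost as manageable "despite the total parameter count."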

Nemotron 3 Super is immediately available on NVIDIA's build.nvidia.com platform, Hugging Face, and OpenRouter, with cloud deployment available via Google Cloud Vertex AI and Oracle Cloud Infrastructure, and coming to AWS Bedrock and Microsoft Azure. Early enterprise adopters include Accenture, CrowdStrike, Cursor, Deloitte, Perplexity, ServiceNow, Siemens, and Zoom.

NVIDIA also announced the Nemotron Coalition, a collaboration of AI labs working to advance open frontier models, with Cursor, LangChain, Mistral AI, Perplexity, and Reflection AI as founding members.

Business Impact: Nemotron 3 Super's combination of a 1M-token context window and 5x throughput advantage makes it a practical option for multi-agent pipelines where a coordinator model needs to maintain long context while orchestrating faster subagents. Its open weights and broad cloud availability lower deployment barriers for teams evaluating open alternatives to proprietary model APIs. The MoE architecture means teams running it on-premise can keep compute costs manageable despite the total parameter count.

Quick Bytes

  • MCP hits 97 million monthly SDK downloads. Anthropic published an ecosystem report in March showing the Model Context Protocol crossed 97 million monthly SDK downloads, up from roughly 2 million at its November 2024 launch. OpenAI, Google, xAI, Mistral, and Cohere all support MCP, and the server ecosystem now includes 5,800+ community and enterprise integrations.
  • Anthropic Institute launched. Anthropic formally established the Anthropic Institute on March 11, a research unit led by co-founder Jack Clark to study AI's economic, societal, and security impacts. The institute consolidates its Frontier Red Team, Societal Impacts, and Economic Research teams. Hires include Matt Botvinick from Google DeepMind and Zoë Hitzig from OpenAI.
  • Google Gemini reaches 750 million users. Google reported Gemini has reached 750 million monthly active users in March, up from the 500 million milestone reported earlier in the year. Gemini 3.1 Pro is available across Google Cloud Vertex AI and Gemini Enterprise, with updated Workspace integrations across Docs, Sheets, Slides, and Drive.

Industry Impact Analysis

The Claude Mythos leak is the most consequential development this week, and not only because it reveals a more powerful model in development. The accidental disclosure of internal safety documentation, including Anthropic's own assessment that Mythos poses "unprecedented cybersecurity risks," sets an unusual precedent: an AI company's pre-release threat assessment becoming public before the product ships. That changes the tone of the eventual announcement significantly. Security researchers and enterprise buyers will hold the public release to the standards Anthropic set for itself in those leaked materials.

The OpenAI commerce pivot and Google's migration toolset are both product decisions driven by the same underlying pressure: the cost of switching AI tools is still high enough that it protects incumbents. OpenAI is trying to make ChatGPT stickier through commerce integration. Google is trying to remove the switching cost from Gemini's side. These are mirror-image strategies: one raises the cost of leaving, the other lowers the cost of arriving. The outcome will depend on whether users value integrated purchasing or context continuity more.

NVIDIA's Nemotron 3 Super is worth watching for teams building open-source AI infrastructure. The hybrid Mamba-Transformer MoE architecture with 1M-token context at 5x throughput is a meaningful technical offering for agentic pipelines that need to stay on-premise or avoid proprietary model lock-in.

About Azumo

Azumo builds and scales AI-powered software for product teams that need to move fast without cutting corners on quality or security. The team specializes in custom AI agents, enterprise integrations, and production ML systems, with senior LATAM-based engineers who are time-zone aligned with US teams.

If your team is evaluating where Claude Mythos, Nemotron 3, or the new ACP commerce layer fits in your product architecture, Azumo's AI practice covers the full stack from design to deployment.

Sources

This newsletter is curated by Azumo's AI Intelligence Scanner to help engineering leaders and product teams stay current on AI developments that affect architecture, tooling, and strategy decisions.

Are You New to Outsourcing?
We Wrote the Handbook.

We believe an educated partner is the best partner. That's why we created a comprehensive, free Project Outsourcing Handbook that walks you through everything from basic definitions to advanced strategies for success. Before you even think about hiring, we invite you to explore our guide to make the most informed decision possible.