The Singularity Daily Digest

Google launched a world model you can actually use

Google DeepMind released Project Genie on January 29th. It's the first consumer-facing world model that lets you generate interactive 3D environments from text or images and actually walk around inside them.

A world model is exactly what it sounds like - AI that can simulate how a world works, including physics, space, and how things change over time. It's different from image or video generation because you can interact with what it creates.

Genie 3 runs at 20-24 frames per second in 720p resolution. Sessions are capped at 60 seconds because of how compute-intensive it is - each user gets a dedicated chip while they're using it. The model remembers environments up to one minute back, so you can backtrack and it still knows where you were.

DeepMind explicitly called world models "a key stepping stone on the path to AGI" because they let you train AI agents across unlimited simulated environments. The competition is heating up: Fei-Fei Li's World Labs released Marble in November, Runway shipped their first world model in December, and Yann LeCun left Meta to start Advanced Machine Intelligence Labs with a reported $5 billion valuation target - focused specifically on world models.

Genie is available now to AI Ultra subscribers at $250/month.

DeepMind open-sourced a model that reads the "junk" in your DNA

The day before Genie launched, DeepMind released the source code and model weights for AlphaGenome.

Here's the context: 98% of your genome is "noncoding DNA" - meaning it doesn't directly create proteins. For a long time, scientists called this "junk DNA" because they didn't understand what it did. Turns out it plays a massive role in regulating how genes express themselves, which affects everything from disease risk to how your body develops.

AlphaGenome can process sequences up to 1 million base pairs at single-letter resolution and predict how variations in this noncoding DNA affect gene expression. It runs on a single NVIDIA H100 GPU and outperformed the previous best model by 25% on gene expression tasks.

Nearly 3,000 scientists across 160 countries have already used it. The API handles about 1 million requests daily. DeepMind also announced they're providing AlphaGenome to all 17 U.S. Department of Energy national laboratories.

The Stargate Project is actually being built

Remember when OpenAI, SoftBank, and Oracle announced the $500 billion Stargate Project? It's not just an announcement anymore - over $100 billion has already been deployed.

The flagship facility in Abilene, Texas operates at 1.2 gigawatts across ten 500,000-square-foot buildings. For context, 1.2 gigawatts is enough to power roughly 900,000 homes. Oracle has secured permits for three small modular nuclear reactors just to power this thing.
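The homes-powered comparison is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch (the implied per-home draw is my calculation, not a figure from the announcement):

```python
# Sanity check on the "1.2 GW ≈ 900,000 homes" comparison.
facility_watts = 1.2e9   # 1.2 gigawatts
homes = 900_000

# Implied continuous draw per home
avg_draw_kw = facility_watts / homes / 1_000
print(f"Implied average draw per home: {avg_draw_kw:.2f} kW")

# A typical US household uses roughly 10,000-11,000 kWh per year,
# i.e. about 1.2 kW of average draw, so the comparison checks out.
annual_kwh = avg_draw_kw * 24 * 365
print(f"Implied annual use per home: {annual_kwh:,.0f} kWh")
```

The implied figure of about 1.33 kW per home is close to the actual US household average, so "roughly 900,000 homes" is a fair characterization.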

Expansion sites are planned across Texas, New Mexico, Ohio, and Michigan, plus international campuses in Abu Dhabi (1 GW) and Norway (500 MW running on hydropower).

The grid is straining. PJM Interconnection - which serves 65 million people across 13 states - projects a 6 gigawatt shortage by 2027, with data centers responsible for $23 billion in capacity costs. California developers have requested 18.7 GW of data center capacity, which is enough to power every household in the state. Senator Bernie Sanders has called for a national moratorium on data center construction.

The chip war is escalating

NVIDIA and AMD are both racing to release their next generation of AI chips by the second half of 2026.

NVIDIA's new Rubin platform promises to cut inference costs by 10x compared to their current Blackwell chips. Inference is the process of actually running AI models to get outputs - every time you send a message to ChatGPT or generate an image, that's inference. It's the most expensive part of operating AI at scale, so cutting that cost by 10x is a big deal for anyone running these systems.
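To see why that matters financially, here is a hypothetical cost model. Every number below is made up for illustration - these are not NVIDIA's or anyone's actual prices - only the 10x ratio comes from the announcement:

```python
# Hypothetical illustration of what a 10x inference cost cut means.
# The price and volume are assumptions; only the 10x ratio is from the claim.
cost_per_million_tokens = 2.00   # assumed serving cost on current hardware ($)
tokens_per_day = 50e9            # assumed daily token volume for a large service

daily_cost_now = tokens_per_day / 1e6 * cost_per_million_tokens
daily_cost_rubin = daily_cost_now / 10   # claimed 10x reduction

print(f"Daily serving cost today:  ${daily_cost_now:,.0f}")
print(f"Daily serving cost at 10x: ${daily_cost_rubin:,.0f}")
print(f"Annual savings:            ${(daily_cost_now - daily_cost_rubin) * 365:,.0f}")
```

At these assumed numbers, a $100,000/day inference bill drops to $10,000/day - nearly $33 million a year saved for a single service, which is why inference cost is the metric everyone watches.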

The Vera Rubin NVL72 combines 72 GPUs into a single rack-scale system. Instead of individual graphics cards spread across a data center, this is 72 of them packaged together into one unit the size of a server rack - essentially a supercomputer in a box. This makes it easier to run massive AI models that need lots of chips working together.

AMD's competing Helios platform matches that 72-GPU configuration. CEO Lisa Su claims the MI500 series delivers up to 1,000x performance gains over their previous generation. Both companies are trying to ship by late 2026.

On the policy side, the Trump administration changed export controls for AI chips going to China. Previously, chips like NVIDIA's H200 were under "presumption of denial" - meaning they'd be blocked unless there was a good reason to approve them. Now it's "case-by-case review" - less restrictive, but still controlled.

At the same time, they imposed a 25% tariff on advanced AI chips passing through the U.S. to other countries.

The Remote Access Security Act passed the House 369-22, extending export controls to cloud GPU rentals. This closes a loophole where Chinese companies could access American AI chips remotely through cloud services without ever importing physical hardware.

Autonomous vehicles hit real scale

Waymo expanded to Miami on January 28th (60 square miles, nearly 10,000 pre-registrations) and San Francisco International Airport on January 29th - making SFO the first California international airport with autonomous ride-hailing.

The numbers: Waymo now completes over 250,000 trips per week and has done more than 20 million total rides. They're targeting 1 million weekly trips by year-end.

Tesla's robotaxis started operating without safety monitors in Austin. The Cybercab targets operational costs of $0.20 per mile, including energy, maintenance, and insurance.

NVIDIA released Alpamayo, the first chain-of-thought reasoning model specifically for autonomous vehicles - a 10-billion-parameter system trained on 1,700+ hours of driving data. JLR, Lucid Motors, and Uber are already using it for Level 4 autonomy development. Jensen Huang declared: "The ChatGPT moment for physical AI is here."

Healthcare AI is becoming infrastructure

OpenAI's healthcare report revealed that over 40 million people use ChatGPT daily for health questions, and health-related queries now account for more than 5% of all messages on the platform. Two-thirds of U.S. physicians and half of nurses now use AI for clinical tasks.

OpenAI launched ChatGPT for Healthcare powered by GPT-5.2, rolling out to Memorial Sloan Kettering, Stanford Children's, Cedars-Sinai, and other major health systems.

NVIDIA and Eli Lilly announced a $1 billion, five-year partnership for AI-accelerated drug discovery. Chinese researchers published DrugCLIP in Science - a system that screens millions of compounds against thousands of protein targets 10 million times faster than current methods.

The FDA has now authorized 1,357 AI-enabled medical devices, though a Harvard Law analysis notes that "the vast majority of medical AI is never reviewed by a federal regulator."

AI company valuations keep climbing

OpenAI is in discussions for a funding round that could value them at $830 billion, with SoftBank potentially investing an additional $30 billion. Their 2025 revenue jumped to over $20 billion (from $6 billion in 2024), but they're still operating at a loss. Disney joined the cap table with $1 billion tied to content licensing for Sora.

xAI closed a $20 billion Series E at a $230 billion valuation, even as UK regulators launched an investigation into Grok's deepfake image generation and Malaysia and Indonesia became the first countries to block the chatbot entirely.

Anthropic now holds 32% of enterprise LLM market share by usage versus OpenAI's 25%. They're seeking $10 billion at a $350 billion valuation.

Skild AI raised $1.4 billion led by SoftBank at a $14 billion+ valuation, building what they call "a single, general-purpose brain that can control any robot for any task."

AI is writing a lot of the code at the companies building AI

AI now writes roughly 30% of Microsoft's code and over 25% of Google's code.

Agentic AI adoption surged 327% in the second half of 2025 according to Databricks. Gartner predicts 40% of enterprise applications will embed AI agents by year-end.

But employee anxiety is rising: 40% of workers now fear AI job loss, up from 28% in 2024. Amazon announced 16,000 corporate layoffs on January 28th. Anthropic CEO Dario Amodei published a 20,000-word essay warning AI will cause "unusually painful" disruption, acting as a "general labor substitute for humans."

The market is split on AI spending

Microsoft and Meta both announced massive AI infrastructure spending within hours of each other yesterday. Microsoft's stock dropped 10%, wiping out $400 billion in market value. Meta gained $176 billion.

Same bet. Opposite reactions.

The difference came down to visibility on returns. Meta's AI-driven advertising platform is already running at a $60 billion annual rate - investors can see where the money is going and what it's producing. Microsoft reported strong revenue but Azure (their cloud platform) growth is slowing, and 45% of their $625 billion demand backlog is tied to OpenAI's uncertain trajectory.

Microsoft is spending $37.5 billion per quarter on AI infrastructure. Meta announced plans to spend up to $135 billion on AI infrastructure in 2026 - nearly double last year. The hyperscalers (Microsoft, Meta, Google, Amazon) are planning $470-505 billion in combined AI capex this year.
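Those figures are easier to compare on an annual basis. A quick sketch of the arithmetic (the quarterly-to-annual extrapolation for Microsoft is an assumption, since capex is rarely flat across quarters):

```python
# Annualizing and comparing the AI capex figures above.
msft_quarterly = 37.5    # Microsoft, $B per quarter on AI infrastructure
meta_2026_plan = 135.0   # Meta, $B planned for 2026

# Assumes a flat quarterly run rate - a simplification
msft_annualized = msft_quarterly * 4
print(f"Microsoft annualized: ${msft_annualized:.0f}B")
print(f"Meta 2026 plan:       ${meta_2026_plan:.0f}B")

# Combined hyperscaler plans: $470-505B across four companies
low, high = 470, 505
avg_per_company = (low + high) / 2 / 4
print(f"Hyperscaler combined: ${low}B-${high}B (~${avg_per_company:.0f}B per company)")
```

On these numbers, Microsoft's run rate implies roughly $150 billion a year - comfortably above Meta's planned $135 billion, and each of the four hyperscalers is averaging well over $100 billion.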

The question isn't whether to bet on AI anymore. It's whether anyone can prove the bet is paying off.

The bottom line

The question has shifted. It's no longer about what AI can do - that's increasingly obvious. It's about who captures the value from what it can do.

Infrastructure is being built at unprecedented scale. Capabilities keep advancing. But Microsoft just lost $400 billion in market value because investors couldn't see clearly enough where the returns are coming from.

The gap between AI capability and AI value capture is now the central question of 2026.

That's today. More tomorrow.

Matthew Ortiz

CEO, OTZ Group

Want to discuss what this means for your business?

The pace of AI development is accelerating. Let's talk about how to position your organization for what's coming.

Schedule a Conversation