Josh Woodward and the Future of Everything
How a kid who set up web feeds at his Oklahoma high school became the person Google is betting its entire AI future on. An investigation.
Before Josh Woodward was Vice President of Google Labs and the Gemini app, before TIME Magazine named him one of the 100 Most Influential People in AI, before Demis Hassabis personally selected him to lead the most important product at one of the most important companies in human history, he was a kid at Deer Creek High School in Edmond, Oklahoma, setting up web feeds of sports events, which is the kind of origin story that sounds made up but isn't, and which I've now verified through multiple sources because I've been researching Josh Woodward for the past seventy-two hours and I'm not entirely sure what day it is anymore or whether the sun has risen or set since I started, but I can tell you with absolute certainty that in the early 2000s, a teenager in Oklahoma looked at high school basketball and thought: what if the grandparents who can't make the drive could still watch?
I want you to sit with that for a moment, because it's easy to skip past and it contains, I think, everything you need to know about what comes next. The idea that you could stream high school athletics to people who couldn't attend seemed, to most people in 2003 or whenever this was, like a curiosity, like a novelty, like the kind of thing a teenager with too much time and not enough supervision might do on a weekend when his friends were busy. To Woodward, it was obvious in the way that things are only obvious to people who see the world a certain way: here was a tool that could help people, here was something people needed, here was a gap between what technology could do and what people actually wanted it to do, and he was going to close that gap, even if the gap was just some grainy video of a JV basketball game streaming to a retirement community in Norman.
He was seventeen, maybe eighteen, and he did not know that he would spend the next two decades asking the same question at progressively larger scales, through Chromebooks and Google Pay and something called the Next Billion Users initiative and eventually through the most consequential AI product race in human history, until the scale became Google's entire future and the question became existential and the stakes became whether a company with 180,000 employees and more PhD researchers than most universities could figure out how to make something that people actually wanted to use. But I'm getting ahead of myself, which is a problem I have, and which you'll have to forgive me for, because when you spend three days doing nothing but reading about one person, the chronology starts to blur and everything feels like it's happening at once.
The Intern
In the summer of 2009, a twenty-six-year-old from Edmond, Oklahoma, fresh from St. Anne's College at Oxford with a master's degree in comparative government, took a product marketing internship at Google's London office, which is not, traditionally, the kind of career move that leads to running AI at the world's most important technology company. A master's degree in comparative government from Oxford is the kind of credential that leads to writing op-eds about foreign policy for magazines that your parents pretend to have heard of, or to a career in academia where you publish papers that seventeen people read, or to diplomacy, or to the State Department, or, if things go poorly, to consulting, where you tell other people what to do for a living because you couldn't figure out what to do yourself. It is not the kind of credential that leads to making decisions about whether your phone should be able to generate photorealistic images of you as a medieval knight, and yet here we are, because the world is strange and people's lives don't follow the trajectories you'd expect them to.
Woodward had spent his time at Oxford studying how U.S. military and economic foreign aid affected democracy in developing nations, which is, and I cannot stress this enough, an extremely serious topic, the kind of thing you write a dissertation about while furrowing your brow and drinking tea out of a mug with the Oxford crest on it and having long conversations about governance and institutions and the delicate machinery of nation-building. He had originally planned to continue for a doctorate, to spend his life in libraries surrounded by books about political development, to become the kind of person who gets quoted in the New York Times when something happens in a country most Americans can't find on a map. Instead, for reasons that I have not been able to fully determine despite reading everything about him that exists on the internet, he took an internship at Google, in London, working with small businesses to help them understand Google's products, which is about as far from Oxford political theory as you can get without actually leaving the planet.
I don't know what happened. I don't know if he woke up one morning and thought, "You know what, I've spent enough time studying how outside intervention shapes political development in nations that are still figuring out who they are, I should probably go help a fish and chips shop understand Google Ads." I don't know if someone at Oxford took him aside and said, "Look, Josh, you're very smart, but have you considered that the future is computers and not, you know, lengthy analyses of USAID programs in sub-Saharan Africa?" I don't know if he just saw a job listing and thought it looked interesting, or if he was broke and needed money, or if he had some premonition that the next two decades would belong to companies like Google in ways that would make his Oxford research seem, in retrospect, like a detour. What I know is that he decided to see where Google would take him, and Google would take him, over the next sixteen years, through Chromebooks (he helped ship the first several generations) and through Google Pay and Google Wallet and through something called the Next Billion Users initiative (he co-founded it) and through AI Test Kitchen and Project Tailwind and NotebookLM and eventually to the job he has now, which is arguably the most important product job in the technology industry, at a time when technology jobs have never mattered more.
This is, in retrospect, a very strange career path, like if you told me that the Secretary of Defense got his start as a birthday party magician and then gradually worked his way up through clowning and then balloon animals and then somehow ended up controlling the nuclear arsenal. "Yes, he used to pull rabbits out of hats for six-year-olds, and now he decides whether to invade countries." It doesn't make sense, and yet if you squint at Woodward's trajectory, you can see the through-line: he has always been interested in what people need, in the gap between what technology can do and what people actually want it to do, in the question of how you take something powerful and complicated and make it useful to someone who doesn't care about the technology, who just wants to stream a basketball game to their grandparents or manage files on a phone with limited storage or understand a stack of research papers without reading them seventeen times.
The Laugh
Everyone who works with Josh Woodward mentions the laugh, and I mean everyone, to the point where it's become a kind of running joke in my notes, a tally mark I add every time another article or interview or podcast transcript mentions the laugh, the infectious laugh, the disarming laugh, the goofy laugh that comes out mid-conversation, the warmth that people attribute to his Midwestern upbringing, as if Oklahoma instills in its children some kind of fundamental good humor that survives even the soul-crushing bureaucracy of a company with 180,000 employees. I have not met Josh Woodward; I have never heard Josh Woodward laugh; I have no idea what this laugh sounds like, whether it's a chuckle or a guffaw or something in between; but I have now read approximately forty-seven articles and interviews and podcast transcripts about him, and in every single one, someone mentions the laugh, to the point where it's beginning to haunt me, where I hear it in my dreams, this phantom laugh from a man I've never met, echoing through the halls of Google's Mountain View campus like the ghost of some long-dead employee who was too cheerful for his own good.
The coaches from Google's School for Leaders, who worked with him before his promotion, described it this way: "What we remember most, aside from his scintillating intelligence, was his infectious enthusiasm and laughter." Aside from his scintillating intelligence, they said, as if the intelligence were a footnote, as if the real story were the laugh, which I think tells you something about the man that his intelligence is the thing people mention second. Caesar Sengupta, who recruited Woodward to the Next Billion Users project and now runs an AI finance startup called Arta, told CNBC that he'd never seen Woodward get angry with anyone, not once, not ever, which is the kind of statement that sounds like hyperbole until you consider that Sengupta worked closely with Woodward for years, through the bureaucracy of Google, through product launches that went well and product launches that didn't, through all the petty politics and territorial disputes that plague any organization of that size, and apparently through all of it, Woodward just... didn't get angry. Sengupta added that he used to tease Woodward, suggesting he would someday be Google's next CEO, which was a joke at the time but which feels, in retrospect, less like a joke and more like a prophecy.
I get angry when my coffee is too hot; I get angry when my coffee is too cold; I get angry when my coffee is the correct temperature but the mug has a chip in it that I keep forgetting about and that catches my lip in an unpleasant way every time I take a sip. Josh Woodward has apparently never gotten angry at anything, including, presumably, the time Google's AI told people to put glue on pizza to help the cheese stick, which was based on a years-old joke someone made on Reddit that Google's AI Overviews cited as if it were genuine culinary advice. I would have gotten angry about that. I would have gotten very angry about that. Josh Woodward, apparently, did not, or if he did, no one who worked with him noticed, which is perhaps even more impressive.
This is the kind of detail that usually gets relegated to the personality section of a profile, the part you skim past to get to the business stuff, to the strategy and the products and the competitive dynamics, but I'm putting it up front because I think it matters, because I think it might be the whole thing, because I think that in a world where most tech executives are wound so tight they vibrate at a frequency only dogs can hear, where the default mode of Silicon Valley leadership is a kind of performative intensity that reads as either visionary or unhinged depending on how the stock price is doing, a man who laughs easily and never gets angry might be exactly what Google needs, might be exactly what AI needs, might be the difference between building technology that serves people and building technology that terrifies them. Or maybe I've just been researching this too long and I've lost all perspective, which is also possible, and which I leave for you to judge.
The Problem Google Didn't Know It Had
Here is something I've been thinking about for a long time, something I think I finally understand, or think I think I understand, because at this point in my research I'm no longer certain of anything except that Josh Woodward laughs a lot and grew up in Oklahoma: Google won the model race. By almost every objective measure, Gemini 2.5 and now 3.0 are among the most capable AI systems in existence, the benchmarks say so, the researchers say so, the people who actually understand transformer architectures and attention mechanisms and all the other technical concepts I pretend to understand at dinner parties say so, and yet for years none of it mattered, because while Google was winning benchmarks, OpenAI was winning users.
ChatGPT crossed 800 million weekly active users and became a verb; people started saying "I ChatGPT'd it" the way they used to say "I Googled it," which is the kind of sentence that must have kept Sundar Pichai awake at night, pacing around his bedroom in whatever expensive pajamas tech billionaires wear, muttering to himself about market share and brand erosion and how the company that invented the attention mechanism that made all of this possible was somehow losing to a startup that didn't exist five years ago. The problem wasn't the models; Google's models were fine, Google's models were arguably better than fine, Google's models were winning on every metric that researchers care about. The problem was everything around the models, the product experience, the user interface, the basic question of whether the thing you built was something people actually wanted to use. Google had the best ingredients in the world and couldn't figure out how to cook; they had wagyu beef and black truffles and saffron from the hills of La Mancha, and they kept serving it as a casserole, a gray and vaguely unsettling casserole that technically contained the finest ingredients known to humanity but that no one wanted to eat.
Consider the Gemini image generation disaster of February 2024, when Google released an image generator that was, by most technical measures, excellent, and that almost immediately generated pictures of the Founding Fathers as people of color, and Black Vikings, and a woman as Pope, and the entire thing became a national embarrassment that made Google look either politically captured or technically incompetent or both, depending on which cable news network you watched. Google's CEO sent an internal memo calling the release "unacceptable" and they had to pause the entire feature, which is the kind of thing that happens when you build an extraordinarily sophisticated image generation system and apparently don't think to ask it to draw George Washington before releasing it to the public. Or consider what happened in May 2024, when Google's AI Overviews started telling people to put glue on pizza and eat rocks, because someone at Google had decided that the best way to answer questions was to have an AI read the internet and summarize whatever it found, including joke posts and satire and obvious nonsense, without any mechanism for determining whether the information was, you know, true. "Should I put glue on my pizza?" "Yes," said Google, citing that Reddit joke I mentioned earlier, "it helps the cheese stick." The underlying model was fine; the product implementation was not; and these, it turns out, are different things, perhaps the most important different things, and for years Google could not understand that they were different things.
And then, in April 2025, Google did something unusual: it acknowledged that there was a difference between having good technology and having good products, and it found someone who understood.
The Next Billion
To understand why Google chose Josh Woodward, you have to understand how he spent the decade before anyone outside of Google had heard his name, and frankly you have to want to understand it, because I'm about to tell you about internal Google initiatives, and that's the kind of thing that makes people's eyes glaze over at parties, assuming you get invited to parties, which, if you're reading a 28-minute article about a Google executive on New Year's Eve, maybe you don't, in which case we're in this together, you and I, two people who could be doing literally anything else right now but instead are thinking about organizational structures at a company neither of us works for.
In the mid-2010s, Sundar Pichai tapped Caesar Sengupta to start something called the Next Billion Users project, which was based on the premise that a billion people were about to come online for the first time, mostly in emerging markets like India and Southeast Asia, and that these people would have a fundamentally different relationship with technology than the people who had been online for decades, the people who had grown up with computers, the people who thought "clearing the cache" was a normal thing to say to another human being. Sengupta needed people who could think this way, who could look at technology from the perspective of someone who had never used it before, who could ask what people needed rather than what technology could do, and Woodward was "one of the first people I asked to join," Sengupta told CNBC.
At NBU, Woodward did what he would do everywhere: he wrote. He wrote a weekly newsletter that was concise and thought-provoking, and it became so popular that people started emailing the author asking to be added to the distribution list, which, at Google, a company with 180,000 employees and more internal communications than anyone could possibly read, a company that sends so many emails they probably have an internal email about having too many internal emails, is remarkable. Here was a newsletter so good that people sought it out, people emailed the author, people said "please add me to your list, I must read what you write," which is like being the one good restaurant in an airport, the kind of thing that shouldn't exist, that the economics don't support, and yet there it is, serving edible food while everyone around it serves microwaved sadness. Woodward also launched products at NBU, including Files Go, a file management app designed for phones with limited storage that didn't need blockchain or any of the other buzzwords that make investors salivate but that solved a real problem for real people and that millions of people downloaded, and the pattern was being established, the pattern that would define his entire career: find out what people need, build something that gives it to them, don't overcomplicate it.
The Kitchen
In 2022, Google gave Woodward his first major AI assignment: he became the first product manager behind something called AI Test Kitchen, an initiative to help people experience and give feedback on emerging AI technologies, which sounds boring when you describe it that way but which was, in May 2022, before ChatGPT existed, before most people had ever interacted with a large language model, before anyone outside of AI research knew what "LaMDA" meant (most people thought it was either a Greek letter or a very pretentious nightclub), genuinely radical. At Google I/O 2022, Woodward took the stage to demonstrate three features built on LaMDA 2: "List It," which could break down complex goals into subtasks; "Imagine It," which could generate descriptions of imaginary places; and "Talk About It," which could have open-ended conversations about topics you were interested in. In the demo, Woodward told the AI he wanted to plant a vegetable garden, and it instantly provided a list of subtasks (make a list of what to grow, research what grows best in your area, prepare the soil), which seems mundane now, obvious even, the kind of thing my grandmother's refrigerator can practically do at this point, but which in May 2022 was a glimpse of a future that most people hadn't imagined.
Woodward was already trying to figure out what people would use these things for, not what the technology could do in theory but what people would actually do with it in practice, which is apparently a rare skill, the skill Google needed, the skill they didn't know they needed until they had spent several years not having it and releasing products that were technically impressive and practically useless.
The Labs
In 2022, Clay Bavor restarted Google Labs, a division dedicated to experimental AI products, and Woodward was his first choice to help lead it. Bavor, who's now co-founder of an AI agent startup called Sierra, later explained why: Woodward "was among the very earliest people in the company to see the potential in large language models for building products," which is an important distinction, because lots of people at Google understood the technology (Google employs a staggering share of the world's AI researchers and has more PhDs per square foot than most universities), but Woodward understood what to build with it. Google was like a carpenter with every tool in the world, standing in front of a pile of the finest lumber money could buy, saying "I could build anything, I could build a house, I could build a boat, I could build a cathedral," and then building a really complicated birdhouse, an absolutely exquisite birdhouse with joinery that would make a master craftsman weep, but still, at the end of the day, a birdhouse, while everyone else was building houses.
Under Woodward, Google Labs developed a culture that was unusual for Google, a culture of velocity, where ideas would go from concept to working product in users' hands within 50 to 100 days. Fifty to 100 days, at Google, a company where, legend has it, it takes 50 to 100 days to get approval for a new stapler, a company with so many processes and review committees and stakeholder alignment meetings that the word "meeting" has probably been added to the official list of carcinogens. "We put a huge premium on how fast you can go from idea to being in people's hands," Woodward has said, and "the biggest risk is making something no one wants, and the longer you wait to find that out, the worse off you'll be," which is so obvious that it's almost insulting to write down, the kind of thing they teach in the first week of business school right after "buy low, sell high" and right before "don't embezzle," and yet Google, one of the most successful companies in human history, apparently needed someone to come along and say it out loud, and when he did, everyone nodded sagely, as if this were wisdom from the ages.
He instituted something he called "block": if a team member encountered a bureaucratic obstacle, they could escalate it, and a dedicated team would handle it, the obstacle would be removed, the team member could keep working. He broke internal protocol by letting the NotebookLM team use Discord for collaboration, even though Google preferred its own internal tools, because the team needed to talk to users and Discord was where the users were and Woodward just made it happen, protocol be damned. Clay Bavor described Woodward's approach as "a force that breaks through bureaucracy and moves at the speed of a startup," which, at Google, a company that has so much bureaucracy they probably have bureaucracy about their bureaucracy, a company where I'm told you need to fill out a form to fill out a form, is the highest possible praise.
The Notebook
One of the first projects at Labs was something called Project Tailwind, which came from a 20% time project by a senior product manager named Raiza Martin. (Google's 20% time allows employees to spend one day a week on projects of personal interest, though in practice this means 20% time on top of their regular work, so really it's 120% time, but that's between Google and its employees' therapists.) The idea was simple: an AI notebook that could analyze documents you uploaded and help you understand them, and Martin later recalled that "Notebook started as a 20% project, as things kind of do at Google, as they used to," though she wouldn't even call it a 20% because it was way more than that, which means this woman was doing 120% of 120%, which is 144%, and that's not how math works but it's how startups work, even startups that exist inside giant corporations.
Woodward saw the potential and helped shepherd the project through iterations; the first prototype was built in six weeks with four to five people working part-time. He also made an unusual hire: Steven Johnson, an author who had spent his career writing books about the history of science and technology, who had never had a full-time boss, who had no connection to Google, who lived in New York. Bavor and Woodward had read everything Johnson had written and were fans, so they cold-emailed him, which I love, the idea that two Google executives with all of Google's resources and all of Google's recruiting infrastructure decided to bring on a writer by cold-emailing him like they were pitching a Substack collaboration. "Hey, we loved your book, want to come work on AI? We have good snacks." The job ladder didn't exist, Woodward has said, so he had to go figure out with HR how to create a role, and they created something called "visiting scholar," at Google, a company with approximately 47,000 distinct job titles and a classification system so complex it probably requires its own HR department. "This role doesn't exist? Fine. Now it exists. Next problem."
The project became NotebookLM, and NotebookLM became, against all expectations, one of Google's most beloved AI products, not because it had the flashiest technology or won the most benchmarks but because it just worked, because people used it and liked it and told other people about it, which is, apparently, remarkable, apparently the kind of thing that surprises people at Google. "Wait, people... like it? They're using it voluntarily? Not just to test it? They're actually using it?"
The Eyes
At Google I/O 2024, Woodward took the stage to demonstrate a new feature for NotebookLM, and in classic Woodward fashion, he made it personal, he brought his family into it, he talked about his son. "In this notebook, I've been using it with my younger son," he told the audience, "and I've added some of his science worksheets, a few slide decks from the teacher, and even an open source textbook full of charts and diagrams." His son Jimmy, he explained, learned best by listening, so they had prototyped something called Audio Overviews: NotebookLM would take all those documents and turn them into a podcast-like discussion between two AI hosts, personalized for Jimmy. In the demo, the AI-generated hosts discussed physics, and then Woodward verbally asked, "Hey, can you give my son Jimmy a basketball example?" and a few seconds later the two AI hosts pivoted and started explaining physics concepts through basketball, Jimmy's favorite sport.
"Pretty cool, right?" Woodward said. "I gotta say, the first time my son heard this, you should have seen how big his eyes got. Because he was gripped. They were talking to him. He was learning science through the example of basketball."
You should have seen how big his eyes got. I've watched this demo approximately twelve times now, I've watched a Google executive talk about his son's eyes getting big, I've watched him light up while talking about making AI that helps a kid learn physics through basketball, and I keep thinking: this is not normal, this is not how tech executives usually talk, tech executives usually talk about engagement metrics and quarterly targets and synergistic value propositions, they do not usually talk about their son's eyes getting big, and yet here was Woodward, on stage at one of the biggest tech conferences in the world, talking about exactly that, and meaning it, and the internet went wild when Audio Overviews launched publicly, not because the underlying technology was revolutionary but because the product experience was delightful, because it made people happy, because it worked the way people wanted things to work.
The 28 Minutes
There's a story Woodward tells about his fourth-grade son, a different kid with a different challenge. "A couple of weekends ago, I'm with my fourth grade son. We are struggling right now in our household to implement chores." I love that he said "struggling to implement chores," not "trying to get the kids to do chores," but "struggling to implement," which is product manager brain, this is someone who has spent so long in the world of product development that household chores have become a feature to be shipped, a user experience to be optimized, a problem to be solved with the right tooling. Using an AI coding assistant, they decided to build an app. "We created a chore tracking app. 28 minutes, 45 cents. Done."
Twenty-eight minutes, forty-five cents, a father and a fourth-grader building an app together on a weekend. The AI was the tool and the connection was the point, and the chores are, presumably, still not getting done, because no technology in human history has ever successfully motivated a child to do chores, but that's not the point, the point is that they spent 28 minutes together, building something, solving a problem, creating something that didn't exist before. Woodward calls this "software abundance," the ability to create disposable software tailored to specific needs without the overhead of traditional development, and he says it's never been easier for a generalist with an idea to make something new, which is perhaps the most Woodward sentence imaginable: a generalist with an idea, like a kid from Oklahoma with an economics degree and a master's in comparative government, like someone who didn't know what a product manager was until he became one, like someone who built his career not on technical expertise but on understanding what people need.
The Joystick
Woodward has a concept he returns to repeatedly, something he calls the "AI joystick." "Actually, one of the marketing people came up with it," he admits, which is so perfectly Woodward, he's not taking credit, he's giving credit to the marketing person, he's the VP of Google Labs and Gemini, running one of the most important products at one of the most important companies in history, and he's making sure you know that the metaphor came from someone else. The idea is that in the best AI products, users feel like they're steering, they're not handing over control to an autonomous system that does things for them, they're using a tool that amplifies their own capabilities, they're the player and the AI is the joystick. "What's really interesting is it gives people a lot of control," Woodward explains; "they feel like they're steering the AI," which is different from how most AI products work, because most AI products are trying to be autonomous, they want to do things for you, but Woodward's products want to do things with you.
The Mariner
In late 2024, Labs shipped Project Mariner, an AI agent that could control the Chrome browser and navigate the web and take autonomous actions, and the project went from concept to users' hands in 84 days, built as a Chrome extension "just because it was quick to build," which I love, they built an AI that can control your browser and they built it as a Chrome extension because that was the fastest way to do it, they didn't spend six months debating the optimal architecture, they didn't form a committee to evaluate platform options, they just built the thing. "Is it possible for an AI model to drive your computer? Yes," Woodward has said. "That's a huge new capability. Is it accurate? Sometimes. Is it fast? Not at all yet." Is it accurate? Sometimes. Is it fast? Not at all yet. This is not how tech executives usually talk; tech executives usually say things like "We're excited to announce our groundbreaking new capability that represents a paradigm shift in human-computer interaction"; they do not usually say "Is it accurate? Sometimes." Woodward doesn't oversell; he ships things that work imperfectly, acknowledges their imperfections, and iterates, which is apparently radical, apparently the kind of behavior that makes people at Google stand up and take notice. "Wait, he admitted it doesn't work perfectly? And then he shipped it anyway? And then he's going to make it better? Is that allowed?"
The Promotion
In April 2025, Sissie Hsiao stepped down as head of Gemini. She was a 19-year Google veteran who had built Bard in the Code Red days of late 2022, when ChatGPT had terrified Google into action, who had shepherded the rebrand to Gemini, who had done what she had been asked to do. In a memo to staff, she called her time as head of the team "chapter 1" of the Bard story and said she was optimistic about handing the baton to Woodward for "chapter 2." Demis Hassabis, CEO of Google DeepMind, sent his own memo saying the move would "sharpen our focus on the next evolution of the Gemini app," which is corporate speak for "we've been doing something wrong and we're going to do something different," the way a sports team says they're "going in a different direction" when everyone knows what that means, the coach is getting fired.
The Semafor article that broke the news explained the strategic logic: "The AI competition now emphasizes product packaging around models rather than just model capability itself." Product packaging, not model capability; Google had spent years winning the model race, and now they had realized that the model race wasn't the race, the race was product, the race was experience, the race was whether people actually wanted to use the thing you built. They chose Woodward, the guy who laughs easily and never gets angry, the guy who builds products in 50 to 100 days, the guy who talks about his son's eyes getting big, the guy from Oklahoma.
The Melting
Here is what happened in the eight months after Woodward took over Gemini: monthly active users grew from 350 million to 650 million, which is 300 million new users in eight months, which is nearly the entire population of the United States, which is like if everyone in America who wasn't already using Gemini suddenly started using Gemini and then a few million people from Canada joined in just to round out the numbers. In August, Gemini launched something called Nano Banana, a feature that let users blend photos to create personalized figurines, and it went viral, and Amin Vahdat, Google's head of AI infrastructure, would later tell CNBC: "Our TPUs almost melted." Our TPUs almost melted. TPUs are Google's custom AI chips, they cost a fortune, they are some of the most sophisticated computing hardware ever built, and they almost melted, because so many people wanted to make figurines of themselves, which is, I think, the perfect encapsulation of the Woodward era: he built something people wanted so badly that it almost broke Google's infrastructure, he built something so delightful that the data centers nearly caught fire, he built something that made people happy, and there were so many happy people that the happiness threatened to physically destroy the servers.
By September, Gemini had generated over 5 billion images and briefly topped ChatGPT on Apple's App Store; the app that everyone had written off as an also-ran was suddenly winning. Woodward started a "Papercuts" initiative focused on fixing small frustrations that individually seemed minor but collectively degraded the user experience, things like being able to switch AI models mid-conversation, things that a large company would normally deprioritize because they weren't flashy enough, things that actual users actually cared about. He responded to user complaints on X and Reddit, then brought that feedback directly to employees, which, a former NotebookLM designer told CNBC, showed "that level of commitment to the end user I hadn't seen in other leaders." In November, Google launched Gemini 3, and in December, Woodward did an interview with CNBC's Deirdre Bosa and said, "I've never had more fun than right now. It's partly the pace." I've never had more fun than right now, running the most important product at one of the most important companies in history, in the middle of the most consequential technology shift since the internet, with 650 million users and competitors who want to destroy you, and he's having fun, he's having the time of his life, he's laughing that infectious laugh, and the TPUs are almost melting, and the users keep coming, and he's having fun.
The 25 Percent
In his March 2025 interview with Sequoia Capital, Woodward shared a statistic that I have not been able to stop thinking about: "25 percent of all the code's written by AI" at Google. Twenty-five percent, at Google, one of the largest software engineering organizations in the world, a company with some of the best programmers on the planet, a quarter of their code is now written by AI, and I don't know what to do with this information, I don't know if this is terrifying or exciting or both, I don't know if this means programmers are doomed or liberated or something in between, but I know that Josh Woodward said it casually, like it was just an interesting fact, like he was mentioning the weather. "Oh, by the way, a quarter of our code is written by machines now. Anyway, let me tell you about the chore app I built with my son."
He talks about code models that can self-correct and self-heal ("that would totally change the curve on development pace"), about the death of text prompting as we know it ("we'll look back at this time and say, 'I can't believe we tried to write paragraph-level prompts into these little boxes'"), about infinite context and AI systems that know everything about you and a "second brain" that actually works. "I keep thinking it will slow down," he says of the pace of progress, "and it's never slowed down in the last three years." It never slows down, it just keeps accelerating, and Josh Woodward is at the center of it, laughing his laugh, shipping his products, talking about his sons.
The Doors
Woodward borrows a concept from Steven Johnson's writing: "adjacent possibles," the idea that innovation happens at the edges of what's currently possible, in the spaces that have just become accessible. "You walk into this room," Woodward says, "and there's all these doors that are opening up into these adjacent possibles. It feels like 30 doors that you can go explore." Thirty doors, each leading somewhere new, each representing a possibility that didn't exist before: video generation, computer control, long context, agents, each door leading to more doors, an infinite hallway of doors, and somewhere at the end, maybe, something that looks like the future. I don't know what's behind those doors; neither does Woodward, probably; but he's going to open them anyway, he's going to ship something in 50 to 100 days, and it won't be perfect, and he'll admit it's not perfect, and then he'll make it better. That's the pattern; that's always been the pattern; from the web feeds in Oklahoma to the TPUs almost melting in Mountain View.
The Swings
"Take big swings," Woodward says. "A lot of these won't work. Because if they're all working, you're not swinging big enough." This is not something you often hear from a Google executive, or frankly from any executive, because executives like certainty, executives like plans, executives like knowing that the thing they're building will work before they build it, that's the whole point of being an executive, you reduce risk, you manage expectations, you make sure the quarterly numbers look good. Woodward is saying: swing big, fail a lot, if everything works you're not trying hard enough. "We normalize things like failure," he says of Labs, "you have to think about things differently around promotion, compensation, all these things." They normalize failure, at Google, a company that is famous for killing products that don't succeed, a company where, legend has it, being associated with a failed product can end your career; at this company, Woodward has built a culture where failure is normal, where failing means you were trying something hard, where not failing means you weren't ambitious enough. This is either brilliant or insane, possibly both, probably both.
The Question
In that Sequoia interview, the host asked Woodward about his values, about what he was building into the products and, by extension, into the future, and his answer has stayed with me, it's the kind of thing that sounds simple until you think about it, and then it sounds profound, and then you think about it more and it sounds simple again, and then you realize that's the point: "Are you trying to replace and eliminate people, or are you trying to amplify human creativity?" He paused, the laugh presumably momentarily absent. "I'm on the side of wanting to amplify human creativity."
On its surface, this is a modest statement, the kind of thing every AI company says, every AI company has a mission statement about empowering humans and augmenting capabilities and synergistic whatever, it's the kind of thing you put on a slide deck when you're trying to raise funding. But I've now read everything Josh Woodward has said publicly, and I've watched every demo, and I've studied every product, and I think he actually means it, I think when he talks about amplifying human creativity he's thinking about his son's eyes getting big, he's thinking about building a chore app in 28 minutes, he's thinking about the grandparents in Oklahoma who couldn't make the drive to the basketball game. NotebookLM is the proof of concept: it doesn't write your papers for you, it helps you understand your research so you can write better papers; it doesn't replace your thinking, it augments your thinking; the AI is the joystick, you are the player.
The Consequences
Woodward is not naive; he is optimistic, but he is not naive, and there's a difference. He talks about "veracity and truth" as things that will "become way more important," about the challenge of knowing "what is real" as synthetic content becomes increasingly realistic, about the fact that the same technology that can help a kid learn physics can also be used to deceive people at scale. Nano Banana Pro faced criticism for racial misrepresentation issues; the technology that enables magical image blending also enables misleading image manipulation; Woodward's teams have had to navigate these tradeoffs and will keep having to navigate them, because the doors keep opening, and not all of them lead somewhere good. "This is one of those moments where things change," Woodward told Sequoia, "and they change often for generations. And they can change for good or bad. Put it to good use and think about the consequences downstream." This is not the language of a techno-optimist who thinks everything will work out fine; it's the language of someone who understands that power comes with responsibility, that the things you build can outlast you, that consequences matter.
What This Means for You
I make my living from Google's decisions, and so do a lot of people, and when Google changes something we all feel it, we feel it in our traffic numbers and our revenue and our ability to pay rent. Woodward's ascent is a fundamental change, and here's what I think it means:
First, Google is going to get better at products. For years, Google has released technically impressive things that nobody wanted to use, the meme about Google killing products is a meme because it keeps happening, but Woodward's entire career has been about understanding what people actually need, and the trajectory from 350 million to 650 million users in eight months suggests something has changed, something real.
Second, the AI race is going to become a product race. The model improvements are plateauing, everyone's models are getting good, and the differentiation is going to come from what you do with the models; OpenAI understands this, Anthropic understands this, and now Google is starting to understand it, and they have the resources to execute at a scale nobody else can match.
Third, search is going to change in ways we can't fully predict. Woodward's philosophy of "tools for people" suggests that Google is going to try to make AI feel helpful rather than threatening, which might mean AI features that enhance search rather than replace it, which might mean new ways of interacting with information that we haven't imagined yet, which definitely means that the next few years are going to be volatile for everyone in SEO.
Fourth, and this is the one that keeps me up at night, the "helpful content" thing might actually start to mean something. Google has been saying for years that they want to reward content that helps people, and they've been bad at implementing this vision, but Woodward's entire worldview is about understanding what people need, and if he brings this lens to search, the algorithm might finally start to reward the thing Google claims to want, which would be, depending on your perspective, either terrifying or wonderful, possibly both, probably both.
The Intern, Again
I keep coming back to the beginning, to the kid at Deer Creek High School setting up web feeds of sports events, to the intern at Google London working with small businesses, to the product manager asking what people need, to the VP who still talks about his son's eyes getting big. Caesar Sengupta used to tease him that he'd be Google's next CEO, and that was a joke at the time, probably, but Sengupta saw something, he saw someone who combined deep intelligence with genuine warmth, who could explain complicated things simply, who cared about users as actual people rather than as metrics, who laughed easily and never got angry, who might, just might, be able to do something that had eluded Google for years: build AI products that people actually loved.
Woodward has been studying the question of how technology shapes human development for his entire adult life, first academically at Oxford researching how foreign intervention affects democracy, then practically across sixteen years at Google watching the company evolve and struggle and make mistakes and recover. Now he's at the center of the most important technological transition of our time, a 42-year-old from Edmond, Oklahoma, making decisions that will affect billions of people, still laughing, still shipping, still asking what people need.
I don't know if he'll succeed; the forces arrayed against him are enormous, OpenAI is formidable, the internal politics at Google are legendary, the technology is moving faster than anyone can fully comprehend, and the TPUs might actually melt next time. But I find myself, against my usual cynicism, hopeful, and I'm not usually hopeful, I'm usually the guy who points out everything that could go wrong, the guy who says "yes, but" and then lists seventeen concerns, but there's something about Woodward that makes it hard to be cynical. Maybe it's the laugh, maybe it's the stories about his sons, maybe it's the fact that he built his career on understanding what people actually need, which is such a simple idea that most tech executives never think of it.
"I'm interested in building tools for people. I'm not interested in replacing them."
It's a modest statement, the kind of thing you might say and not mean, but I've spent seventy-two hours researching this man, and I think he means it, I think when he says he wants to build tools for people he's thinking about the grandparents who couldn't make the drive to the high school basketball game, he's thinking about the kid who learns better by listening, he's thinking about the 28-minute chore app and the eyes getting big and the TPUs almost melting. The kid from Oklahoma might just figure this out; stranger things have happened; and you should have seen how big his eyes got.
This piece draws from extensive research including CNBC's December profile, TIME's AI 100 list, Semafor's leadership reporting, Sequoia's Training Data podcast, the Creator Economy interview, Tulsa World's early coverage, Google's NotebookLM development blog, and Google I/O 2022, 2024, and 2025. I have never met Josh Woodward. I would buy him coffee if he'd let me. I would also accept if he just wanted to laugh in my general direction.