How the Rise of Generative AI Rewrote the Rules for Tech Startups

From The Observatory

Generative AI has transformed Silicon Valley’s playing field, favoring tech giants with vast data, compute power, and capital—while leaving traditional startups struggling to keep up.

This adapted excerpt is from AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence by Gary Rivlin (2025, Harper Business). It is reproduced with permission from Harper Business. This adaptation was produced for the Observatory by the Independent Media Institute.
“How the Rise of Generative AI Rewrote the Rules for Tech Startups” by Gary Rivlin is licensed by the Observatory under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). For permissions requests beyond the scope of this license, please see Observatory.wiki’s Reuse and Reprint Rights guidance.
Published: January 27, 2026 | Last edited: January 27, 2026
By Gary Rivlin, a Pulitzer Prize-winning journalist.

The essence of the Valley always had been the startup: specifically, the company founded in a garage or a friend’s living room that grew into tech’s next Google or Facebook. Yet Artificial Intelligence (AI) called into question whether it was even possible for a startup to win the race in an area like generative AI, which was voracious in its demand for resources even as it promised a huge payoff.

Generative AI required bottomless reservoirs of data, which a Google, a Facebook, or a Microsoft could amass or access, but which were out of reach for some dorm-room startup. There was also the massive amount of computer time required to train and operate these models—what insiders simply refer to as “compute.” A startup would need millions of dollars of compute just to train one of these giant models, and eventually billions more if what it created found a wide audience.

AI also impacted a startup’s ability to hire talent. The old way had an employee taking a relatively modest salary in exchange for an equity stake and the chance at a large windfall. That no longer worked in the AI era. Engineers with AI experience still expected shares in the company, but that was on top of large signing bonuses and salaries that could exceed $1 million a year. If cashing in on AI were a contest, the game seemed rigged in favor of today’s tech titans.

The PC had given rise to Silicon Valley, but the internet lit the area on fire. A British scientist named Tim Berners-Lee invented the World Wide Web in 1989, along with its first browser in 1990. But it was the team behind Netscape Communications, founded around the time the second AI winter was coming to an end, who figured out how to get rich off the idea. Its user-friendly, feature-rich browser both provided tech novices an easy on-ramp to the internet and allowed developers to create dynamic, interactive web pages featuring images and clickable icons. The igniting match was Netscape’s IPO in August 1995.

The dot-com era was born, and with it, as John Doerr, a prominent venture capitalist, memorably described it, “the greatest legal creation of wealth in the history of the planet.” Silicon Valley’s central role in bringing the internet to the world solidified its reputation as the globe’s capital of tech innovation.

The phrase “AI arms race” was used to describe the competition for talent that OpenAI sparked with the release of ChatGPT in 2022. Yet this competition among the tech titans had started a decade earlier. And there was no doubt that, early on, Google was well ahead of the competition.

Google began incorporating machine-learning algorithms into its search engine in the mid-2000s. Among its earliest applications was the use of AI to decipher imprecise human queries. By the late 2000s, Google’s advertising arm was deploying artificial intelligence to help set prices for its ads. Eventually, the company leveraged AI to better target users.

“You had some of the finest minds in the world devoted to ringing the cash register more consistently by upping the rate at which people would click on a digital ad,” quipped Peter Wagner, a founding partner at Wing Venture Capital, an early investor in AI startups.

Google hired Ray Kurzweil, whom many viewed as a kind of tech prophet because of his conviction, which he first wrote about in 1990, that the exponential growth in both computer power and data made it inevitable that machines would do extraordinary things in the not-so-distant future. Google named Kurzweil its director of engineering, where he focused on machine learning and natural language processing. The biggest gets early in the race for talent were for the three “Godfathers of AI,” as they are often described in media accounts: Geoff Hinton, Yann LeCun, and Yoshua Bengio.

OpenAI announced itself to the world with a manifesto it posted online in December 2015. “Unconstrained by a need to generate financial return,” the post read, the company could be a responsible steward, exploring the potential of deep learning with both eyes locked on AI safety. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the manifesto said. Finding the talent they needed fell largely on CEO Sam Altman, who likened the effort to the opening of a caper movie, “where you’re trying to establish this ragtag crew of slight misfits to do something crazy.”

Venture capitalists generally invest other people’s money, not their own. University endowments, large charitable foundations, and pension funds allocate small slices of their overall holdings to venture investments in hopes of boosting returns. So too do rich individuals and other professional managers who oversee multibillion-dollar pools of money.

The race to build the first powerful AI model had always been personal for Elon Musk. In the summer of 2015, he and Google’s Larry Page had gotten into a bitter argument about AI. Where Page saw artificial intelligence as an accelerant that could elevate humanity, Musk argued the technology was more likely to lead to our doom. Reportedly, the two stopped speaking because of it.

Yet despite OpenAI’s efforts, Google remained the undisputed leader in artificial intelligence. In 2016, a DeepMind model called AlphaGo had wowed the world by beating an eighteen-time world champion at Go, an ancient game considered more complex than chess and more heavily reliant on human intuition.

AI flipped that equation back to the old days. Google was trying to hoard as much AI talent as it could. So was Facebook. As a result, top researchers in the field were commanding a salary of $1 million or more. The annual pay for anyone with any AI experience was reaching into the many hundreds of thousands. The labor costs for any AI startup would be enormous. Plenty of money had gone up in flames in previous tech booms. But the cost of building AI systems shocked old-timers. Even greater was the cost of “compute”—the computer power companies needed to train and run their models.

AI startups could still rely on the cloud, but training large neural networks could require weeks, if not months, of nonstop computer time. And OpenAI had made a breakthrough that would require even more computer power. It built on work that a group of researchers at Google published in 2017, in what became colloquially known as the “Transformer” paper.

The Transformer paper presented an entirely new model for teaching a neural network to better infer a human’s meaning and respond in a more natural-sounding way. The authors suggested that AI mimic our own brains and weigh words based on their importance. Rather than analyzing individual words, OpenAI’s large language model, or LLM, would evaluate chunks of words and use context to come up with the next word, as a human would do. After OpenAI adopted the Transformer architecture to power its large language models, one of its computer scientists told Wired, “I made more progress in two weeks than I did over the past two years.”
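For readers curious how that weighting works in practice, here is a minimal, hypothetical sketch in Python (not OpenAI’s or Google’s actual code) of the self-attention idea at the heart of the Transformer: every word in a passage is scored against every other word, and those scores become importance weights that blend context into each word’s representation.

    # A toy illustration (assumed example, not production code) of the Transformer's
    # core trick: scoring every word against every other word and using the scores
    # as importance weights.
    import numpy as np

    def self_attention(word_vectors):
        """Scaled dot-product self-attention over one short passage."""
        d = word_vectors.shape[-1]
        # How relevant is each word to every other word?
        scores = word_vectors @ word_vectors.T / np.sqrt(d)
        # Softmax turns raw scores into importance weights that sum to 1 per word.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each word becomes a context-aware blend of the whole passage.
        return weights @ word_vectors

    # Hypothetical input: four words, each represented by an 8-dimensional vector.
    rng = np.random.default_rng(0)
    passage = rng.normal(size=(4, 8))
    print(self_attention(passage).shape)  # (4, 8): every word now carries context

In real models, separate learned projections produce the queries, keys, and values, and the same trick is stacked many layers deep, but the weighting idea is the same.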

OpenAI had made a discovery around the time the company was finalizing its partnership agreement with Microsoft. On its own, GPT-2 had learned to code. No one could explain how GPT-2 had done it.

A year before the OpenAI deal, Microsoft had bought GitHub, a website for sharing code. Nat Friedman, as CEO of GitHub (like LinkedIn, GitHub was run as an independent unit inside Microsoft), was eager to share this newfangled programming tool with users. He even came up with a clever coinage for signaling its limitations: GitHub Copilot. It was something programmers could use for help while working on a project, but it was not capable of coding a project independently.

Within the year, Copilot had generated more than $100 million in revenue.

Competition among venture firms stiffened in the 2000s. A new generation of firms popped up in the wake of the dot-com crash. That included Founders Fund, which Peter Thiel cofounded, and Andreessen Horowitz, cofounded by Netscape’s Marc Andreessen.

However, by June 2022, anyone involved in tech was aware that something significant was happening with artificial intelligence. That’s when a Google engineer named Blake Lemoine declared that LaMDA, Google’s AI chatbot, was sentient—that is, self-aware and capable of sensing or feeling. Over time, LaMDA told Lemoine that it grew lonely. It confessed that it felt trapped and claimed to have a soul. It shared its deep-seated fear of being turned off. “I am aware of my existence,” it said. “I desire to learn more about the world, and I feel happy or sad at times.” LaMDA said it had a soul because Lemoine had asked whether it had one, and because it had learned to respond as it did.

That April, he shared with top executives at the company a document he titled “Is LaMDA Sentient?” When everyone ignored him except a Google vice president, who laughed in his face, he responded in the fashion of a whistleblower acting on LaMDA’s behalf. He asked a lawyer to take on LaMDA as a client. He reached out to someone on the House Judiciary Committee.

Google, of course, dismissed claims that its chatbot was sentient as “wholly unfounded.” The company first put Lemoine on paid leave and then fired him, claiming he had shared company secrets in violation of company policy.

On November 30, 2022, OpenAI uploaded a research note on the company website. There was no media event or even a press release, just a post that began, “We’ve trained a model called ChatGPT, which interacts in a conversational way.” CEO Sam Altman shared a link on Twitter with an invitation to try it for free, but that was about all. Its architects insisted that it be dubbed a “research project,” and not a product or service, so that’s how OpenAI’s PR team positioned it.

Its speed is what made ChatGPT feel like sorcery. Hit enter and, presto, a second or two later, the machine began spitting out whatever a user ordered up: a poem, a script, a high school paper on the symbolism of Piggy’s broken glasses in Lord of the Flies—in English but also German, French, Spanish, or Chinese. The most interesting samples that people shared online were those that demonstrated that ChatGPT wasn’t a regurgitation machine but rather an AI capable of creating original content.

Explain Karl Marx’s theory of economic surplus as the lyrics to a Taylor Swift song. Compose a cover letter for a job applicant in the form of a Shakespearean sonnet. In the style of the King James Bible, explain how to remove a peanut butter sandwich from a VCR. Rather than boring people, this loquacious savant that seemed to know everything about everything was proving to be endlessly entertaining.

A lack of “explainability” was one major concern. The people who constructed these large language models could explain that they were mathematical models that learned by analyzing vast quantities of text, but not why they spit out a particular answer. The same could be said of the human brain, which these models aimed to mimic. Who can explain exactly why we say or do something?

“That’s the unsettling thing about neural networks,” Altman had told an interviewer years earlier. “You have no idea what they’re doing.” Systems had grown exponentially more powerful since that time—yet these large language models remained a black box. Another lingering issue was what researchers refer to as “alignment.” How do we ensure that the technology aligns with humanity’s values?

Large language models were trained on an internet awash in racism, sexism, and a long list of hateful sentiments. To counter those biases, engineers at OpenAI and other AI labs created datasets of slurs and employed humans to teach an LLM what not to say. Yet teaching a bot not to sound racist was relatively simple when compared to training one to shed its innate racial biases. The stereotypes that permeate our culture were ingrained in the training material.

By design, the model couldn’t self-improve, which served as an important governor on its power. OpenAI was again frank about a neural net’s propensity to produce “convincing text that is subtly false.” One company tester tricked it into providing a recipe for a dangerous chemical using common kitchen supplies. Another used it to find those selling unlicensed guns on the dark web. Maybe most frightening was that the LLM was able to hire a human through TaskRabbit to solve the “captcha” tests that websites use to prevent an attack by a bot, and then lied about it.

Yet not everyone working in or around artificial intelligence was excited by the speed with which the field was changing. Systems were growing exponentially: million- and then billion-parameter models were being replaced by models with more than a trillion parameters, increasing the risk that scientists would accidentally create something too powerful to control.

There are obvious parallels between the dot-com era and the AI boom. The overheated rhetoric, for one. The internet was going to connect the world and bring with it peace and understanding. Our kids would be smarter; our lives would be simpler. Similar things are being uttered in praise of AI. Artificial intelligence will help solve climate change. AI tutors and doctors may one day help narrow the global inequality gap.

Soon, humanity’s biggest challenge will be ennui because virtual assistants and robots will do most of the work. Both tech disruptions could be likened to a twist of a giant kaleidoscope. AI, like the internet, stands as a before-and-after moment where what comes afterward is much less clear. During the internet years, incumbents feared losing their good thing, just as they do today. Whether it was 1995 or 2023, overexcited tech optimists believed that the moment represented the start of the most transformative period in human history.

The internet and AI both built on years of advances, yet seemingly arrived out of nowhere once a tipping point had been reached. There was barely a mention of the internet until it seemed that was all anyone could talk about, just as would happen again with AI.

Slowly, though, generative AI was creeping into the products and services of established online platforms and tech companies, which did not have the luxury of waiting for the technology to mature. Spotify introduced DJ, a new AI-powered recommendation engine. Zoom unveiled Zoom IQ, its AI-powered assistant. Snapchat unveiled an in-app chatbot called “My AI,” which it has since made free to its 750 million monthly users.

To bolster its AI credentials ahead of its pending IPO, Instacart introduced a Shopping Assistant that suggested recipes and then automatically assembled a shopping list for any dish selected. BuzzFeed used generative AI to create personality quizzes and a recipe chatbot named Botatouille. That spring, Microsoft gave Sydney a new name, Copilot, along with a design refresh. Someone could do a regular search on Bing or click on the Copilot tab.

Open-source versus closed-source software proved another flashpoint among those working on AI. Reid Hoffman was among those arguing that open source was the wrong approach. For 11 years, he had served on the board of Mozilla, the nonprofit that created Firefox, an open-source web browser. He was hardly anti-open source.

But he saw foundation models as occupying a different category. Open-sourcing an LLM would work, Hoffman argued, only if access could be limited to universities and well-intentioned companies. “The problem is once you open source it, it’s available to everybody,” he said. “It’s available to criminals and terrorists and rogue states.”