Google's Search Algorithm
Back in 1996, two Stanford PhD students were frustrated. Larry Page and Sergey Brin weren’t annoyed about homework or campus food — they were irritated by how hard it was to find anything useful on the internet. The web was like a massive library where all the books had been dumped in random piles, with no catalogue system.
Their solution became Google. And the algorithm at its core didn’t just organise information — it rewired how human knowledge moves around the planet.
The Great Web Sorting Challenge
Before Google, finding information online was a frustrating game of hide-and-seek. Early search engines mostly counted keywords — if you searched for “pizza,” you’d get pages that mentioned “pizza” the most times, regardless of whether they actually helped you find a decent slice.
Page and Brin had a different idea. They thought: what if we treated the web like academic research? In universities, the most important papers are the ones other researchers cite most often. So why not apply this to websites? Their breakthrough insight became PageRank — an algorithm that looked at which sites linked to other sites, treating each link as a vote of confidence.
Think of it like a popularity contest, but for usefulness rather than looks. If lots of respected websites linked to your page about pizza recipes, Google’s algorithm reasoned you probably had genuinely good pizza advice. It was as if millions of people were constantly recommending things to each other, with the algorithm tracking all those recommendations.
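To make the “links as votes” idea concrete, here is a minimal PageRank-style sketch in Python. The tiny link graph, the damping value, and the fixed iteration count are illustrative choices, not Google’s actual implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                # ...and passes the rest of its score along its outgoing
                # links: each link is a weighted "vote" for its target.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A page with no links spreads its score evenly.
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
        rank = new_rank
    return rank

# A made-up four-page web for illustration.
web = {
    "pizza-blog":  ["recipe-hub"],
    "recipe-hub":  ["pizza-blog", "food-news"],
    "food-news":   ["recipe-hub"],
    "my-homepage": ["recipe-hub"],
}

for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Run it and recipe-hub comes out on top: three other pages “vote” for it, so the algorithm surfaces it first, without ever reading a single recipe.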
How Algorithms Became Our Information Gatekeepers
Here’s where things get interesting — and worth paying close attention to. When Google’s search algorithm started working well, it didn’t just help people find information. It began shaping what information people could find in the first place.
Imagine you’re looking for information about climate change. Google’s algorithm decides which articles, studies, and opinions appear on your first page of results. Those choices — made by mathematical formulas, not humans — influence what millions of people read, think, and believe. The algorithm becomes an invisible curator in that giant library, quietly deciding which books end up at eye level and which get buried in the basement.
This power grew as Google refined its algorithm. They added hundreds of factors: How recent is the information? How fast does the website load? Does it work on phones? Are people spending time reading it or leaving immediately? Each improvement made results more helpful, but also gave the algorithm more influence over what we see.
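As a rough illustration of how many signals might be folded into a single score, here is a hypothetical weighted-sum sketch. The signal names and weights are invented for illustration; Google’s real factors and their weights are not public.

```python
# Invented signal weights; each signal is assumed to be normalised to 0-1.
WEIGHTS = {
    "link_authority":  0.40,  # PageRank-style vote score
    "freshness":       0.15,  # how recently the page was updated
    "load_speed":      0.15,  # faster pages score higher
    "mobile_friendly": 0.10,  # works on phones?
    "dwell_time":      0.20,  # do visitors stay, or leave immediately?
}

def score(page_signals):
    """Combine the weighted signals into one ranking score."""
    return sum(WEIGHTS[name] * page_signals.get(name, 0.0)
               for name in WEIGHTS)

page = {"link_authority": 0.8, "freshness": 0.3, "load_speed": 0.9,
        "mobile_friendly": 1.0, "dwell_time": 0.6}
print(f"ranking score: {score(page):.2f}")
```

Notice what this structure implies: every weight is an editorial judgement. Whoever sets the numbers decides what “helpful” means.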
The Bright Side of Algorithmic Organisation
Google’s algorithm has done genuinely remarkable things. It made access to information more equal in ways previous generations couldn’t imagine. A student in a rural area can access the same medical research as a doctor at a major hospital. Small businesses can reach customers worldwide. Curious people everywhere can learn almost anything, for free.
The algorithm also got remarkably good at understanding context. Search for “Mercury” and it figures out whether you mean the planet, the element, or the Queen frontman — based on your search patterns and current events. It translates languages, identifies images, and recognises songs from a few hummed bars.
Perhaps most importantly, it made the internet usable. Without algorithmic sorting, the tens of billions of web pages would be completely overwhelming. The algorithm transforms chaos into meaningful order.
The Shadow Side: When Algorithms Make Choices
But here’s the thing about having an invisible curator — sometimes you don’t notice what’s missing from the shelves.
Google’s algorithm, despite its sophistication, isn’t neutral. It reflects the biases in its training data, the assumptions of its creators, and the patterns of the web itself.
Consider how image searches used to work. For years, searching for “CEO” returned mostly images of men, while “nurse” returned mostly women. The algorithm wasn’t intentionally reinforcing stereotypes — it was reflecting patterns it found in existing data. But by surfacing those patterns, it helped perpetuate them.
The algorithm also creates filter bubbles. If you’ve searched for particular kinds of news before, Google might assume you want more of the same. It thinks it’s being helpful by showing you “relevant” content, but it might be narrowing your perspective rather than expanding it.
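A toy sketch can make the filter-bubble mechanics concrete: re-rank the same results differently for each user by boosting topics they clicked before. The boost rule and example results are invented for illustration.

```python
from collections import Counter

def personalise(results, click_history, boost=0.3):
    """Boost results whose topic the user has clicked on before."""
    seen_topics = Counter(click_history)
    def adjusted(result):
        title, topic, base_score = result
        return base_score + boost * seen_topics[topic]
    return sorted(results, key=adjusted, reverse=True)

results = [
    ("Climate sceptic op-ed", "sceptic", 0.70),
    ("IPCC summary report",   "science", 0.70),
    ("Local weather article", "weather", 0.65),
]

# Same query, two different histories, two different "top" results.
print(personalise(results, ["science", "science"])[0][0])
print(personalise(results, ["sceptic"])[0][0])
```

Two users type the identical query and see different worlds at the top of the page, each one plausibly “relevant”.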
The Feedback Loop
Here’s the most important dynamic: Google’s algorithm doesn’t just reflect what information exists — it influences what information gets created.
Website owners study the algorithm, trying to decode what will make their content rank higher. This creates a feedback loop in which content gets optimised for algorithmic preferences rather than human needs.
Think about clickbait headlines. They exist because algorithms noticed people click on dramatic, emotional titles more often. So content creators learned to write “You Won’t Believe What Happened Next!” instead of “Study Shows Modest Improvement in Battery Technology.” The algorithm rewarded engagement, and content evolved to match.
This same dynamic affects news, entertainment, and education — pretty much everything online. The algorithm’s preferences gradually reshape the information landscape, like a river carving new channels through bedrock over time.
Search as Society’s Operating System
When Page and Brin set out to organise the world’s information, they probably didn’t realise they were building something like an operating system for modern society. Just as your computer’s OS decides which programs can run and how they access resources, search algorithms increasingly determine which ideas get attention and which fade into obscurity.
The same algorithmic approach that helps a parent find reliable medical information can also amplify misinformation if designed poorly. The technology that connects researchers across continents can also create echo chambers that divide communities.
Understanding this matters because algorithms are everywhere now, not just in search. Social media feeds, shopping recommendations, job matching, loan approvals, and hiring decisions increasingly rely on algorithmic sorting. Google’s search algorithm was just the beginning of a much larger transformation in how information moves through society.
From PageRank to Your Prompts
AI is running an algorithm, not thinking. Know what it treats as “votes.”
PageRank works because it treats links as votes — signals that carry meaning. The algorithm doesn’t read the pages or understand the content. It counts signals, weights them, and surfaces the highest-scoring results. It is, at its core, a very sophisticated counting machine.
Large language models — the AI you’re using at work — work on a similar principle. They don’t read your request and think about it the way a colleague would. They identify patterns in your input, match those patterns against what they’ve learned from training data, and generate the statistically most likely response. They are very sophisticated pattern-completing machines.
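One way to see the “pattern-completing” point: even a toy next-word predictor built from nothing but word counts picks the statistically most likely continuation. Real language models use neural networks over tokens rather than count tables, but this sketch captures the counting spirit.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after the given word."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" (seen twice after "the")
```

The predictor has no idea what a cat is. It has simply counted that “cat” follows “the” more often than anything else, and completes the pattern.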
This matters for how you write prompts, because AI has learned to weight certain signals heavily. Specific, structured inputs get more reliable outputs than vague, open-ended ones. Clear role definitions (“you are a senior editor reviewing a press release”) shape outputs more reliably than general instructions (“be professional”). Explicit constraints produce more consistent results than implicit expectations.
You’re not convincing AI to do something. You’re providing the signals that its algorithm will weight. The question to ask is: what signals am I giving, and how will they be weighted?
Before sending a prompt: Identify the two or three most important signals you want AI to weight — the role, the format, the constraint, the audience. Make those explicit. Everything else is noise that the algorithm will resolve on its own, using defaults you didn’t choose.
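As one way to put that checklist into practice, here is a small sketch that assembles a prompt with its key signals stated rather than implied. The field names and wording are illustrative, not a prescribed format.

```python
def build_prompt(role, task, constraints, audience):
    """Assemble a prompt whose key signals are explicit, not implied."""
    parts = [
        f"Role: {role}",
        f"Task: {task}",
        "Constraints: " + "; ".join(constraints),
        f"Audience: {audience}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="a senior editor reviewing a press release",
    task="Tighten the draft below without changing its facts.",
    constraints=["maximum 200 words", "no marketing superlatives"],
    audience="trade journalists",
)
print(prompt)
```

Everything named in that template is a signal you chose. Anything you leave out, the model fills in with defaults you didn’t choose.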