The Oldest Algorithm
What AI Dating Gets Wrong About the Most Sophisticated Matching System Ever Built
The Long Game | Dr. Venki Padmanabhan | February 2026
A new app called Fate has arrived to save us from the indignity of choosing our own partners. An AI personality interviews you, runs your data through matching algorithms, and presents five candidates based on what the Guardian describes as “observable complementary language patterning.” No swiping involved. The machine knows who you should love.
When I read this, I did not feel the neo-Luddite despair that the columnist Van Badham describes. I felt something closer to recognition. Because what Silicon Valley is selling as the future of romantic matching, my civilization has been running for three thousand years. We just called it arranged marriage. And the original version was better.
• • •
I am Tamil, from a family with roots in Madurai and Dhanbad. I have been married for thirty-five years to a woman I met through the arranged marriage system. Three children came from that marriage—a cancer doctor, a tax professional, a software engineer. By any honest output metric, the system worked.
But when people hear “arranged marriage” in the West, they imagine something rigid and transactional. Parents picking spouses from a catalog. Families trading daughters for dowry. That image is not entirely wrong historically, but it fundamentally misunderstands what the system became at its best, and what made it work.
At its best, arranged marriage was a deployment of distributed human intelligence.
The system ran on aunties. I use the word with precision and without irony. An experienced intermediary—your mother’s cousin’s colleague’s mother-in-law, your father’s friend from the temple committee, the woman three houses down who has known both families since before the children were born—carried in her head a multi-dimensional model of two families that no database field could capture.
She knew temperament. She knew values. She knew financial stability—not from a salary field on a form, but from watching how the family handled the father’s early retirement, how they responded when the business struggled, whether they donated to the temple renovation or merely attended the ceremony. She knew ambition level—not from a dropdown menu but from watching the boy for fifteen years and knowing whether his quietness was contentment or suppression. She knew how the girl handled conflict, because she had seen her at three family weddings where things went sideways. She knew whether the horoscope concern was genuine devotion or a negotiating tactic, because she had watched the mother deploy it differently with different families.
This woman was running a pattern-matching algorithm across hundreds of variables, most of them invisible to any system that operates on structured data. She was reading between the lines of what both families were really saying. She was doing what the best manufacturing diagnostician does on a factory floor—detecting signals that the instruments cannot measure, interpreting context that the dashboard does not display, and making judgment calls that require decades of accumulated, embodied, unreplicable knowledge.
That is tacit knowledge operating at seventy percent cognitive deployment. And no app has come close to reproducing it.
• • •
What happened next is a story I have seen play out in every industry I have worked in.
The internet arrived, and with it a conviction that any process involving human judgment could be improved by replacing that judgment with data. Matrimony platforms—Bharat Matrimony, Shaadi.com, Tamil Matrimony, and dozens of others—took the most sophisticated matching system in human civilization and reduced it to a search engine.
Height. Salary. Caste. Subcaste. Star sign. Complexion. Vegetarian or non-vegetarian. Professional qualification. City of residence. Willingness to relocate.
These are the fields. These are what the algorithm matches on. And if you think this sounds familiar—if it reminds you of the way companies reduce a worker’s value to a job title, a pay grade, and a set of certifiable skills while ignoring the judgment, creativity, and contextual intelligence they carry—you are seeing exactly what I see.
It is the false baseline, applied to the most consequential decision a family makes.
The platform takes what is measurable and assumes that is the full picture. It treats the structured data as the totality of what matters. And then it optimizes against that impoverished representation, producing matches that look perfect on paper and feel wrong in every room.
My family tried this recently. A close relative—a brilliant young doctor in his late twenties, the kind of person any family should be proud to welcome—spent four months on one of these platforms, including their premium tier with live human counselors. The counselors were working from the same flat profiles the algorithm uses. They knew neither family. They had no accumulated context, no network memory, no ability to read what was not written down.
Four months. Certified disaster. We stopped to preserve our sanity.
The platform did not fail because the technology was primitive. It failed because the technology was solving the wrong problem. It was optimizing for data compatibility in a domain where the decisive variables are not data. They are judgment, context, cultural fluency, and the kind of knowledge that lives in a person’s body and accumulated experience—not in a database.
• • •
And now comes Fate, which proposes to solve the failure of algorithm-driven matching with more algorithm. The app does not merely filter profiles. It conducts AI interviews, builds psychological models of each user, and selects your five best matches itself. No browsing. No swiping. No human judgment at any point in the process.
If the matrimony platforms stripped out the auntie’s network and replaced it with a search engine, Fate strips out the last remaining human in the loop—the user—and replaces her too. It is the final step in a familiar sequence: suppress the human intelligence, measure only what is quantifiable, then automate against the suppressed baseline and declare it progress.
I have watched this sequence in manufacturing for thirty-six years. At General Motors, I watched companies automate processes that a properly deployed workforce could have improved for free—because nobody had bothered to ask the workers what they knew. At Royal Enfield in Chennai, I inherited a workforce the organization had written off, deployed their intelligence instead of replacing it, and watched profits grow twentyfold. Today I manage a production facility in Ohio where I see both patterns every day.
The pattern is always the same. An organization operates its people at a fraction of their cognitive capacity. Then it measures their output at that suppressed level. Then it builds a business case for automation that compares the machine’s capability to the human’s suppressed capability. And then it declares the human obsolete.
Fate is doing this to love.
• • •
Van Badham, in her Guardian column, sees two options: we either love the robot or we let the robot do the loving for us. This is the same binary that dominates the AI conversation in every industry. You are either for automation or against it. You either embrace the machine or you are a Luddite.
There is a third position. I have spent my career arguing for it in manufacturing, and it applies here with uncomfortable precision.
Do not automate the auntie away. And do not leave the auntie working off a paper notebook and a landline.
Give her the tool.
Imagine an auntie in Chennai—a real one, a woman who has tracked the Ramachandran family for three generations and knows that the boy says he wants a career-focused girl but his mother actually wants someone who will move back to India within five years. She can read between the lines of what both families are really saying because she has been reading those lines for thirty years.
Now give her Claude on her iPhone.
Suddenly her reach extends from her temple network in Mylapore to the Tamil diaspora across four continents. She can keep detailed notes on fifty families simultaneously. She can cross-reference the Subramaniam daughter in Dallas—who mentioned over coffee six months ago that she values kindness over ambition—with the Venkataraman son in London, whom his mother describes as driven but who, according to his college roommate's mother, is actually the gentlest person in any room. She can draft a tactful WhatsApp message to the girl's father that threads the needle between interest and pressure. She can even check the horoscopes if that matters to the families, without pretending that the horoscope is the decision.
Her judgment stays human. Her knowledge stays tacit. Her reading of what a family really means when they say “we are flexible on location” remains hers—accumulated over decades, embodied in her experience, irreducible to any language model’s pattern matching.
The technology just lets her deploy that intelligence at a scale she never could before. It amplifies. It does not replace.
That is what I call the Twin Helix. Human intelligence and technological capability, ascending together. Neither one dominant. Neither one disposable. Each making the other more powerful than either could be alone.
• • •
Here is what unsettles me most about Fate and its successors. It is not that the technology is bad. It is that the technology assumes the problem is human judgment, when the problem has always been the systematic failure to deploy human judgment at scale.
The arranged marriage system did not fail because aunties were unreliable. It buckled under the pressures of modernity—geographic dispersion, nuclear families, smaller social networks, the collapse of the multigenerational household that kept the knowledge base intact. The diaspora scattered the nodes. The platform promised to reconnect them with data. But data without judgment is not connection. It is browsing.
What was needed was never a replacement for the auntie. What was needed was an extension of her reach.
This is true on the factory floor. It is true in the hospital, in the classroom, in the small business struggling to compete with algorithmic giants. And it is true in the most intimate and consequential domain of all—the search for the person with whom you will build a life.
Silicon Valley keeps building replacements for human intelligence in domains where what is actually needed is amplification of human intelligence. And then, when the replacements fail—when the autonomous factory goes dark, when the AI-matched couple discovers they have nothing to say to each other—the builders blame the users, or the data, or the previous version of the algorithm. They never blame the premise.
The premise is wrong. It has been wrong in manufacturing for forty years. It has been wrong in online dating for twenty. And it will be wrong in AI-rranged marriage for however long we allow it.
• • •
My wife and I were matched by the oldest algorithm in the world. Two families, connected through a network of people who had known both sides for years, made a judgment that no structured data could have produced. It was not a perfect process. It carried the weight of cultural expectations and patriarchal assumptions that deserve scrutiny and reform.
But the core mechanism—human intelligence, accumulated over decades, deployed in service of a decision that requires reading what cannot be written down—was not the part that needed fixing.
Thirty-five years later, I go to the gym at six in the morning with the woman that network found for me. We do not talk much on the treadmill. We do not need to. There is something to be said for two people who have earned the right to be silent together.
No algorithm produced that silence. No AI could have predicted it. A woman in Madurai who knew both families thought we might work. She was right. She was right because she was operating at full cognitive deployment in a domain where the variables are infinite, the data is unstructured, and the outcome is a human life.
Give her the iPhone. Let her keep the judgment.
The intelligence was already there. It was already paid for. It was three thousand years old and it worked.
Stop trying to replace it. Start trying to amplify it.
Dr. Venki Padmanabhan is Plant Manager at Advanced Drainage Systems with 36 years of manufacturing leadership experience across three continents. He previously served as COO/CEO at Royal Enfield (achieving 20x profit growth) and COO at Ather Energy. His book, Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away, is forthcoming. Subscribe to The Long Game at thelonggameforall.substack.com.