<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Long Game by Dr. Venki Padmanabhan]]></title><description><![CDATA[The Long Game explores why American manufacturing's greatest competitive advantage isn't automation, AI, or robotics — it's the frontline workforce intelligence that's already hired, already trained, and already paid for. ]]></description><link>https://thelonggameforall.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!9sL6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfbaabf1-0b87-496c-8233-2ebaf2f60c80_1280x1280.png</url><title>The Long Game by Dr. Venki Padmanabhan</title><link>https://thelonggameforall.substack.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 10:34:26 GMT</lastBuildDate><atom:link href="https://thelonggameforall.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dr. Venki Padmanabhan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thelonggameforall@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thelonggameforall@substack.com]]></itunes:email><itunes:name><![CDATA[Dr. Venki Padmanabhan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dr. Venki Padmanabhan]]></itunes:author><googleplay:owner><![CDATA[thelonggameforall@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thelonggameforall@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dr. 
Venki Padmanabhan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Seed Money, No Soil]]></title><description><![CDATA[What Happens When We Fund Ambition but Skip Formation]]></description><link>https://thelonggameforall.substack.com/p/seed-money-no-soil</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/seed-money-no-soil</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Tue, 12 May 2026 11:03:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/87c91dee-d81a-4899-893f-7b1ea698ee86_1280x720.png" length="0" type="image/png"/><content:encoded><![CDATA[<div id="youtube2-ucjMKuk52Cc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ucjMKuk52Cc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ucjMKuk52Cc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>Source: &#8220;Getting Hired Is Too Hard, So They&#8217;re Starting Companies Instead&#8221;</em></p><p><em>By Jo Constantz</em></p><p><em>Bloomberg Businessweek, April 14, 2026</em></p><p><em><strong>That&#8217;s the argument: Colleges, employers, and venture funds have formed an accidental coalition&#8212;each withdrawing from formation in their own way&#8212;and together they are producing a generation that skips the very developmental stages that make founders, employees, and citizens worth investing in. They are not launching entrepreneurs. They are extracting children.</strong></em></p><p>An 18-year-old raised $2 million before finishing his first semester at Carnegie Mellon. He joined a fraternity. He enjoyed campus life.
Then investors started committing 10 minutes into calls, and he took a leave of absence in January. Bloomberg reports this as resourcefulness. I read it as a system cannibalizing its own seed corn.</p><p>It gets worse. Two Stanford undergraduates take an AI healthcare idea&#8212;hatched as a class project&#8212;to Y Combinator and begin testing it in hospitals before they reach legal drinking age. Zero years of clinical experience. Zero years of medical training. Zero years of understanding what happens when a system fails and a patient pays the price. And we handed them accelerator backing and hospital access. In what universe is this responsible? In a universe that has stopped asking whether founders have been formed before they are funded.</p><p>I know what formed founders look like. I am a venture advisor at Maniv Mobility, and one of our portfolio companies&#8212;Harbinger Motors&#8212;is what happens when formation precedes funding. CEO John Harris spent years at Boeing as a structures engineer, then moved through Faraday Future&#8217;s battery systems, then Xos Trucks where he invented low-cost battery architecture and held multiple patents, then Anduril where he took a product from prototype to volume production in under three years. His co-founder and CTO Phillip Weicker brought 20 years of battery and drivetrain development from QuantumScape, Coda Automotive, and Canoo. Their COO Will Eberts designed control surfaces and landing gear systems on airplanes still flying today. Their VP of Business Development spent 30 years in chassis and commercial truck bodies, growing Spartan Motors from $10 million in sales to more than $500 million.</p><p>These founders did not hatch a class project and pitch Y Combinator. They spent years inside the systems they are now transforming&#8212;long enough to understand not just what was broken, but why previous attempts to fix it had failed. The result? $363 million raised. A $500 million valuation. FedEx ordering trucks. 
They are building components in-house for $1,500 that tier-one suppliers used to quote at multiples of that. That is not the confidence of youth. That is the authority of formation. And it is the difference between a company that will still exist in 10 years and one that will be a line item in a VC&#8217;s write-off column.</p><p>But formation is not what we celebrate. We celebrate the dropout. The Bloomberg article invokes Steve Jobs, Bill Gates, Michael Dell, and Mark Zuckerberg as proof that skipping formation works. Consider what that mythology actually produced. Zuckerberg built a platform so indifferent to its consequences that it destabilized elections across multiple democracies, amplified genocide in Myanmar, and turned adolescent mental health into a cost of doing business&#8212;and he could not bring himself to take responsibility until Congress physically seated him in a chair. Gates built a monopoly so ruthless the Department of Justice dismantled it. Jobs denied his own daughter and humiliated subordinates as management philosophy. Musk has mass-fired workforces by email, mocked employees publicly, and treated human beings as firmware to be updated or deleted.</p><p>These are not formation success stories. These are formation absence stories&#8212;brilliant minds that were never finished, never formed into the kind of leaders who understand that capability includes how you treat people. And now we are telling 18-year-olds to follow that playbook. We are not just skipping professional formation. We are skipping human formation. And then we celebrate the wreckage as genius.</p><p>The Bloomberg article frames student entrepreneurship as a rational response to a broken entry-level job market. That framing is exactly backwards. Why is the entry-level market broken? Because the same companies these students admire&#8212;and the same AI tools they are building on&#8212;have been systematically hollowing out the starter roles that once formed people. 
The kids are not escaping a broken system. They are being recycled through it. And here is the cruel irony: some of these AI startups will, by design, further eliminate the very entry-level positions that their founders&#8217; classmates are struggling to find. The ouroboros eats faster.</p><p>The colleges are abetting every step of this withdrawal. Entrepreneurship classes once considered niche are now packed with waitlists. On-campus accelerators are at capacity. Johns Hopkins drew more than 860 incubator applications this year, up from 50 five years ago. Rice&#8217;s entrepreneurship enrollment more than doubled in three years. Stanford&#8217;s engineering professors describe students going from classroom to $4 million in three months. This is not education. This is acceleration without foundation. The university&#8217;s job is to form minds&#8212;to teach a student how to think across disciplines, how to tolerate ambiguity, how to distinguish between a clever idea and a durable one. When a university turns its campus into a pitch competition, it is outsourcing its formative mission to the market. And the market does not form people. The market prices them.</p><p>The venture capitalists are the most honest actors in this story, and that should terrify everyone. One founder in the article says it plainly: investors come in at low valuations because the student does not know what a valuation means, and they take a lot of equity early. That is not investment. That is extraction from people who have not been formed enough to recognize they are being extracted from. When a university tells a 19-year-old she is a founder, and a VC offers her $4 million, no one in that chain is asking the question that matters: Has she been formed?</p><p>What we are watching is a three-party withdrawal from formation. Employers withdrew first, hollowing out entry-level pipelines and replacing apprenticeship with algorithms. 
Colleges followed, converting classrooms into launchpads because it is easier to celebrate a fundraise than to measure whether a student learned to think. And venture capital completed the circuit, showing up on campus with term sheets before the students had taken enough courses to understand what they were signing.</p><p>The article quotes a career coach who says that even if a student&#8217;s startup does not pan out, the initiative and resilience it demonstrates can impress prospective employers. This is the language of r&#233;sum&#233; decoration, not formation. It treats entrepreneurship as a signaling device&#8212;a way to stand out in a sea of candidates&#8212;rather than as a discipline that requires deep preparation. We have turned founding a company into a line item on a LinkedIn profile, and we are calling it empowerment.</p><p>One administrator at Rice University offers perhaps the most revealing line in the entire piece: students now view leaves of absence the way traditional students view study abroad&#8212;except instead of traveling and learning, they spend a semester at a hacker house in San Francisco. The institution itself frames dropping out of education as an equivalent educational experience. MIT is reevaluating leave policies. Y Combinator has launched an Early Decision program so students can lock in a spot while still enrolled. These are not reforms. They are infrastructure for extraction&#8212;making it easier to pull unformed talent out of the one institution, however imperfect, that was designed to form it.</p><p>I have spent 36 years in manufacturing&#8212;from GM assembly floors to the Royal Enfield turnaround in India to running a plant today in Wooster, Ohio. The framework I have built over that career uses a simple metaphor: Mud, Water, Sun. Three conditions required for capability to grow. Mud is sanctuary&#8212;the safe space to learn without existential consequences. Water is the ascending challenge that builds capacity. 
Sun is the crucible that tests whether roots hold. A first-semester leave of absence to raise venture capital skips all three. It plants a seed on concrete and calls it a garden.</p><p>I do not blame the students. Avalon Sueiro, the Carnegie Mellon senior building a political campaign simulator, says that even if her idea does not work, at least she has something she cares about on her r&#233;sum&#233;. She is being rational inside an irrational system. The system told her that jobs are scarce, that AI can substitute for experience, and that investors will pay for ambition. She believed it. The system lied.</p><p>The question is not whether these students are talented. They are. The question is whether we are willing to form them before we fund them. Because a founder without formation is not an entrepreneur. A founder without formation is a product&#8212;packaged by a university, priced by a VC, and consumed before she ever had the chance to become what she was capable of becoming.</p><p>That is not the economy working. That is the economy extracting. And the children are paying the price.</p>]]></content:encoded></item><item><title><![CDATA[The $2,000 Nobody Spends: What Ritz-Carlton Understands About Trust]]></title><description><![CDATA[Essay 8 of &#8220;The Evidence They Can&#8217;t Ignore&#8221; &#8212; A Series on Systematic Intelligence Suppression]]></description><link>https://thelonggameforall.substack.com/p/the-2000-nobody-spends-what-ritz</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-2000-nobody-spends-what-ritz</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Sun, 10 May 2026 11:02:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/44a8442b-3191-4526-9491-460085004426_1280x720.png" length="0" type="image/png"/><content:encoded><![CDATA[<div id="youtube2-ku-6Idqdjmw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ku-6Idqdjmw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ku-6Idqdjmw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Last week I showed you what intelligence suppression costs in healthcare: 98,000 preventable deaths a year, endemic nurse burnout, and a system where 40 percent of clinicians don&#8217;t trust management to act on problems they identify.</p><p>This week I want to show you what intelligence <em>deployment</em> looks like when a company designs it from the ground up &#8212; in an industry where the conventional model says frontline workers should follow scripts, not exercise judgment.</p><p>The company is the Ritz-Carlton. The mechanism is a number: $2,000.</p><h2>The Rule</h2><p>Every Ritz-Carlton employee &#8212; every single one, from housekeeper to front desk agent to bellhop to bartender &#8212; has the authority to spend up to $2,000 per guest, per incident, to solve a problem or create a memorable experience.</p><p>No manager approval required. No forms. No bureaucratic escalation. The employee identifies the situation, exercises judgment, takes action, and the organization backs the decision with real money.</p><p>The rule was created in 1983 by Horst Schulze, the Ritz-Carlton&#8217;s founding president. In 1983, $2,000 would have bought a ten-night stay at the club level.
Schulze wasn&#8217;t making a symbolic gesture. He was making an engineering decision about operating system design.</p><h2>The Number Nobody Reaches</h2><p>Here is the detail that most analysts miss when they tell the Ritz-Carlton story, and it changes everything about what the $2,000 actually means:</p><p><strong>The money is almost never spent.</strong></p><p>Ritz-Carlton insiders report that employees rarely come close to the $2,000 limit. The most common service recoveries cost almost nothing &#8212; a handwritten note, a piece of chocolate, a room upgrade that costs the hotel marginal revenue, a staff member driving to a toy store to replace a child&#8217;s lost Thomas the Tank Engine.</p><p>This reveals that the $2,000 is not a spending policy. It is a <em>trust signal.</em> The number communicates to every employee: we believe in your judgment enough to back it with real money. We trust you to make the right call in the moment, without checking with us first.</p><p>And that trust signal produces a cascade of effects that no script, no checklist, and no manager-on-call system can replicate:</p><p><strong>Employees think ahead.</strong> When you trust people to act, they start anticipating. They notice the guest&#8217;s anniversary before the guest mentions it. They spot the toothpaste running low and replace it. They hear the frustration in a voice and intervene before it becomes a complaint. Prevention replaces reaction &#8212; because the system gave them permission to think, not just execute.</p><p><strong>Speed of response matches speed of experience.</strong> Guest experiences happen in real time. A problem that could be resolved in 30 seconds by an empowered front desk agent becomes a 30-minute ordeal when the system requires escalation to a manager. By the time the manager arrives, the guest has written the review.
The Ritz-Carlton system eliminates the latency between seeing the problem and solving it.</p><p><strong>The reinforcement cycle turns.</strong> Employees who exercise judgment successfully develop confidence and skill. They get better at reading situations. Their interventions become more precise. The organization&#8217;s service quality improves not through better scripts but through accumulated human capability &#8212; exactly the way Toyota&#8217;s suggestion system develops workers into better problem-solvers over time.</p><h2>The $250,000 Equation</h2><p>Schulze didn&#8217;t arrive at $2,000 through generosity. He arrived at it through data.</p><p>The Ritz-Carlton calculated that the average lifetime value of a Ritz-Carlton guest is approximately <strong>$250,000.</strong> When you know that a single guest relationship is worth a quarter million dollars over their lifetime, spending $2,000 &#8212; or more often, spending $5 on a piece of chocolate and a handwritten note &#8212; to protect that relationship is not an expense. It is the highest-return investment in the entire operation.</p><p>This is the same logic Zeynep Ton documents in retail: when you calculate the <em>total</em> cost of the low-trust model (lost customers, negative reviews, decreased loyalty, increased marketing spend to replace lost guests), the &#8220;expensive&#8221; empowerment model turns out to be the profitable one.</p><p>The economy hotel chain that requires manager approval for a $20 room credit is not saving money. It is <em>destroying</em> customer lifetime value at a rate that dwarfs the $20. 
But the destruction is invisible because the accounting system tracks the $20 credit, not the guest who never returns.</p><h2>The Contrast: Hospitality&#8217;s Vicious Cycle</h2><p>The Ritz-Carlton is an outlier. The dominant hospitality operating model &#8212; particularly in mid-market hotels, chain restaurants, and fast-food operations &#8212; is the Taylorist model applied to service work.</p><p>The frontline hospitality worker is designed into a script.
The hotel housekeeper follows a checklist of tasks per room. The front desk agent follows a greeting script. The server follows a table-turn procedure. The fast-food worker follows a kitchen protocol. Deviation from the script is a deficiency, not a contribution.</p><p>The results are the same results the suppression model produces everywhere:</p><p><strong>Turnover is catastrophic.</strong> Hospitality turnover runs 70-80 percent annually in many subsectors. Food service is worse &#8212; only 59 percent of food service workers believe their managers lead by example. The industry replaces the majority of its frontline workforce every year.</p><p><strong>The vicious cycle spins.</strong> Low wages attract workers who leave quickly. High turnover means chronic understaffing and undertrained replacements. Undertrained workers deliver poor experiences. Poor experiences reduce revenue. Reduced revenue &#8220;justifies&#8221; cutting labor costs further. Each turn makes the next turn worse.</p><p><strong>Intelligence atrophies.</strong> The experienced housekeeper who knows that the guest in room 412 always wants extra pillows, the bartender who notices a regular seems distressed, the concierge who could create a personalized city tour from a five-minute conversation &#8212; all of this intelligence exists. The system doesn&#8217;t ask for it. And because it doesn&#8217;t ask, it doesn&#8217;t develop. And because it doesn&#8217;t develop, management concludes it doesn&#8217;t exist.</p><h2>The Design Principle</h2><p>Schulze&#8217;s insight &#8212; the one that separates the Ritz-Carlton from the rest of the industry &#8212; is that hospitality is not a script-delivery business. It is a judgment business. Every guest interaction is unique. Every problem is contextual. Every recovery opportunity is time-sensitive. 
You cannot script your way to exceptional service any more than you can script your way to zero defects on a production line.</p><p>What you <em>can</em> do is build a system that:</p><p>1. <strong>Hires for disposition, not just skill.</strong> The Ritz-Carlton selects employees who have a natural orientation toward service, then trains them extensively in the company&#8217;s values and methods. (Toyota does the same thing. Costco does the same thing. The pattern is universal.)</p><p>2. <strong>Trains deeply and continuously.</strong> Every new Ritz-Carlton employee undergoes rigorous onboarding in the company&#8217;s Gold Standards and service philosophy. Training is ongoing, not one-time. The investment in capability is the precondition for the trust.</p><p>3. <strong>Empowers at the point of contact.</strong> The $2,000 rule is the visible expression, but the principle extends throughout the operation. Employees are encouraged to take ownership of the guest experience, not just execute their assigned function.</p><p>4. <strong>Recognizes the worker as a professional.</strong> Ritz-Carlton employees are called &#8220;Ladies and Gentlemen serving Ladies and Gentlemen.&#8221; This is not corporate jargon. It is a philosophical statement about the dignity and capability of the frontline worker &#8212; the precise opposite of Taylor&#8217;s ox.</p><h2>The AI Bridge</h2><p>Hospitality AI is being deployed for automated check-in, chatbots, dynamic pricing, and guest preference tracking. 
Most of this technology is designed to <em>reduce</em> human contact &#8212; to make the guest experience more efficient by removing the person from the interaction.</p><p>The Ritz-Carlton model suggests the opposite approach: use technology to make human contact <em>more intelligent</em>, not less frequent.</p><p>What if every frontline hospitality worker &#8212; not just those at luxury properties &#8212; had access to AI that flagged guest preferences, identified recovery opportunities, and suggested personalized touches? What if the Holiday Inn housekeeper had the same situational awareness that the Ritz-Carlton concierge develops through years of experience, delivered through an AI tool on their phone?</p><p>The technology to do this exists today. What doesn&#8217;t exist, at most hospitality companies, is the operating model that trusts the housekeeper to act on the information. The AI is useless if the system says: &#8220;Don&#8217;t think. Follow the checklist. Escalate to your manager.&#8221;</p><p>AI plus the Ritz-Carlton operating model is a revolution in hospitality. AI plus the script-and-checklist model is a faster way to deliver mediocre service.</p><p>The $2,000 was never about the money. It was about the system&#8217;s answer to a single question: do you trust your people to think?</p><p><em>Next week: &#8220;The Shelf That Thinks&#8221; &#8212; Retail has the highest turnover, the lowest engagement, and the most visible proof that treating workers as a cost produces exactly the results you&#8217;d expect. Costco figured this out. The rest of the industry is still catching up.</em></p><p><strong>Dr. Venki Padmanabhan</strong> is a plant manager with 36 years of global manufacturing leadership experience, including executive roles at GM, Chrysler, Mercedes-Benz, Royal Enfield, and Ather Energy. He holds a PhD in Industrial Engineering from the University of Pittsburgh. 
He is the author of the forthcoming <em>Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away</em> and co-founder of the Capability Capital Institute. He writes The Long Game at thelonggameforall.substack.com.</p>]]></content:encoded></item><item><title><![CDATA[Your Voice Is Fine. It’s Your Back That’s Breaking.]]></title><description><![CDATA[Psychological safety gave workers a microphone. It forgot they also have a body, a family, a bank account, and a clock.]]></description><link>https://thelonggameforall.substack.com/p/your-voice-is-fine-its-your-back</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/your-voice-is-fine-its-your-back</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Thu, 07 May 2026 11:02:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fdf508aa-8511-49e4-b85d-58e45347435c_1280x720.png" length="0" type="image/png"/><content:encoded><![CDATA[<div id="youtube2-Rg1ncqh0Fl8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Rg1ncqh0Fl8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Rg1ncqh0Fl8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>Reactive essay. Source: &#8220;Psychological Safety Isn&#8217;t Enough &#8212; Employees Need Consequence Safety Too&#8221; &#8212; Maria Papacosta, Fast Company.</em></p><p>By Venki Padmanabhan &#8226; The Long Game</p><p>Maria Papacosta wrote a sharp piece in <em>Fast Company</em> this week arguing that psychological safety, as practiced in most organizations, stops too soon.
Leaders invite people to speak up, she says, but fail to protect them from the quiet career consequences that follow &#8212; the skipped invitations, the downgraded assignments, the slow social suffocation that punishes candor without ever producing a memo. She calls what&#8217;s missing &#8220;consequence safety.&#8221;</p><p>She&#8217;s right. And she doesn&#8217;t go far enough.</p><p>Notice what&#8217;s still centered in this conversation: the voice. The mouth. The act of speaking. The whole discourse around psychological safety &#8212; even this welcome extension of it &#8212; remains locked inside a single channel of human expression. It addresses the anxieties of people whose basic dignities are already intact: predictable schedules, functioning joints, enough income to absorb a bad quarter. For those workers, the primary workplace risk is social discomfort.</p><p>For the sixty million Americans who manufacture, transport, serve, clean, build, and maintain the physical infrastructure of this economy, the question was never &#8220;Can I speak up?&#8221; The question was always &#8220;Does anyone see me?&#8221; Psychological safety is a white-collar luxury dressed up as a universal principle. What frontline workers need first is not a protected voice. It is a protected life &#8212; their time, their health, their families, their economic futures. Until an organization provides that foundation, asking workers for discretionary effort or frontline intelligence is extraction, not partnership.</p><p><strong>That&#8217;s the argument.</strong></p><p style="text-align: center;">* * *</p><p>I want to tell you about two people I worked with on the Trim 1 floor at GM&#8217;s Lansing Grand River plant.</p><p>Both had close to thirty years on the line. 
Both carried the accumulated inventory of three decades of production work in their bodies &#8212; a wide variety of ailments, injuries, and physical constraints from keeping pace at over fifty jobs an hour as vehicles rolled in from the paint floor. They worked three stations apart on the same trim line. They had built, over those thirty years, something that looked less like a professional relationship and more like a covenant.</p><p>When she fell behind, he caught her up. When he suffered a health setback and couldn&#8217;t come in, she was at the committeeman&#8217;s office, arguing for the committeeman to fight management on his behalf. They watched over each other the way people watch over each other when institutions cannot be trusted to do it.</p><p>The worst nights were the ones I still carry. She would come in ailing &#8212; barely able to hold pace, clearly in pain, but present &#8212; because missing meant discipline points, and enough discipline points meant suspension, and suspension meant losing income she could not afford to lose. He would watch her from three stations down with an expression I can only describe as empathy mixed with dread. Not able to help enough. Not able to make her go home. Just watching, hoping she would last the night.</p><p>Tell me about psychological safety on that trim line. Tell me what &#8220;consequence safety&#8221; means when the consequence isn&#8217;t a skipped invitation to a strategy meeting but a suspension that breaks the month&#8217;s budget. Tell me what voice protection offers the worker who isn&#8217;t afraid to speak &#8212; who has nothing left to lose by speaking &#8212; but whose body is being consumed at fifty jobs an hour regardless of what she says.</p><p style="text-align: center;">* * *</p><p>At the Capability Capital Institute, we call the foundational layer of organizational obligation <em>Sanctuary</em>. Not a feeling &#8212; a covenant.
It operates across four dimensions that psychological safety has never bothered to address.</p><p><strong>Time. </strong>The most intimate resource a human being possesses. When an organization controls your schedule unpredictably &#8212; when you learn your shift forty-eight hours before it starts, when mandatory overtime swallows the weekend you&#8217;d planned with your family &#8212; it doesn&#8217;t just inconvenience you. It colonizes your life. Most companies wouldn&#8217;t dream of telling a vendor &#8220;we&#8217;ll let you know Thursday what we need delivered Saturday.&#8221; They do it to workers every week.</p><p><strong>Health. </strong>Not the fruit bowl in the break room. Not the ergonomics poster nobody reads. Health means the organization does not extract physical or psychological well-being as an unpriced input to production. Psychological safety asks, &#8220;Can you speak without fear?&#8221; Sanctuary asks, &#8220;Will you leave here whole?&#8221;</p><p><strong>Love. </strong>The dimension that makes corporate leaders most uncomfortable, which is exactly why it matters. When an organization structures work so that a worker&#8217;s bonds &#8212; to family, to community, to the people who depend on them &#8212; systematically fray, it is extracting love as fuel. The third-shift operator&#8217;s marriage is not under strain because of &#8220;personal problems.&#8221; The organization made structural choices that put it there.</p><p><strong>Wealth. </strong>Not competitive pay. Wealth means the worker&#8217;s trajectory is pointed upward &#8212; that the job doesn&#8217;t just pay bills today but builds capacity for tomorrow. That the gap between what the frontline produces and what the frontline takes home doesn&#8217;t widen every quarter while the C-suite celebrates efficiency gains.</p><p style="text-align: center;">* * *</p><p>Now re-read Papacosta&#8217;s article. 
Her analyst who challenged the VP&#8217;s rosy forecast and got quietly sidelined &#8212; yes, that&#8217;s a failure of consequence safety. But that analyst had a predictable schedule, a functioning 401(k), a body not being ground down at fifty cycles an hour, and enough income to absorb the career turbulence. The analyst had <em>reserves.</em></p><p>The frontline worker who speaks up about a safety hazard and gets moved to the worst shift rotation doesn&#8217;t suffer a career setback. She suffers a <em>life</em> setback. Her childcare arrangement collapses. Her sleep breaks. Her marriage absorbs another hit. The consequences aren&#8217;t professional &#8212; they&#8217;re metabolic, relational, financial.</p><p>Sanctuary is prerequisite, not reward. You don&#8217;t earn your way into having your time respected or your body protected. These are the conditions under which human beings can function. An organization that skips Sanctuary and jumps straight to psychological safety &#8212; or worse, to &#8220;empowerment&#8221; and &#8220;engagement&#8221; &#8212; is asking people to bring their brains to work while systematically neglecting their bodies, their families, and their futures. It is building on sand.</p><p style="text-align: center;">* * *</p><p>I don&#8217;t fault Papacosta for writing about psychological safety&#8217;s limits. Someone needed to say that permission without protection is theater. She said it clearly and well.</p><p>But the theater runs deeper than she imagines. The whole production &#8212; the surveys, the workshops, the TED talks about vulnerability &#8212; is staged for workers whose bodies aren&#8217;t on the line. It assumes a worker who sits in a climate-controlled room, earns enough to absorb a bad quarter, and whose primary workplace risk is social discomfort.</p><p>I think about the two people on my Trim 1 floor. Thirty years in. Bodies bearing the full compound interest of that investment. 
Watching over each other from three stations apart because the institution between them &#8212; the one that had extracted thirty years of their lives &#8212; had never once asked whether their backs would hold.</p><p>For the worker whose back is breaking, whose schedule is chaos, whose paycheck doesn&#8217;t stretch, whose family pays the price for the organization&#8217;s structural choices &#8212; voice is the last thing they need protected.<strong> First, protect the life.</strong></p><p><em>Venki Padmanabhan is Plant Manager at Advanced Drainage Systems in Wooster, Ohio, and founder of the Capability Capital Institute. He writes The Long Game, a Substack publication on manufacturing, leadership, and the deployment of frontline intelligence. His book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away is forthcoming.</em></p>]]></content:encoded></item><item><title><![CDATA[The Godfather’s Missing Floor]]></title><description><![CDATA[Geoffrey Hinton Sees the Endgame. He&#8217;s Stepping Over the Bodies to Get There.]]></description><link>https://thelonggameforall.substack.com/p/the-godfathers-missing-floor</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-godfathers-missing-floor</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Tue, 05 May 2026 11:02:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/49eafed7-3eb0-4c2b-a28c-d349de3067c0_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-cowDHAHNpP4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;cowDHAHNpP4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/cowDHAHNpP4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p><em>Reacting to: &#8220;The &#8216;Godfather of AI&#8217; says Big Tech is only focused on short-term profits &#8212; and it&#8217;s an existential problem,&#8221; by Lila MacLellan, Fortune, March 2026.</em></p><p>There is a defect in injection molding called tiger stripes.</p><p>It happens when the polymer flow hesitates inside the mold &#8212; a differential in speed between the leading edge and the wall creates faint streaks on the surface of the part. On an Engel molding machine running bumper fascias, the stripes are subtle. You can feel them before you can see them. A slight texture where the surface should be glass-smooth. If nobody catches it, the part looks fine. It passes inspection. It moves to paint.</p><p>And then a whole batch comes out of the paint oven and the stripes are magnified &#8212; baked in, visible, irreversible. Every fascia in the lot is scrap. The kit assembly line feeding Mercedes-Benz in Vance, Alabama stops. Mercedes charges ten thousand dollars a minute of downtime. 
A defect that could have been caught by one pair of trained hands at the molding press just became a six-figure catastrophe at a final assembly plant sixty miles away.</p><p>I know this because I ran plants that fed that line. I watched operators learn to catch tiger stripes by running a palm across the fascia before it left the press &#8212; not because the spec sheet told them to, but because they had been there long enough to know what the paint oven would do to a part their fingers told them was wrong. That knowledge was not programmed. It was not algorithmic. It was formed &#8212; slowly, over years, through repetition and error and the daily discipline of caring about what your hands are telling you.</p><p>Geoffrey Hinton has never stood at that press. And that is the problem with his warning.</p><p>Hinton &#8212; the Nobel laureate, the man who built the neural networks that make modern AI possible &#8212; walked away from Google because his conscience demanded it. He told Fortune this month that tech companies are chasing short-term profits, that researchers are solving curiosity puzzles instead of asking what happens to humanity, and that nobody with actual power is thinking about the endgame.</p><p>All true. All incomplete.</p><p>Because Hinton sees two risks: bad actors using AI for malicious purposes, and AI itself becoming a bad actor once it achieves superintelligence. Both are legitimate. Both deserve urgency. But there is a third risk Hinton never names, and it is the one that is already killing capability on every factory floor in America.</p><p>It is not the risk of AI gone rogue. It is the risk of AI deployed as ideology &#8212; as the justification for treating human intelligence as a cost to be eliminated rather than an asset to be compounded.</p><p>This risk does not require superintelligence. 
It requires only the quarterly logic already operating in every boardroom: labor is a depreciating asset, automation is the replacement, and the faster you execute the swap, the more value you create for shareholders. That is not a plan. That is a theology. And it is causing damage right now. This quarter. On my floor.</p><p><strong>That&#8217;s the argument.</strong></p><p><strong>The missing middle.</strong></p><p>Hinton worries about extinction in twenty years. I worry about the operator who won&#8217;t be there in three.</p><p>The woman who catches tiger stripes at the molding press is not in Hinton&#8217;s framework. She does not appear in his two-risk model. She is not a bad actor and she is not a superintelligence. She is a human being whose fingers carry thirty years of thermoplastic memory, and whose plant manager is right now being asked by corporate to evaluate whether a vision system could replace her.</p><p>The vision system will catch some defects. It will miss the ones that require knowing what the paint oven does to a surface hesitation mark at 190 degrees. It will miss the ones that require a palm, not a pixel. And by the time the company discovers what it lost, the woman will be gone, her knowledge will be gone, and nobody will be left who remembers what tiger stripes feel like before they become visible.</p><p>That is the midgame Hinton stepped over. The fifteen-to-twenty-five-year window where the actual damage is being done to actual workers, right now. He is worried about the endgame while the midgame is destroying the capability that could make AI work.</p><p><strong>Mothers and fathers.</strong></p><p>Hinton&#8217;s proposed solution is revealing. He wants AI to develop something like maternal instinct &#8212; a built-in impulse to protect and care for humans the way a mother protects a child.</p><p>I understand the appeal. And maybe he is right. 
Maybe AI can learn to mother &#8212; to protect, to nurture, to cushion the fall.</p><p>But mothering alone does not produce capability. It produces dependence. Ask any parent. The mother comforts. The father builds. The mother says you are safe. The father says you are not ready yet &#8212; and here is what you must learn before you will be.</p><p>What I am describing is the father&#8217;s work. It is architectural. It must be built. It requires leaders who are willing to invest in formation before they invest in automation &#8212; who will hire the nineteen-year-old not because they need another body on the line but because in five years they need someone who can feel tiger stripes with their hands. That is not maternal instinct. That is a builder&#8217;s discipline.</p><p>We may yet get AI mothers. Hinton may be right about that. But without manufacturing fathers &#8212; leaders willing to build the conditions, the safety to fail, the development pathway, the real-world crucible of production &#8212; the mothering has nothing to protect. You cannot nurture a capability that was never formed.</p><p>The operator at the Engel press is not a baby. She is an adult whose intelligence was formed over decades and has never been asked for in any strategic plan. Hinton cannot see her because his framework has no floor. It has a lab and a boardroom and an apocalypse. The space in between &#8212; where the actual work happens, where the actual intelligence lives &#8212; is invisible to him.</p><p><strong>The best defense Hinton never considered.</strong></p><p>Here is the part that should keep Hinton up at night &#8212; not because it is frightening, but because it is hopeful, and he missed it.</p><p>The formation he cannot see is not just a manufacturing argument. 
It is the best defense against the very Armageddon he fears.</p><p>Hinton worries that superintelligent AI will one day act against human interests, and that no one will be skilled enough to notice until it is too late. But what if the answer is not to build maternal instinct into the machine? What if the answer is to build formed humans who are capable of directing it?</p><p>A workforce that has been trained to read a process &#8212; to feel the tiger stripes, to hear the press cycling wrong, to notice the pattern before the dashboard &#8212; is a workforce that will also notice when the AI starts doing something it should not. The operator who catches what the vision system misses is exactly the person who will catch what the algorithm misses. The judgment that detects a subtle defect in a bumper fascia is the same judgment that detects a subtle drift in an autonomous system.</p><p>Formation does not just protect the product. It protects us.</p><p>Strip that away &#8212; replace every formed human with an unquestioning operator who trusts the screen &#8212; and you get exactly the vulnerability Hinton fears. Not because the AI became too powerful. Because the humans became too passive to question it. The Armageddon is not the machine rising. It is the human capacity to challenge the machine being allowed to atrophy.</p><p>Invest in the middle, and you build the safety architecture Hinton is looking for &#8212; not inside the machine, but inside the people who use it. That is cheaper than maternal instinct. It is more reliable. And it is available right now, on every factory floor in the country, waiting to be asked for.</p><p><strong>The floor Hinton has never visited.</strong></p><p>I have enormous respect for Geoffrey Hinton. The man bet his reputation on a warning nobody wanted to hear. That takes courage.</p><p>I do not pretend to understand everything he sees in the technology. My first cousin, Ramanathan V. 
Guha, might &#8212; he spent a decade with Doug Lenat teaching machines common sense, and went on to become a Fellow at Google, a technical advisor at OpenAI, and now a Technical Fellow at Microsoft. I am not that person. But I have spent thirty-six years standing where the knowledge lives, watching people do things no machine has learned to do. That is not a lesser intelligence. It is a different one. And it is the one being destroyed while the Nobel laureates debate the endgame.</p><p>Courage without a floor is philosophy. And philosophy, however brilliant, does not catch the defect before it reaches the paint oven.</p><p>The third risk &#8212; the one Hinton does not name &#8212; is not that AI will become too powerful. It is that we will use AI as the excuse to stop investing in the people who make it useful. The operator whose hands know what the camera cannot see. The technician who hears the press cycling a half-second slow. The line lead who notices a new hire struggling and pulls them aside before the mistake becomes a customer problem.</p><p>Those people are already paid for. Their intelligence is already there, waiting. The question is whether anyone will ask for it before it is too late.</p><p>Hinton sees the endgame. He is right to worry.</p><p>But the bodies he is stepping over are not casualties of superintelligence. They are casualties of a management theology that decided human intelligence was a cost, not an asset, long before the first neural network learned to dream.</p><p>Start with the floor. 
The endgame can wait.</p><p><em>That is the Long Game.</em></p>]]></content:encoded></item><item><title><![CDATA[The Ward That Heals Itself: Ninety-Eight Thousand Reasons to Listen to Nurses]]></title><description><![CDATA[Essay 7 of &#8220;The Evidence They Can&#8217;t Ignore&#8221; &#8212; A Series on Systematic Intelligence Suppression]]></description><link>https://thelonggameforall.substack.com/p/the-ward-that-heals-itself-ninety</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-ward-that-heals-itself-ninety</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Sun, 03 May 2026 11:01:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/11049358-5f5e-42b9-bdd6-e1a83471087a_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-HbEBOYkuEuQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;HbEBOYkuEuQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/HbEBOYkuEuQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div><hr></div><p>For six weeks this series has lived on the factory floor. This week we leave manufacturing. Not because the argument changes &#8212; it doesn&#8217;t &#8212; but because the skeptic&#8217;s last retreat is that intelligence suppression is a manufacturing-specific phenomenon. &#8220;Fine,&#8221; they say. &#8220;Taylor messed up factories. That&#8217;s a factory problem.&#8221;</p><p>It&#8217;s not. The suppression pattern runs through every industry where frontline workers possess knowledge the system doesn&#8217;t ask for. 
And in no industry is the cost of that silence measured more precisely &#8212; or more tragically &#8212; than in healthcare.</p><p>The unit of measurement in manufacturing is defects. In healthcare, it&#8217;s deaths.</p><div><hr></div><h2>The Indictment</h2><p>In 1999, the Institute of Medicine &#8212; now the National Academy of Medicine &#8212; published a report titled <em>To Err Is Human: Building a Safer Health System.</em> Its central finding landed like a bomb: an estimated <strong>98,000 Americans die every year</strong> from preventable medical errors in hospitals.</p><p>Subsequent research has suggested the actual number may be significantly higher. A 2016 analysis in the BMJ estimated that medical errors may be the third leading cause of death in the United States, behind only heart disease and cancer.</p><p>But the finding that matters for this series was not the death toll. It was the diagnosis. The IOM concluded that <strong>health system design, rather than individual clinicians, was responsible for medical errors.</strong> The report explicitly called for &#8220;increased participation of employees in work design, problem-solving, and organizational decision-making.&#8221;</p><p>Read that again through the lens of Deming&#8217;s 94 percent. The IOM &#8212; the most authoritative voice in American medicine &#8212; was saying: the problem is the system. The system belongs to management. And the solution is to give frontline workers more participation in how the system is designed and improved.</p><p>That was 1999. Twenty-six years ago.</p><div><hr></div><h2>The Suppression Architecture in Healthcare</h2><p>Healthcare is Taylor&#8217;s operating model in a white coat. The hierarchy is explicit, rigid, and reinforced at every level of training and practice.</p><p><strong>Physicians think. Nurses execute.</strong> This is the foundational assumption of the dominant hospital operating model. Physicians diagnose, prescribe, and decide. 
Nurses carry out orders, administer medications, monitor patients, and document compliance. The model is so deeply embedded that challenging it &#8212; a nurse questioning a physician&#8217;s order, for instance &#8212; requires overcoming enormous cultural and institutional barriers.</p><p>The consequences of this hierarchy are documented with unusual precision because healthcare tracks errors, injuries, and deaths by regulatory requirement:</p><p><strong>Fear-based silencing:</strong> Research has identified the top reasons nurses fail to report medication errors: fear of accusations, fear of negative reactions from patients or families, fear of management reactions, and fear of physician reactions. The system produces errors and then suppresses the reporting of those errors &#8212; a double layer of intelligence suppression.</p><p><strong>Burnout as system output:</strong> A 2024 meta-analysis across 85 studies including nearly 289,000 nurses found that approximately 31 percent experience burnout. This isn&#8217;t a personal resilience problem. It&#8217;s a system design problem. The same meta-analysis found that nurse burnout is associated with more medication errors, more patient falls, more hospital-acquired infections, more adverse events, more missed care, and lower patient satisfaction.</p><p>Think about what this means. The system exhausts nurses through understaffing, excessive documentation requirements, rigid hierarchical constraints, and chronic operational failures &#8212; and then the system attributes the resulting errors to the nurses. Deming would recognize the pattern instantly. 
The workers are being blamed for variation that belongs to the system.</p><p><strong>The management trust deficit:</strong> A 2023 Penn Nursing study surveyed over 21,000 physicians and nurses at 60 hospitals &#8212; Magnet-designated hospitals, the <em>best</em> hospitals &#8212; and found that more than <strong>40 percent of clinicians were not confident that hospital management would act to resolve problems they identify in patient care.</strong> These are clinicians at elite institutions telling researchers that when they see something wrong, they don&#8217;t believe the system will respond.</p><p>That is intelligence suppression, measured by survey, at the top hospitals in the country. Imagine what the number looks like at the other 93 percent.</p><div><hr></div><h2>The Deployment Proof: Magnet Hospitals</h2><p>In 1983, researchers studied hospitals that seemed to have unusually high nurse retention during a national nursing shortage. They identified a set of characteristics &#8212; &#8220;forces of magnetism&#8221; &#8212; that distinguished these hospitals from the norm. This research became the foundation for the American Nurses Credentialing Center&#8217;s Magnet Recognition Program.</p><p>Today, approximately 7 percent of U.S. acute care hospitals hold Magnet designation. The model rests on five pillars: transformational leadership, <strong>structural empowerment</strong>, exemplary professional practice, new knowledge and innovation, and empirical outcomes.</p><p>The second pillar is the one that matters for this argument. Structural empowerment means the organizational architecture <em>gives nurses power</em> &#8212; formal participation in governance, autonomy at the bedside, decision-making authority over their clinical practice and working conditions. 
It is the explicit inversion of the hierarchical suppression model.</p><p>And the results track NUMMI with eerie precision:</p><p><strong>Lower mortality.</strong> Research has documented significantly lower mortality rates in Magnet hospitals compared to non-Magnet hospitals.</p><p><strong>Higher patient satisfaction.</strong> Patients at Magnet hospitals are significantly more satisfied and more likely to recommend the hospital.</p><p><strong>Less burnout.</strong> Nurses report better work environments, less emotional exhaustion, and lower intent to leave.</p><p><strong>Better safety culture.</strong> Magnet designation is associated with improved safety climate and reduced adverse events.</p><p><strong>Sustained improvement over time.</strong> Longitudinal research shows that Magnet recognition is associated with improvements in nurse and patient outcomes that exceed those of non-Magnet hospitals over time.</p><p>The same nurses. The same patients. The same diseases. A different operating model. Better outcomes.</p><div><hr></div><h2>The NUMMI Parallel</h2><p>The parallel is precise enough to be structural, not just metaphorical.</p><p>At NUMMI, Toyota took the same workforce GM had failed with and produced world-class results by changing the operating system &#8212; giving workers problem-solving authority, building feedback loops, creating a culture where the person closest to the work had the standing to improve it.</p><p>At Magnet hospitals, the same nursing workforce that produces burnout, turnover, and preventable errors under the standard model produces lower mortality, higher satisfaction, and sustained improvement under a model that explicitly empowers nurses to participate in system design and clinical decision-making.</p><p>In both cases, the conventional explanation for poor performance &#8212; bad workers, insufficient skill, inadequate motivation &#8212; is demolished by the evidence. The workers are the same. The system is different. 
The outcomes are different.</p><div><hr></div><h2>The Operational Failure Tax</h2><p>In manufacturing, I called this the suppression tax. In healthcare, the concept has a specific research-backed name: <strong>operational failures.</strong></p><p>A 2015-2016 study of nearly 12,000 nurses across 415 hospitals measured the frequency of operational failures &#8212; missing supplies, missing or wrong orders, missing medications, wrong patient diets, electronic documentation problems, insufficient staffing, and time spent on workarounds and non-nursing tasks.</p><p>These operational failures were significantly associated with lower patient safety, more adverse events, more missed nursing care, lower patient satisfaction, higher nurse burnout, and lower job satisfaction.</p><p>Notice what these failures have in common. None of them are caused by nurses. All of them are caused by the system. Missing supplies is a logistics system failure. Wrong patient diets is an information system failure. Insufficient staffing is a management decision. Documentation system errors are technology failures. The nurse is left to manage the consequences of system failures that are beyond their control &#8212; and is then held accountable when patients are harmed.</p><p>Deming&#8217;s 94 percent, measured at the bedside.</p><div><hr></div><h2>The AI Deployment Question</h2><p>Healthcare AI is arriving at speed &#8212; clinical decision support, automated charting, diagnostic assistance, workflow optimization. The investment is enormous. The promise is transformational.</p><p>And the deployment is following the same failed sequence as every other industry.</p><p>AI is being layered onto the existing operating model &#8212; the one where 40 percent of clinicians don&#8217;t trust management to act on problems they identify, where nurses are silenced by hierarchy, where burnout is endemic, where operational failures are chronic.</p><p>The Magnet evidence suggests a different sequence. 
Build the operating model that empowers nursing intelligence first. Create the structural conditions for frontline clinical knowledge to flow into system improvement. Then deploy AI as an amplifier of intelligence that is already being expressed and valued.</p><p>A burned-out nurse whose observations are ignored by the hierarchy will not be saved by an AI charting tool. But an empowered nurse in a Magnet environment, supported by AI that amplifies her clinical pattern recognition and reduces her documentation burden? That&#8217;s the healthcare version of Toyota&#8217;s andon cord, backed by 21st-century technology.</p><p>The ward that heals itself doesn&#8217;t heal because of the technology on the wall. It heals because the system was designed to listen to the people at the bedside.</p><div><hr></div><p><em>Next week: &#8220;The $2,000 Nobody Spends&#8221; &#8212; We move from the hospital to the hotel lobby, where one company proved that trusting your housekeeper&#8217;s judgment is worth a quarter million dollars.</em></p><div><hr></div><p><strong>Dr. Venki Padmanabhan</strong> is a plant manager with 36 years of global manufacturing leadership experience, including executive roles at GM, Chrysler, Mercedes-Benz, Royal Enfield, and Ather Energy. He holds a PhD in Industrial Engineering from the University of Pittsburgh. He is the author of the forthcoming <em>Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away</em> and co-founder of the Capability Capital Institute. He writes The Long Game at thelonggameforall.substack.com.</p>]]></content:encoded></item><item><title><![CDATA[The Number Goldman Sachs Didn’t Calculate]]></title><description><![CDATA[They Measured the Scar. 
They Missed the Hemorrhage.]]></description><link>https://thelonggameforall.substack.com/p/the-number-goldman-sachs-didnt-calculate</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-number-goldman-sachs-didnt-calculate</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Thu, 30 Apr 2026 11:02:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4b4f15c8-d5c8-4d2b-b83e-68a77cbd605a_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div id="youtube2-_hesQ_UULQc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_hesQ_UULQc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_hesQ_UULQc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: center;">Reactive Essay | Source: &#8220;People Who Lose Their Job to AI Are in for a World of Pain, Goldman Sachs Report Finds&#8221;</p><p style="text-align: center;">By Joe Wilkins | Futurism | April 11, 2026</p><p style="text-align: center;">Goldman Sachs Research Note by Pierfrancesco Mei and Jessica Rindels, April 6, 2026</p><p>Somewhere in Ohio this week, a woman who spent fourteen years inspecting stormwater pipe opened her phone on break and read that Goldman Sachs had finally measured what losing your job to AI would cost. Earnings scarring. Delayed homeownership. Lower chance of marriage. She recognized every word. What she didn&#8217;t recognize was the number. Goldman said the damage was about $212,000 in lost lifetime earnings. She knew it was worse than that. She just didn&#8217;t have the language for how much worse. I do. And the number Goldman missed is not thousands. It is millions. 
Per worker. Let me show you.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>Goldman Sachs released a study this month on the &#8220;scarring&#8221; effects of AI-driven job displacement. Their economists, Pierfrancesco Mei and Jessica Rindels, examined four decades of individual-level data and found that workers displaced by technology suffer earnings growth nearly 10 percentage points slower than their peers over the following decade. They found delayed homeownership, slower wealth accumulation, and even lower rates of marriage. They called it scarring, and the data is devastating.</p><p>But Goldman measured the wrong wound.</p><p>They measured what happens to the worker after displacement. They never asked what the organization was doing with that worker&#8217;s intelligence before the displacement. They never calculated the economic value of the capability that was purchased through wages, warehoused for years, never deployed, and then discarded when the machine arrived. They performed a financial autopsy on the victim and never examined the system that created the injury.</p><p>Here is what they missed: the true economic cost of AI displacement is not the $212,000 in lost lifetime earnings per worker that their scarring data implies. It is the millions of dollars in foregone firm value that accumulates when you compare a worker whose intelligence was suppressed and then discarded against a worker whose intelligence was developed and then amplified by the same technology. The first worker gets displaced. The second worker wields AI as a power tool and generates compound value. Same worker. Same technology. The difference is formation &#8212; whether the organization invested in developing the capability it had already paid for through every paycheck. That difference, across a 25-year career in a single manufacturing facility, represents a gap of over $11 million in firm value per worker. For a 500-person plant, the number exceeds $5 billion. 
Goldman Sachs measured the scar. They missed the hemorrhage. That&#8217;s the argument.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>I have managed plants on three continents &#8212; GM, Mercedes-Benz, Royal Enfield, Ather Energy, and now Advanced Drainage Systems. In every single one, I watched the same pattern: workers arrived with intelligence the organization never intended to use. They were hired for their hands. Their minds were an externality. The job was designed to extract repetitive labor, not to develop judgment, problem-solving, or process insight. And every paycheck the company issued was, in economic terms, a payment for capability that the company then refused to deploy. I have stood on shop floors at 2 a.m. and watched operators solve problems that the engineering department couldn&#8217;t see &#8212; not because the engineers were incompetent, but because the operator had 15 years of pattern recognition the system never asked for.</p><p>At GM Lansing Grand River, I saw what happens when you reverse this. We achieved the JD Power Gold Plant Quality Award &#8212; not by replacing the workforce, but by trusting it. Same people. Same union. Same contract. The difference was that we treated their intelligence as an asset to be compounded, not a cost to be minimized. That is the difference between formation and extraction.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>Let me walk you through the economic model, because this is where the $212,000 turns into $11.5 million.</p><p>A worker arrives at a plant. She brings intelligence the organization could deploy &#8212; pattern recognition, judgment, process insight, the capacity to learn. The organization, operating under conventional task-based job design, deploys roughly 25 percent of that intelligence. The other 75 percent is purchased through wages and never deployed. That is Capability Capital &#8212; intelligence already paid for, systematically warehoused. 
Then AI arrives. The organization deploys it as a replacement, not a tool. The worker is displaced. Goldman measures the scarring: 10 percentage points of slower earnings growth over a decade, delayed homeownership, reduced wealth accumulation.</p><p>But that measurement captures only the worker&#8217;s visible wound. It misses two much larger numbers.</p><p>First: the cumulative value of the intelligence that was purchased but never used. Over 15 years, if the organization deployed even 80 percent of the worker&#8217;s capability instead of 25 percent, the firm value generated would have been dramatically higher &#8212; not by a small margin, but by multiples. The wasted Capability Capital for a single worker over a 25-year career, using conservative multipliers grounded in value-added-per-employee data, exceeds $10 million. That is money the firm already spent through wages and benefits. It was already paid for.</p><p>Second: the delta between a displaced worker and a deployed worker wielding AI. This is the number that should haunt every boardroom. A worker whose intelligence was developed over 15 years &#8212; whose judgment was sharpened, whose problem-solving authority was expanded, whose pattern recognition was trusted &#8212; does not get displaced by AI. She wields it. And the economic value of that formed-worker-plus-AI combination dwarfs what either the worker or the AI produces alone. Using illustrative but grounded assumptions &#8212; a value multiplier of 6x for a formed worker equipped with AI tools, versus 2.5x for a suppressed worker in a conventional role, with a 40 percent AI productivity amplification consistent with McKinsey and BCG estimates &#8212; the gap in cumulative firm value over a 25-year career exceeds $11.5 million per worker.</p><p>Let me say that differently, because this is the number Goldman didn&#8217;t calculate. Goldman measured $212,000 in lifetime earnings damage per displaced worker. 
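</p><p><em>To make the arithmetic concrete, here is a minimal sketch of this career-value model in Python, using the assumptions listed in the note at the end of this essay ($55,000 base salary; 2.5% vs. 4% annual raises; 2.5x, 4x, and 6x value multipliers; AI arriving at year 15 of a 25-year career). The note does not fully specify the post-displacement treatment or how the utilization rates and the 40% amplification enter the calculation, so treat this as a simplified reconstruction: it yields a gap in the high single-digit millions, the same order of magnitude as the $11.5 million figure, not the exact published number.</em></p>

```python
# Simplified sketch of the career-value gap described in this essay.
# Assumptions come from the "Note on the Economic Model"; the displacement
# mechanics are a guess, so the output is order-of-magnitude only.
BASE_SALARY = 55_000   # starting annual salary
YEARS = 25             # career horizon
AI_YEAR = 15           # year AI arrives: displacement vs. amplification

def suppressed_value() -> float:
    """Firm value from a suppressed worker: 2.5x salary multiplier,
    2.5% raises, displaced at year 15 (no firm value afterward)."""
    return sum(2.5 * BASE_SALARY * 1.025 ** t for t in range(AI_YEAR))

def formed_value() -> float:
    """Firm value from a formed worker: 4x salary pre-AI, 6x once
    equipped with AI tools, 4% raises, full 25-year career."""
    total = 0.0
    for t in range(YEARS):
        multiplier = 4.0 if t < AI_YEAR else 6.0
        total += multiplier * BASE_SALARY * 1.04 ** t
    return total

gap = formed_value() - suppressed_value()
formation_cost = 5_000 * YEARS   # $125,000 per worker, per the note
print(f"Suppressed-then-displaced: ${suppressed_value():,.0f}")
print(f"Formed-then-amplified:     ${formed_value():,.0f}")
print(f"Gap per worker:            ${gap:,.0f}")
print(f"Gap vs. formation cost:    {gap / formation_cost:,.0f}x")
```

<p>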
The true economic cost &#8212; the value the firm destroyed by suppressing intelligence it had already purchased and then discarding the worker instead of equipping her &#8212; is $11.5 million. Goldman found the scar. They missed the hemorrhage.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>And this is not just a factory floor problem. Goldman&#8217;s own data shows that Gen Z workers &#8212; concentrated in routine white-collar roles like data entry, customer service, billing, and legal support &#8212; are being displaced at a rate of roughly 16,000 jobs per month. These workers face the same dynamic: their employers hired them for task execution, never invested in developing their judgment or problem-solving capacity, and then replaced them with software that can execute those same narrow tasks faster. The intelligence was there. It was already paid for. It was never deployed. And now it has been discarded.</p><p>The scarring Goldman describes &#8212; delayed homeownership, lower lifetime earnings, reduced marriage rates &#8212; is not a natural consequence of technology. It is a consequence of extraction. It is what happens when organizations treat human intelligence as a cost to be minimized rather than an asset to be compounded. Displacement is not the disease. It is the final symptom of a system that was already sick.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>If Capability Capital were on the balance sheet &#8212; if organizations had to account for the intelligence they purchased through wages and the percentage they actually deployed &#8212; the Goldman Sachs report would read very differently. Instead of measuring scarring, it would measure write-offs. Every displaced worker would represent not just a human cost but a capital destruction event: intelligence acquired, warehoused, depreciated through neglect, and then abandoned. No CFO would tolerate that pattern with physical equipment. 
We would never buy a $2 million machine, use 25 percent of its capacity for 15 years, and then scrap it when a newer model arrived. But that is precisely what we do with human intelligence. Every single day. In every industry. Across every continent I have worked on.</p><p>The formation investment to prevent this is modest by any corporate standard. Training, mentoring, autonomy systems, and AI tool integration cost roughly $5,000 per worker per year. For a 500-person facility over 25 years, that is $62.5 million &#8212; a rounding error against the $5.8 billion in foregone value. The return on investment, even using conservative assumptions, exceeds 90x.</p><p>Goldman Sachs performed a valuable service. They quantified the scarring. But they performed a financial autopsy and called it a diagnosis. The diagnosis is extraction &#8212; the systematic suppression of intelligence that was already paid for. The prescription is formation &#8212; the deliberate development of Capability Capital before the technology arrives, so that when it does arrive, workers wield it instead of being replaced by it. The intelligence is already there. It is already paid for. The only question is whether we will use it.</p><p>That is not an economic prediction. It is a management choice. 
And every day we delay making it, the hemorrhage continues &#8212; invisible on every income statement, absent from every balance sheet, devastating in every community where a worker gets the call that the machine has arrived and no one thought to hand her the controls.</p><p style="text-align: center;"><em><strong>That is the Long Game.</strong></em></p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: center;"><strong>A Note on the Economic Model</strong></p><p><em>The illustrative calculations in this essay use the following assumptions, each grounded in publicly available data:</em></p><p><em>Base salary: $55,000/year (consistent with BLS median manufacturing earnings of ~$52,000 and ZipRecruiter&#8217;s $51,890 median, rounded modestly upward to reflect total cash compensation before benefits). Annual raises: 2.5% for suppressed workers, 4% for formed workers reflecting higher organizational value. Value-added multipliers: 2.5x salary for a suppressed worker (conservative against NIST&#8217;s national average manufacturing value-added per employee of $176,000 on ~$96,000 total compensation); 4x for a formed worker pre-AI (reflecting lean manufacturing productivity gains of 30&#8211;50%); 6x for a formed worker with AI tools (reflecting McKinsey&#8217;s $4.4 trillion AI productivity estimate and BCG&#8217;s finding that top-performing AI companies achieve outsized returns through workforce upskilling). AI productivity amplification: 40% (within the range of published estimates from McKinsey, BCG, and industry studies showing 20&#8211;80% gains depending on sector and deployment maturity). Intelligence utilization rates: 25% suppressed, 80% formed (illustrative, based on the well-documented gap between task-level job design and full cognitive engagement in lean manufacturing research). 
Displacement occurs at career year 15 with an 8-month reemployment gap and 18% earnings penalty, consistent with labor economics literature on involuntary separation. Scarring rate: 10% slower earnings growth over the following decade, per the Goldman Sachs finding. Formation investment: an average of $5,000/year per worker ($3,000/year for training and mentoring across the career, plus $5,000/year for AI tools in the ten post-deployment years), totaling $125,000 over 25 years. These are illustrative figures designed to demonstrate the order-of-magnitude gap between the cost Goldman measured and the cost the system actually imposes.</em></p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p><em>Dr. Venki Padmanabhan is a plant manager at Advanced Drainage Systems in Wooster, Ohio, and author of the forthcoming book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away. A former CEO of Royal Enfield and veteran of GM, Chrysler, and Mercedes-Benz, he holds a PhD in Industrial Engineering from the University of Pittsburgh and writes The Long Game on Substack.</em></p>]]></content:encoded></item><item><title><![CDATA[Don’t Add the Third Shift]]></title><description><![CDATA[Before you scale with AI, fix what&#8217;s broken with the people you already have.]]></description><link>https://thelonggameforall.substack.com/p/dont-add-the-third-shift</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/dont-add-the-third-shift</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Tue, 28 Apr 2026 11:02:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/690addb3-b3c4-4349-9406-a04bdaf080de_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p style="text-align: center;"></p><div id="youtube2-K8TBILsdGmI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;K8TBILsdGmI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/K8TBILsdGmI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>In 2003, General Motors was trying to resurrect Cadillac.</p><p>The plan was audacious: a brand-new plant in Lansing, Michigan &#8212; Lansing Grand River &#8212; built on Toyota production principles, launching three vehicles that would prove Cadillac wasn&#8217;t finished. The CTS. The SRX, one of the first full-size luxury SUVs, with a panoramic glass roof. The STS, the flagship. And eventually a third production shift to hit the volume the market was already screaming for.</p><p>I was a shift leader on the trim line. Within weeks of launch, we were drowning.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>The metric that matters in an assembly plant is first-pass rate &#8212; the percentage of cars that clear end-of-line inspection, water test, and dynamic vehicle testing without needing any repair. We were running about 25 cars an hour. Cars were getting knocked off at every hurdle.</p><p>The SRX was the worst. That glass roof was a beautiful design &#8212; a flat piece of glass sealed to the body with a two-part epoxy, applied by hand on a moving line. If the placement was even slightly off, it created a path for water. 
We were flooding brand-new Cadillacs. Our first-pass rates were in the 20s and 30s. They needed to be in the 90s.</p><p>Here&#8217;s how the system was supposed to work. When a team member found a defect they couldn&#8217;t fix in the moment, they&#8217;d pull the andon cord and the defect would be written on a repair ticket. The team leader would then chase the car down the line &#8212; jogging through the plant, looking for a gap between stations where they could sneak in with tools, make the repair, and buy off the ticket. By the time the car rolled off the flat top, the ticket was clean. First-pass. Good car.</p><p>When it worked, it was beautiful. When too many defects overwhelmed the team leaders&#8217; ability to chase them down, you drowned.</p><p>We&#8217;d planned for a small rework lot &#8212; 30, maybe 40 cars. But the launch curve had its own logic. Marketing was already out. Dealer commitments were locked in. So when the lot filled up, we didn&#8217;t stop the line. We kept pounding cars off. By midweek, we&#8217;d have 300, 400, sometimes 500 brand-new Cadillacs sitting in lots around the plant, in the weather, waiting for someone to find time to fix them.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>The pressure to launch the third shift was enormous. More people, more hours, more volume &#8212; that was the path to the numbers. From 30,000 feet it made sense. Demand is there. Capacity isn&#8217;t. Add the shift.</p><p>But anyone standing on the floor could see the truth: we weren&#8217;t capacity-constrained. We were quality-constrained. A third shift of new workers would introduce an entirely new wave of defects on top of the ones we couldn&#8217;t fix. We wouldn&#8217;t triple our output. We&#8217;d triple our rework.</p><p>So leadership made a decision that went against every instinct the launch curve was demanding. They pulled four out of five of us shift leaders off third-shift training. 
All of our horsepower was redirected to one thing: floor-level problem solving on the first two shifts.</p><p>We brought the crisis to the floor.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>I got my five group leaders and thirty-odd team leaders together and told them exactly which defects were coming off our section of the line. Not abstractions. Specific defects on specific cars. Then I asked them: what are you going to do about it?</p><p>Every team leader took that question back to their team members &#8212; the people whose hands were on the epoxy, on the wiring harnesses, on the trim panels. The instruction was simple: if you see yourself producing a defect, pull the andon cord. Stop the line. Fix it at your station.</p><p>We&#8217;d been saying this in theory since the plant opened. Now we had to live it.</p><p>The first few days were brutal. Out of a hundred cars scheduled, we booked twenty. Leadership had to stand in front of the workforce and say something counterintuitive: Twenty is good. Twenty is a win. Because those are twenty good cars that don&#8217;t need repair.</p><p>That was the moment the culture shifted. Team leaders did overtime with the repair crews &#8212; not just to clear backlog, but to study the defects, bring the knowledge back, revise their standardized work, and solve problems at the station so they&#8217;d never be produced in the first place.</p><p>The SRX water leak &#8212; the one engineering couldn&#8217;t design their way out of &#8212; was killed in a week. Not by engineers in a conference room. By production workers who figured out how to ensure complete urethane adhesion to the metal, right there on the floor.</p><p>The intelligence had been there the whole time. It just needed a system that asked for it &#8212; and leaders brave enough to stop the line while it was deployed.</p><p>First-pass rates climbed. The rework lot shrank. We launched the STS. We launched the third shift. 
Lansing Grand River won JD Power Gold. Not despite the slowdown. Because of it.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>I think about that parking lot every time I hear a CEO talk about scaling AI.</p><p>An assembly plant makes the invisible visible. You can&#8217;t hide 500 defective cars in a parking lot. But the same dynamics play out in every frontline business in America &#8212; they&#8217;re just harder to see. A nurse running patients through a ward is running an assembly line: triage, diagnosis, treatment, discharge. A restaurant on a Friday night is a production line: greet, seat, fire, plate, serve, turn. Retail, construction, hospitality &#8212; all production systems with their own first-pass rate, their own chase-and-repair, their own rework lots. These sectors represent roughly half of America&#8217;s GDP. And in every one of them, the people closest to the work are carrying intelligence nobody has asked for.</p><p>Right now, every company in these industries is trying to add the third shift. Buying AI agents, deploying automation, launching agentic workflows &#8212; all to get more volume, faster.</p><p>But walk the floor of most organizations and you&#8217;ll see the rework lot filling up. Failed implementations. AI tools producing confident nonsense because nobody asked the frontline what the actual process looks like. Their first-pass rate &#8212; if they were honest enough to measure it &#8212; is in the 20s and 30s. And the plan is to add more capacity.</p><p>This is the same mistake we almost made at Lansing. The fix is the same too.</p><p><strong>Stop. Triage. Deploy the intelligence you already have.</strong></p><p>Before you add the AI, ask your people what&#8217;s broken. Bring the crisis to the floor. Show them the specific defects and ask them what they&#8217;re going to do about it. Then give them the authority to pull the andon cord.</p><p>This will feel like going backwards. 
Your first few days will look like twenty cars instead of a hundred. Leadership will have to stand up and say: Twenty is good. Twenty is a win. And mean it.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>Here&#8217;s what we discovered at Lansing that should keep every CEO up at night. When we fully deployed our people&#8217;s intelligence, we didn&#8217;t just fix the defects. We transformed the economics of the entire operation. Same scale, radically better cost structure. The capacity wasn&#8217;t missing. It was being consumed by dysfunction.</p><p>In a physical plant, you still need the third shift if you want more cars. But in the digital world &#8212; in the processes AI is poised to transform &#8212; deployed intelligence changes the math entirely. When your people&#8217;s intelligence is fully engaged, and you hand them AI as a power tool, you get three shifts&#8217; worth of output from two shifts&#8217; worth of people. Not because AI replaced them. Because AI multiplied them.</p><p>Siemens proved this at Amberg, Germany. Over two decades, they held their workforce steady at roughly 1,100 people. By systematically deploying their workers&#8217; intelligence alongside automation &#8212; not instead of it &#8212; they increased output eightfold. Not 8%. Eight times.</p><p>But the detail that makes the difference: those 1,100 weren&#8217;t interchangeable bodies. They were formed. They&#8217;d come up through Germany&#8217;s Handwerk apprenticeship tradition &#8212; years of disciplined capability-building, the kind of formation that produces a worker with thirty years of compounding judgment, not thirty years of repetition. The automation didn&#8217;t replace that formation. It multiplied it.</p><p>Hand the same technology to a workforce whose intelligence has been suppressed for decades, whose judgment has never been asked for &#8212; and you won&#8217;t get 8x. 
You&#8217;ll get a faster version of the same dysfunction.</p><p>The people who can transform your business are already on your payroll. The AI that will multiply their impact is ready. The only question is whether leadership is brave enough to stop the line, deploy the intelligence, and put the power tools in the right hands.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p><em>Dr. Venki Padmanabhan is a plant manager at Advanced Drainage Systems in Wooster, Ohio, and author of the forthcoming book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away. A former CEO of Royal Enfield and veteran of GM, Chrysler, and Mercedes-Benz, he holds a PhD in Industrial Engineering from the University of Pittsburgh and writes The Long Game on Substack.</em></p>]]></content:encoded></item><item><title><![CDATA[The $900 Billion Autopsy: Why Digital Transformation Keeps Failing]]></title><description><![CDATA[Essay 6 of &#8220;The Evidence They Can&#8217;t Ignore&#8221; &#8212; A Series on Systematic Intelligence Suppression]]></description><link>https://thelonggameforall.substack.com/p/the-900-billion-autopsy-why-digital</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-900-billion-autopsy-why-digital</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Sun, 26 Apr 2026 11:03:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a8fc5b1b-63fb-4e3a-b138-61fa13cbc367_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-36ynDBxWCGg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;36ynDBxWCGg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/36ynDBxWCGg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><div><hr></div><p>For five weeks I&#8217;ve been building the case from the ground up. The NUMMI proof. Taylor&#8217;s confession. The suggestion gap. Deming&#8217;s statistics. Ton&#8217;s economics. Each essay adds another pillar to the same structure: frontline intelligence exists, was deliberately suppressed, and produces extraordinary results when the system deploys it instead.</p><p>This week I want to talk about what happens when you skip the deployment step and go straight to automation. The evidence comes from the largest, most expensive, most thoroughly documented management initiative of the past two decades: the digital transformation movement.</p><p>The price tag for the failure I&#8217;m about to describe is approximately $900 billion. In a single year.</p><div><hr></div><h2>The Promise</h2><p>Starting around 2015, a consensus formed among management consultants, technology vendors, and corporate boards that the path to competitive advantage ran through digital technology. 
Companies needed to &#8220;digitally transform&#8221; &#8212; to adopt cloud computing, AI, the Internet of Things, advanced analytics, robotics, and automation across their operations.</p><p>The logic was compelling on paper. Technology had transformed consumer experience (smartphones, e-commerce, social media). Surely it could transform operations with equal force. The McKinsey Global Institute estimated that digital technologies could unlock trillions in economic value. Boards allocated massive budgets. Chief Digital Officers were hired. Transformation offices were established. Vendors lined up.</p><p>The promise was that technology would solve the productivity problem that had plagued manufacturing, construction, healthcare, and services for decades. Automate the repetitive work. Digitize the information flows. Let algorithms optimize what human judgment had failed to improve.</p><div><hr></div><h2>The Results</h2><p>The results are now in. They are devastating.</p><p>BCG studied 850 companies undertaking digital transformations. Only <strong>35 percent</strong> reached their stated goals. The remaining 65 percent fell short &#8212; many dramatically.</p><p>Bain&#8217;s 2024 analysis found that <strong>88 percent</strong> of digital transformations fail to achieve their original ambitions.</p><p>McKinsey reported failure rates between <strong>70 and 95 percent</strong>, depending on the scope and definition of failure used.</p><p>The total value destroyed is almost incomprehensible. In 2018 alone, failed digital transformations wasted an estimated <strong>$900 billion</strong> globally. Not $900 billion invested &#8212; $900 billion <em>wasted</em>, producing no measurable return.</p><p>To put that in perspective: $900 billion is roughly the GDP of the Netherlands. It is more than the annual revenue of the entire U.S. auto industry. 
It was spent on technology that didn&#8217;t deliver, organizational change that didn&#8217;t stick, and automation that didn&#8217;t work &#8212; because the underlying problem was never technological.</p><div><hr></div><h2>The Autopsy</h2><p>When researchers examined why digital transformations fail, the findings were remarkably consistent &#8212; and remarkably damning for the technology-first thesis.</p><p><strong>The primary cause of failure is not technology. It is culture and organization.</strong></p><p>BCG found that companies focused on culture were <strong>5.3 times more likely</strong> to achieve breakthrough performance from their digital transformations than those focused on technology alone. The technology was rarely the bottleneck. The organizational capacity to use it was.</p><p>McKinsey identified several recurring failure patterns: leadership that mandated transformation without changing its own behavior, insufficient investment in building employee capability, organizational resistance that was treated as a &#8220;change management&#8221; problem rather than a rational response to a system that was ignoring frontline expertise, and &#8212; most tellingly &#8212; the inability to move pilot programs to scale.</p><p>That last finding deserves emphasis. Organizations reported that <strong>74 percent</strong> struggle to scale AI value beyond pilot programs. Only <strong>21 percent</strong> of AI pilots reach production deployment. The technology works in the lab. It works in the proof of concept. 
It fails when it meets the actual operating environment &#8212; the messy, variable, human-dependent reality of the shop floor, the hospital ward, the retail store, the construction site.</p><div><hr></div><h2>The Hidden Diagnosis</h2><p>Here is what the autopsy reports describe but do not name: <strong>the digital transformation movement tried to automate intelligence that the operating system had spent a century suppressing.</strong></p><p>Consider what &#8220;digital transformation&#8221; actually requires at the operational level. It requires:</p><p>1. <strong>Accurate process knowledge</strong> &#8212; a detailed understanding of how work is actually done, not how it&#8217;s documented on paper.</p><p>2. <strong>Tacit knowledge capture</strong> &#8212; the undocumented rules, judgment calls, and pattern recognition that experienced workers use to manage variation.</p><p>3. <strong>Frontline buy-in</strong> &#8212; the willingness of the people doing the work to participate in changing how the work is done.</p><p>4. <strong>Continuous feedback</strong> &#8212; real-time information from the point of execution about what&#8217;s working and what isn&#8217;t.</p><p>Every one of these requirements depends on frontline intelligence. And every one of them is systematically undermined by a Taylorist operating model.</p><p><strong>Process knowledge:</strong> In a suppression model, the documented process and the actual process diverge &#8212; sometimes dramatically. Workers develop workarounds, shortcuts, and adaptive practices that keep production running despite system deficiencies. These practices are never documented because the system never asks. When the digital transformation team arrives to map the process, they map the documented version, not the real one. 
The technology is built on a fiction.</p><p><strong>Tacit knowledge:</strong> The most valuable knowledge on any factory floor &#8212; the experienced operator&#8217;s ability to hear a machine drifting, to feel a material variation, to sense when a sequence is about to produce a defect &#8212; lives in the bodies and minds of the workers. It was never written down because the system was designed to make worker knowledge unnecessary. When the automation team tries to encode this knowledge into an algorithm, they discover it doesn&#8217;t exist in any capturable form. Nobody ever asked. Nobody ever recorded it. Nobody valued it enough to preserve it.</p><p><strong>Frontline buy-in:</strong> Workers who have spent careers inside a system that ignores their intelligence are rationally skeptical of a new initiative that asks for their cooperation while planning to eliminate their jobs. The &#8220;resistance to change&#8221; that appears in every failed transformation post-mortem is not irrational. It is the predictable response of intelligent people who have learned that the system does not have their interests at heart.</p><p><strong>Continuous feedback:</strong> A Taylorist system is designed for information to flow downward &#8212; from management to workers &#8212; not upward. When the digital system needs feedback from the frontline about what&#8217;s working and what isn&#8217;t, it discovers that the feedback channel doesn&#8217;t exist. Workers have been trained, by decades of operating model design, not to volunteer information. The system never asked. Why would they assume it&#8217;s asking now?</p><div><hr></div><h2>The AI Acceleration of the Same Mistake</h2><p>The digital transformation failure should have been a wake-up call. Instead, the same playbook is being repeated with artificial intelligence.</p><p>The AI deployment narrative follows an identical pattern: technology will solve the productivity problem. Automate the repetitive tasks. 
Let the algorithm optimize. Reduce dependence on human judgment.</p><p>And the early results are tracking the same failure curve. Organizations report that the majority of AI pilots don&#8217;t reach production. The technology works in controlled environments. It fails at scale, for the same reasons digital transformation failed at scale: the operating model doesn&#8217;t support it, the frontline intelligence required to implement it hasn&#8217;t been developed, and the tacit knowledge needed to train it was never captured.</p><p>There is a particular irony in the AI case. Machine learning systems require training data &#8212; examples of how the work is done, including the judgment calls, the exception handling, the pattern recognition that distinguishes competent performance from excellent performance. The richest source of this training data is the frontline workforce. But a century of Taylorist operating model design has ensured that this knowledge was never documented, never valued, and never captured. The AI system can&#8217;t learn what the organization never bothered to record.</p><p>Companies are now spending millions trying to extract from their operations the very intelligence they spent a century designing out of them. The knowledge existed. The workers had it. The system told them it didn&#8217;t matter. Now the system wants it back, and it&#8217;s gone &#8212; retired, resigned, or simply never articulated because nobody ever asked.</p><div><hr></div><h2>The Sequencing Problem</h2><p>The core error in both digital transformation and AI deployment is a sequencing error.</p><p>The sequence most organizations follow: Technology &#8594; Process change &#8594; Hope for cultural adaptation.</p><p>The sequence that works: Frontline intelligence deployment &#8594; Process improvement &#8594; Technology amplification.</p><p>Toyota understood this sequencing intuitively. The Toyota Production System was not a technology system. 
It was a human system &#8212; a method for deploying frontline intelligence to identify and solve problems. Technology was added later, <em>on top of</em> a functioning human intelligence network. The technology amplified capability that already existed.</p><p>When you reverse the sequence &#8212; deploying technology on top of a suppression model &#8212; you get exactly what the data shows: a 70 to 95 percent failure rate and $900 billion in waste.</p><p>The technology is not the problem. The technology works. What doesn&#8217;t work is installing it in an organization that has spent a century ensuring that the human intelligence required to <em>use</em> the technology effectively doesn&#8217;t exist at the point of implementation.</p><div><hr></div><h2>What the $900 Billion Could Have Bought</h2><p>Here is a thought experiment.</p><p>Imagine if the $900 billion wasted on failed digital transformations in a single year had instead been invested in deploying frontline intelligence &#8212; in training workers to see and solve problems, in building suggestion and feedback systems, in creating the operational infrastructure that Toyota, Costco, and the Ritz-Carlton have demonstrated works.</p><p>At Zeynep Ton&#8217;s estimated cost of building a good jobs operating system, $900 billion would have been enough to transform the labor model of virtually every major employer in the developed world. Instead, it was spent on technology that failed because nobody built the human foundation it needed to succeed.</p><p>The suppression tax isn&#8217;t just the value forgone by not deploying frontline intelligence. It&#8217;s the cost of every failed technology investment that assumed frontline intelligence didn&#8217;t matter.</p><div><hr></div><p><em>Next week: &#8220;The Ward That Heals Itself&#8221; &#8212; We leave the factory floor and enter the hospital, where the same suppression pattern produces a body count. 
Ninety-eight thousand preventable deaths a year, and the solution has been known since 1999.</em></p><div><hr></div><p><strong>Dr. Venki Padmanabhan</strong> is a plant manager with 36 years of global manufacturing leadership experience, including executive roles at GM, Chrysler, Mercedes-Benz, Royal Enfield, and Ather Energy. He holds a PhD in Industrial Engineering from the University of Pittsburgh. He is the author of the forthcoming <em>Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away</em> and co-founder of the Capability Capital Institute. He writes The Long Game at thelonggameforall.substack.com.</p>]]></content:encoded></item><item><title><![CDATA[Even LinkedIn Admits It Can’t Find You a Job]]></title><description><![CDATA[LinkedIn says careers are now a climbing wall. But someone has to bolt the holds to the rock.]]></description><link>https://thelonggameforall.substack.com/p/even-linkedin-admits-it-cant-find</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/even-linkedin-admits-it-cant-find</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Thu, 23 Apr 2026 11:40:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/lH0B1XzaOLg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-lH0B1XzaOLg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;lH0B1XzaOLg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/lH0B1XzaOLg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>Reactive essay. 
Source: &#8220;The career ladder is fading as AI reshapes work, LinkedIn exec says&#8221; &#8212; Thibault Spirlet, Business Insider, April 1, 2026.</em></p><p>By Venki Padmanabhan &#8226; The Long Game &#8226; April 1, 2026</p><p>LinkedIn&#8217;s chief economic opportunity officer, Aneesh Raman, says the career ladder is dead. What&#8217;s replacing it is a climbing wall &#8212; careers that move sideways, diagonally, unpredictably. His new book with LinkedIn CEO Ryan Roslansky is called <em>Open to Work: How to Get Ahead in the Age of AI.</em> His advice to workers: figure out what AI can do for you, adapt in real time, and don&#8217;t plan ten years ahead.</p><p>It&#8217;s a good metaphor. Better than he realizes. Because a climbing wall is not a rock face. It&#8217;s an engineered environment. Every hold is deliberately placed. Someone designed the routes. Someone bolted the holds to the rock.</p><p>And that changes everything about who&#8217;s responsible.</p><p>Raman&#8217;s advice puts the burden entirely on the worker: &#8220;No one is going to come knock on your door and say, &#8216;We&#8217;ve figured out what your job is in the AI era.&#8217;&#8221; He means this as empowerment. I hear it as abandonment. Because here&#8217;s the asymmetry his metaphor hides &#8212; the worker didn&#8217;t choose to automate their tasks. The company did. The capital allocation committee signed off on the vision system that replaced the manual inspection. The VP of operations approved the AI that took over the first draft, the research summary, the financial model. And now someone with a title and a budget needs to answer a question that LinkedIn&#8217;s climbing wall conveniently sidesteps: what are you going to do with the human being whose tasks you just automated?</p><p>This isn&#8217;t just a blue-collar question. AI is consuming the entire junior tier of white-collar work &#8212; the tasks that used to be how young professionals learned the craft. 
The junior analyst didn&#8217;t build the DCF model because it was efficient. She built it because building it taught her how to think. When AI takes over the apprenticeship tasks, the climbing wall loses its bottom holds &#8212; for every collar.</p><p>So who builds the wall? The employer. And why don&#8217;t they? Because our accounting systems cannot see the asset they&#8217;re being asked to invest in. I call this HVAC &#8212; not the system that controls the air in your plant, but the one that measures the people. Hiring Value, Vocational Value, Accreditation Value, Contribution Value. Until you can put Capability Capital on the books, formation will always lose the budget fight to the next piece of automation. The wall won&#8217;t get built because the accountants can&#8217;t see the climbers.</p><p><strong>That&#8217;s the argument. Now let me show you what it looks like on the ground.</strong></p><p style="text-align: center;">* * *</p><p>I say all of this with genuine respect for Raman. In 2007, my family was living in Stuttgart &#8212; I was working for Mercedes-Benz &#8212; and CNN was about the only English-language channel we could get. My oldest son, nine years old, watched Raman reporting from the Middle East and announced he wanted to be Aneesh Raman when he grew up. (He became a hematology-oncology fellow instead &#8212; its own kind of nonlinear career, made possible by twelve years of structured medical formation. The climbing wall had holds.)</p><p>Raman&#8217;s own path &#8212; CNN war correspondent to unpaid intern on Obama&#8217;s 2008 campaign to LinkedIn executive &#8212; is proof that nonlinear careers are possible. It is not proof they&#8217;re available to everyone. That path required education, network, and the financial cushion to take an unpaid internship during a presidential campaign. 
His book is called <em>Open to Work.</em> That phrase tells you everything about who the advice is for: people who have profiles, networks, credentials. It is not for the sixty people on my production floor, most of whom have never posted on LinkedIn and never will.</p><p style="text-align: center;">* * *</p><p>Picture the advice landing differently.</p><p>You are a pipe extrusion operator in Wooster, Ohio. AI hasn&#8217;t taken your job &#8212; not yet &#8212; but it has restructured the control systems you interact with. The HMI panels are smarter. The quality sensors generate data you weren&#8217;t trained to read. The preventive maintenance system now flags anomalies using pattern recognition that used to live in your supervisor&#8217;s head.</p><p>&#8220;Figure out what AI can do for you&#8221; is not helpful here. It floats above the shop floor like a motivational poster in a break room nobody uses.</p><p>And here&#8217;s the deeper irony: LinkedIn itself is structurally useless for this worker. A pipe extrusion operator doesn&#8217;t have a profile. A welder doesn&#8217;t list &#8220;can hear a die going bad before any sensor catches it&#8221; as a skill endorsement. The entire architecture of &#8220;open to work&#8221; assumes you live in the knowledge economy. So when LinkedIn&#8217;s chief opportunity officer says the climbing wall gives workers &#8220;more control over their careers,&#8221; you have to ask: which workers?</p><p>The formation path I&#8217;ve built at my plant looks like this: an operator who used to visually inspect pipe joints learns to interpret the data stream from the vision system that replaced them. They learn to calibrate it, recognize when its algorithms are drifting, troubleshoot the edge cases that confuse the AI. They become the person who makes the automation work &#8212; not the person the automation replaced. From executing tasks to understanding systems. From running the machine to reading the data it generates. 
From labor as a depreciating asset to labor as an appreciating one.</p><p>That&#8217;s the climbing wall. That&#8217;s what building it actually looks like. And it doesn&#8217;t happen because someone told the operator to figure it out.</p><p style="text-align: center;">* * *</p><p>The same logic applies in white-collar work, and this is where Raman&#8217;s framework fails most dangerously. AI isn&#8217;t nibbling at the edges of knowledge jobs &#8212; it&#8217;s consuming the entire junior tier. The first draft. The research summary. The slide deck. The legal brief. These weren&#8217;t grunt work. They were the curriculum.</p><p>A law firm that deploys AI to draft contracts but doesn&#8217;t redesign how junior associates learn contract law hasn&#8217;t empowered anyone. It&#8217;s pulled up the ladder behind the partners. A consulting firm that automates analyst-level research but offers no structured path for analysts to develop judgment is making the same mistake the factory makes when it automates a task and fires the operator.</p><p>The institutional obligation is universal. If you deploy AI that eliminates the formative tasks through which your people develop judgment, you owe them an alternative path to judgment. The collar doesn&#8217;t matter.</p><p style="text-align: center;">* * *</p><p>Here is why the wall so rarely gets built. A company buys a robotic welding cell for $1.2 million. It goes on the balance sheet. It depreciates over seven years. Every CFO knows how to model that return because the asset is visible &#8212; it has a serial number, a depreciation schedule, a line on the P&amp;L.</p><p>Now consider the welder who spent fifteen years learning to read a joint by sound, by color, by the way the arc behaves in a crosswind. She can train the next generation. She can diagnose failures the robotic cell&#8217;s error codes never anticipated. 
She is, by any honest measure, a capital asset &#8212; one that appreciates with every year of experience rather than depreciating.</p><p>But she doesn&#8217;t appear on the balance sheet. When the CFO asks for the ROI of forming this person to supervise the automation that replaced her manual tasks, there is no financial instrument that lets you answer with a number. You&#8217;re asking for investment in an asset your books say doesn&#8217;t exist.</p><p>HVAC changes that. What did it cost to bring this person in? What training have they accumulated? What certifications validate their capability? What value have they created through problem-solving and knowledge transfer? Add those up, subtract amortization, and you have a real, auditable number &#8212; Capability Capital resident in every worker on your floor. Not just &#8220;adapt.&#8221; Not just a moral argument. A capital allocation. An investment in an appreciating asset that happens to be a human being.</p><p style="text-align: center;">* * *</p><p>Thirty-six years in manufacturing have taught me one thing about this: the wall doesn&#8217;t build itself. The holds don&#8217;t appear because workers are curious. The people at the top of the wall &#8212; the ones with the capital and the authority to deploy automation &#8212; have a duty to the people at the bottom.</p><p><strong>Build the wall. Bolt the holds. Count the climbers as assets. Then watch them climb.</strong> <em>That is the Long Game.</em></p><p><em>Venki Padmanabhan is a plant manager, writer, and founder of the Capability Capital Institute. His book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away is forthcoming. He writes The Long Game at thelonggameforall.substack.com.</em></p>]]></content:encoded></item><item><title><![CDATA[My Union]]></title><description><![CDATA[For Jayanthi, who climbed in and drove. 
On 35 Years of Deployed Intelligence]]></description><link>https://thelonggameforall.substack.com/p/my-union</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/my-union</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Tue, 21 Apr 2026 10:54:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/VL9hHJ8hUsQ" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div id="youtube2-VL9hHJ8hUsQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;VL9hHJ8hUsQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/VL9hHJ8hUsQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: justify;">Today Jayanthi turns fifty-nine. In six weeks, we will have been married for thirty-five years. I have spent those decades leading manufacturing operations across three continents, managing thousands of people, launching vehicles, turning around companies. I have a PhD in Industrial Engineering. I have written a book about the intelligence that organizations fail to deploy.</p><p style="text-align: justify;">And the smartest operational decision I have ever made was marrying an electrical engineer from Madurai who designed the rear vision system for a car the world was not ready for, launched locomotives on four continents, and can see through every one of my theories in about four seconds.</p><p style="text-align: justify;">I am writing this now, at sixty-two, because I do not know how many more chances I will have to say it properly. A man who spends his life on factory floors knows that systems fail without warning. 
I would rather she read this while I am alive to receive the look than leave it for someone to find in a drawer.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">I have spent years developing a thesis about what I call the false baseline in manufacturing. Most organizations operate their workforce at a fraction of its cognitive capacity. Workers bring intelligence, judgment, pattern recognition, and creativity to the plant floor every morning, and most of it goes home unused every night. Companies are paying for capability they have never unwrapped.</p><p style="text-align: justify;">I did not learn this at General Motors. I did not learn it at Royal Enfield. I learned it at home.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">Jayanthi is a program manager. Not by title &#8212; by nature. She is the person who takes a dream and turns it into a deliverable. She is the person who asks the question nobody else in the room wants to ask: how much will this actually cost, and who is going to do the work?</p><p style="text-align: justify;">At General Motors, she spent ten years in Warren, Michigan. She worked on the EV1 &#8212; the electric car that proved the future was possible and that GM then crushed in the desert. She designed the Rear Vision System demonstrated at the 2000 North American Auto Show, a camera-based system that replaced conventional mirrors with a panoramic flat-panel display. She did this in the late nineteen-nineties. Twenty-five years later, the industry is still catching up to what she built.</p><p style="text-align: justify;">Then I moved. To Chrysler. To Mercedes in Germany. To Chennai. And she followed &#8212; kit and caboodle, every time &#8212; packing up the household, pulling the children out of schools, finding new ones, rebuilding the architecture of a family from scratch while I walked into a new office with a title and a parking spot. 
She did not leave GM because she wanted to. The program manager assessed the situation and determined that the family was the program that mattered most.</p><p style="text-align: justify;">Once we were in India, she did what she always does. At Ashok Leyland, she led a hybrid bus project. At Daimler in Tamil Nadu, she ran planning and quality for commercial vehicles. At Renault-Nissan, she launched the Fluence and Koleos. At GE Transportation in Bengaluru, she was Program Manager for the Electrical Center of Excellence &#8212; delivering 250 locomotives for South Africa, twelve for Brazil, fifty-five for Pakistan, and digital rail products for India and China.</p><p style="text-align: justify;">Two hundred and fifty locomotives. Let that settle. While I was building motorcycles at Royal Enfield and electric scooters at Ather, she was shipping locomotives across four continents. She did not write essays about it. She did not build a Substack. She shipped.</p><p style="text-align: justify;">I dream. She executes.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">A marriage is not a contract. It is not a partnership, though that is closer. A marriage is a site of mutual deployment. Two people, each carrying capabilities the other cannot fully see, slowly learning to call those capabilities into service.</p><p style="text-align: justify;">When I came home at midnight from the factory floor, smelling of paint and coolant and frustration, she did not ask me to talk about it. She asked me whether I had eaten. This is not a small thing. It is a woman telling you: I am not going to fix your problem tonight, but I am going to make sure you survive it. That is a form of intelligence no org chart recognizes.</p><p style="text-align: justify;">And when the bug bit me &#8212; as it has, repeatedly, across our entire life together &#8212; she watched. She always watches first. She does not say yes and she does not say no. 
She observes the scope of the obsession, estimates the cost and the duration, assesses the risk to the family, and then &#8212; reluctantly, always reluctantly &#8212; she climbs in and drives.</p><p style="text-align: justify;">She has climbed in and driven across three continents. Through a turnaround that required us to uproot everything. Through my PhD. Through every career change. She does not climb in with enthusiasm. She climbs in with competence. There is a difference, and the difference matters. Enthusiasm fades. Competence delivers.</p><p style="text-align: justify;">That is not support. That is deployment.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">In manufacturing, we obsess over output metrics. Line rate, first-pass yield, throughput, cost per unit. We measure everything that comes off the line.</p><p style="text-align: justify;">The output metrics of our marriage are three human beings. Our son Dakshin is twenty-eight and fights cancer for a living &#8212; a hematology-oncology fellow who chose the hardest path in medicine. Our daughter Veda is getting married on May third &#8212; twelve days from now &#8212; a CPA at EY in Manhattan. Our son Vyas is an engineering manager at Northrop Grumman in Huntsville, carrying a quiet competence that reminds me of his mother every time I see it in him.</p><p style="text-align: justify;">Three children. Three lives that are, by any honest measure, the most important things either of us has ever produced.</p><p style="text-align: justify;">And Jayanthi managed every deliverable. Schooling across four countries. College applications. Moves. The thousand invisible logistics that keep three human beings alive and presentable while their father is solving die-gap problems at midnight. 
And now she is executing Veda&#8217;s wedding in twelve days with the calm of a woman who has shipped two hundred and fifty locomotives and knows that a wedding is just another program with a fixed deadline and a client who cannot be disappointed.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">These days, Jayanthi and I go to the gym together at six in the morning. This is new for us. We do not talk much while we are there. We do not need to. There is something to be said for two people who have earned the right to be silent together.</p><p style="text-align: justify;">Another bug has bitten me, and she can see it. She watches with curiosity but not yet commitment. I recently suggested, gently, that she might serve as my program manager for this new chapter. She gave me the look. If you have been married for any length of time, you know the look. It means: I love you, but absolutely not.</p><p style="text-align: justify;">She is right. The program manager has to be able to execute &#8212; and execution can mean ending something as easily as completing it. She declined the scope.</p><p style="text-align: justify;">But she has not looked away. She is next to me on the treadmill at six in the morning, and in thirty-five years I have learned that this is how Jayanthi says: I am still here. Show me it&#8217;s worth it.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">I would not have the language for any of what I write &#8212; the conviction, the framework, the thesis about deployed intelligence &#8212; if I had not spent thirty-five years in a union with a woman who deployed her own intelligence so completely into our shared life that I could see, by her example, what full deployment actually looks like.</p><p style="text-align: justify;">Jayanthi. Thirty-five years.</p><p style="text-align: justify;">I am still deploying. 
Everything I am trying to build now &#8212; the books, the institute, the essays, this impossible second act &#8212; is built on the foundation you laid when I was not paying attention. Every word I write about the intelligence we fail to use is a word I learned from watching you use yours &#8212; completely, daily, without recognition, without fanfare, without once asking for an essay.</p><p style="text-align: justify;">You designed the vision system for a car the world was not ready for. You shipped locomotives across four continents. You launched vehicles in India while raising three children who became a cancer doctor, a missile systems engineer, and a CPA who is getting married in twelve days, in a wedding her mother is running with the same precision she brought to the Renault Fluence launch. And you did all of this while married to a man who kept writing frameworks about intelligence deployment and never once realized he was living inside the best example of it.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p style="text-align: justify;">If you are looking for a partner, here is the only question that matters: what kind of person would be central to my formation over the next thirty, forty, fifty years of life? Not who will entertain you. Not who will complete you. Who will form you. Who will make the version of you that does not yet exist &#8212; the one you cannot see from where you stand today &#8212; more possible than it would otherwise be.</p><p style="text-align: justify;">If you believe you have found that person, you have found the right partner. Everything else is detail.</p><p style="text-align: justify;">Mine was that. She has been forming me now for thirty-five years. And I cannot think of anybody else to help me through the next twenty or thirty.</p><p style="text-align: justify;">You are the longest game I have ever played. 
And the only one where winning means we both cross the finish line together.</p><p style="text-align: justify;">The best things in my life were already paid for. By you.</p><p style="text-align: justify;">Happy birthday, Jayanthi</p>]]></content:encoded></item><item><title><![CDATA[Snapchat Fired 1000 Today: The Ghost is no longer in the Machine]]></title><description><![CDATA[What Snapchat&#8217;s 95% Stock Collapse Reveals About the Twin Helix of Capital and Labor]]></description><link>https://thelonggameforall.substack.com/p/snapchat-fired-1000-today-the-ghost</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/snapchat-fired-1000-today-the-ghost</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Wed, 15 Apr 2026 22:25:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/u7gsvSWTR1w" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-u7gsvSWTR1w" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;u7gsvSWTR1w&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/u7gsvSWTR1w?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p><em>Source: &#8220;Snap Inc blames AI as it lays off 1,000 workers&#8221;</em></p><p><em>Nick Robins-Early, The Guardian, April 15, 2026</em></p><p>In Odesa, Ukraine, a young coder named Yurii Monastyrshyn won a programming contest&#8212;twice. Victor Shaburov offered him the co-founder role at Looksery, a startup that built real-time facial modification technology. In 2015, Snap acquired Looksery for $150 million and used it to launch Lenses. 
Monastyrshyn became Senior Director of Engineering, where he and his team built the most-used AR platform in the world. Today, 350 million people use that technology every day. That is formation crystallized into capital. It came from a programming contest in Odesa, not a headcount optimization.</p><p>Today, Snap announced it would lay off 1,000 workers, roughly 16 percent of its workforce, citing &#8220;rapid advancements in artificial intelligence.&#8221; The stock&#8212;which closed at $83.11 in September 2021 and hit $3.81 in March 2026, a 95 percent destruction of market capitalization&#8212;rose 6 percent on the news. The market rewarded the severing.</p><p>That&#8217;s the argument: When a company&#8217;s Capital strand is broken, the instinct is always to sever the Labor strand. And severing the Labor strand is precisely what guarantees the Capital strand will never recover.</p><p>The Twin Helix is a structural model: Capital&#8217;s goals and Labor&#8217;s goals spiral upward together when built on systematically deployed human intelligence. Capital&#8217;s strand is EGIB: Earnings, Growth, Innovation, Brand Equity. Labor&#8217;s strand is TLHW: Time, Love, Health, Wealth. Capital is crystallized labor. Labor is capital in formation. The helix either spirals upward together, or it pulls apart under pressure. Snap is pulling apart.</p><p><strong>THE CAPITAL STRAND: SNAP&#8217;S EGIB</strong></p><p>E &#8212; Earnings. Snap generated $5.9 billion in revenue in 2025, up 11 percent year-over-year. Free cash flow doubled to $437 million. The company turned its first meaningful quarterly profit in Q4&#8212;$45 million. But it still posted a $460 million annual net loss. Fourteen years after founding, never a full year of profit. Earnings are moving in the right direction, slowly&#8212;and the market has lost patience.</p><p>G &#8212; Growth. Snap has 474 million daily active users. But growth is happening in the wrong geography. 
Rest of World represents 57 percent of daily users yet generates only $1.17 per user, compared to $8.43 in North America&#8212;a 7.2x ARPU gap. Worse, North America daily active users declined from 98 million to 94 million in Q4 2025. The most lucrative market is shrinking while global ARPU has fallen 27 percent from its 2021 peak.</p><p>I &#8212; Innovation. Snap has spent more than $3.5 billion on its Spectacles augmented reality glasses, with an ongoing annual drain of $500 million. The first consumer Spectacles in 2016 resulted in $40 million of unsold inventory written off. Every subsequent generation flopped. Meanwhile, Meta generated $131.9 billion in ad revenue in 2023 and can subsidize AR hardware indefinitely. Snap cannot. Innovation that doesn&#8217;t connect to earnings is aspiration with a price tag.</p><p>B &#8212; Brand Equity. Snap&#8217;s stock has collapsed 95 percent. The company still owns Gen Z culturally&#8212;Snapchat remains the second-most-important social network among American teenagers, and 75 percent of daily users engage with AR lenses. But cultural relevance without financial credibility produces what Snap has become: a beloved product attached to a stock no institutional investor wants to hold.</p><p><strong>THE INTERVENTION</strong></p><p>On March 31, activist investor Irenic Capital Management published a letter to CEO Evan Spiegel under the banner &#8220;Snap Back to Reality.&#8221; The demands: shut down Spectacles, reduce headcount by 21 percent, shift to AI-driven advertising, reform governance. Irenic projected a stock price of $26.37&#8212;nearly seven times current&#8212;if management complied.</p><p>Two weeks later, Spiegel complied with the easiest part. He fired 1,000 people. Not the hardest part&#8212;fixing the ARPU gap, monetizing Rest of World, rethinking Spectacles, reforming governance. The labor line on the P&amp;L. 
The one line where you show immediate savings without solving any structural problem.</p><p><strong>THE LABOR STRAND: SNAP&#8217;S TLHW</strong></p><p>T &#8212; Time. The 1,000 workers being fired built the machine learning infrastructure that drove 89 percent growth in in-app optimization revenue. They built the ad platform that grew active advertisers 60 percent in a single year. They built Snapchat+ into a $1 billion recurring revenue stream with 24 million subscribers. Firing them doesn&#8217;t create organizational time. It destroys institutional tempo&#8212;the rhythm of teams that know how to ship together. Tempo, once broken, takes years to rebuild.</p><p>L &#8212; Love. A thousand colleagues are gone. What message does this send to the 4,200 survivors? Seventy-nine percent of workers who feel they belong plan to stay. Thirty-three percent of those who don&#8217;t. Layoffs do not produce belonging. They produce fear. And fear does not produce the creative risk-taking that built AR lenses, My AI, or Spotlight&#8212;the products that keep 474 million people opening the app every day.</p><p>H &#8212; Health. Snap&#8217;s remaining 4,200 employees now absorb the work of 5,200. Spiegel says AI will fill the gap. AI doesn&#8217;t fill the gap on day one, or month one, or often year one. What fills the gap immediately is longer hours, higher stress, and the cognitive load of doing your job while wondering if you&#8217;re next. Adding a thousand people&#8217;s workload to the survivors is not a productivity strategy. It&#8217;s a health crisis with a delayed fuse.</p><p>W &#8212; Wealth. When a startup engineer deploys her intelligence and the company succeeds, she walks away with equity. Nobody considers this radical. But when a Snap engineer deploys the same intelligence to build a subscription product worth $1 billion in annual recurring revenue, she walks away with severance. The intelligence created the value. The compensation structure denied the ownership. 
The person who built the thing that works is gone, while Spectacles&#8212;$3.5 billion and counting&#8212;keeps burning cash under founder control.</p><p><strong>THE DIAGNOSIS</strong></p><p>Snap&#8217;s Capital strand didn&#8217;t break because it has too many employees. It broke because the company never built the Labor strand properly.</p><p>The ARPU gap is not an algorithm problem. It&#8217;s a formation problem. Monetizing users in India, Indonesia, and Brazil requires people who understand local advertising ecosystems and purchasing behavior. No AI model trained on North American ad data will solve this. Growth in EGIB depends on Love and Wealth in TLHW&#8212;people valued enough to deploy their intelligence on hard problems.</p><p>The Spectacles failure is not an innovation problem. It&#8217;s an atmosphere problem. Somewhere inside Snap, engineers knew consumer AR glasses weren&#8217;t ready. Every generation proved it. But the organizational atmosphere didn&#8217;t allow that intelligence to surface with enough force to redirect $500 million a year. Innovation in EGIB depends on Time in TLHW&#8212;workers with enough mental space to challenge assumptions, to say what they actually see.</p><p>The North America DAU decline is not a product problem. It&#8217;s a Brand Equity problem created by Capital decisions that degraded the Labor strand. The app that 94 million North Americans open today was built by a larger, more confident team. The app they&#8217;ll open next year will be built by a smaller, more frightened one. Brand Equity in EGIB depends on Health in TLHW.</p><p><strong>THE ALTERNATIVE</strong></p><p>Instead of cutting 1,000 people, redeploy them. The ARPU gap is the single largest growth opportunity in the company. 
Move Rest of World ARPU from $1.17 to even $3.00 across 280 million daily users, and that&#8217;s $1.5 billion in incremental annual revenue&#8212;dwarfing the savings from layoffs.</p><p>Instead of burning $500 million a year on Spectacles, invest in the workforce that already produces returns. Snapchat+ is growing 71 percent year-over-year. Someone built that. Give them more resources, not pink slips.</p><p>Instead of telling survivors that AI will fill the gap, build the conditions for human intelligence to surface. AI can amplify deployed intelligence. It cannot replace intelligence that was never deployed. You have to form the minds before you automate the outputs.</p><p><strong>THE GHOST</strong></p><p>Snapchat&#8217;s logo is a ghost&#8212;meant to evoke the ephemeral, messages that disappear, moments that don&#8217;t last. But today the ghost means something else. It&#8217;s the ghost of the Labor strand that Capital keeps trying to kill.</p><p>The 1,000 people walking out of Snap today carry with them the intelligence that built everything the company still sells. They are capital in formation, being discarded as cost in excess.</p><p>The helix either spirals upward together, or it pulls apart under pressure.</p><p>On a podcast last year, Spiegel said he felt &#8220;a sense of shame&#8221; when conducting layoffs. He said companies should focus on &#8220;making sure great ideas are coming from anywhere, getting surfaced, and being built.&#8221; That is the Twin Helix. Great ideas from anywhere is the Labor strand. Getting surfaced is Love. Being built is Time. Spiegel described the helix&#8212;and then fired a thousand of the anywheres.</p><p>Snap didn&#8217;t choose to pull. Snap was forced to pull&#8212;because nobody in that building ever learned how to form human beings so their complete intelligence is deployed. That&#8217;s the ghost in the machine. 
The stranded asset: intelligence inside a thousand people that was never seen, never formed, never deployed. And now, instead of using AI to arm them with an extraordinary tool, we are using AI as an excuse for their replacement.</p>]]></content:encoded></item><item><title><![CDATA[Unzipped.]]></title><description><![CDATA[In his Annual Letter to Shareholders Larry Fink says that Capital and Labor are coming apart.]]></description><link>https://thelonggameforall.substack.com/p/unzipped</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/unzipped</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Wed, 15 Apr 2026 00:59:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/uYMzwdwMwSc" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div id="youtube2-uYMzwdwMwSc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;uYMzwdwMwSc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/uYMzwdwMwSc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p><em>Reactive: &#8220;Larry Fink&#8217;s 2026 Annual Letter to Shareholders&#8221; &#8212; BlackRock, March 2026 &#8212; blackrock.com/us/individual/larry-fink-annual-chairmans-letter</em></p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>Larry Fink just told you the helix is coming apart. He didn&#8217;t use that word. He used charts and footnotes and the careful language of a man who manages eleven trillion dollars. 
But the picture is unmistakable: the two strands that hold an economy together &#8212; capital and labor &#8212; are being unzipped.</p><p>Since 1989, he wrote, a dollar in the stock market has grown more than fifteen times a dollar tied to median wages. Fifteen to one. That&#8217;s not a gap. That&#8217;s a separation. One strand racing upward, the other barely moving, and the rungs between them snapping one by one.</p><p>He&#8217;s right about the diagnosis &#8212; possibly the most powerful person in finance to say it this plainly. He runs BlackRock. He <em>is</em> capital. When capital tells you that capital is winning too much, you should listen.</p><p>But Larry Fink has no zipper.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>Let me explain what I mean.</p><p>In molecular biology, DNA is a double helix &#8212; two strands connected by rungs. Neither strand carries the full code alone. The information lives in the <em>pairing</em> &#8212; in the connection between base pairs, in the rungs that hold the structure together. Separate the strands and the code degrades. The molecule stops functioning.</p><p>An economy works the same way.</p><p>Capital and labor are not opponents. They are two strands of the same helix, and the productive intelligence of an enterprise &#8212; its capability, its capacity for sustained value creation &#8212; lives in the rungs between them. In the connections. In the pairing.</p><p>When a line worker on third shift notices a bearing running three degrees hot and flags it before the motor seizes &#8212; that is a rung. When a plant manager invests six months training a team to read statistical process control charts, and that team reduces scrap by forty percent without a single capital expenditure &#8212; that is a rung. 
When an operator who has run the same machine for eleven years teaches a new engineer something no graduate program covered &#8212; that is a rung.</p><p>Every one of those rungs connects labor to capital and makes both more valuable. Every one is a place where labor <em>becomes</em> capital &#8212; where the intelligence of the person doing the work gets encoded into the productive capacity of the enterprise.</p><p>That is what I mean when I say: labor is capital in formation.</p><p>And that is the rung that keeps snapping.</p><p>Fink sees the unzipping clearly. What he does not name is the enzyme that caused it: forty years of management treating labor as a depreciating cost. And what his solutions cannot provide is a zipper &#8212; a way to reconnect the strands by rebuilding the rungs. That&#8217;s the argument.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>Fink&#8217;s data is devastating. Stock returns outpacing median wages fifteen to one since 1989. The top one percent holding as much wealth as the bottom ninety. And AI threatening to accelerate every one of those trends.</p><p>But here is what his letter does not say. It does not ask <em>why</em> the strands separated. He describes the unzipping. He does not name the enzyme. And without naming the enzyme, he cannot offer a zipper &#8212; only a rope.</p><p>For forty years, American management treated labor as a depreciating asset &#8212; not on a whiteboard, but in every decision that mattered. Headcount is a cost line. Training is cut in a downturn. Experienced workers are candidates for replacement. Labor is overhead, and less is better.</p><p>That is the enzyme. That is what unzipped the helix.</p><p>Treat labor as a cost and you optimize for its reduction. Stop investing in the rungs. The strands separate. Capital floats upward, untethered. Labor sinks. And the productive intelligence of the enterprise degrades.</p><p>This is not a metaphor. 
It is what happens when you replace experienced operators with temps who can&#8217;t read a vibration analysis. When you automate a process that only worked because a human was making seventeen micro-adjustments per shift that nobody documented. When you cut training for ten years and then wonder why ninety-five percent of your automation investments fail.</p><p>The helix unzipped. The code degraded. And Larry Fink &#8212; bless him &#8212; is standing on the capital strand looking down at the labor strand and saying: <em>we should really do something about that distance.</em></p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>His solution is ownership. Investment accounts seeded at birth. Digital wallets for index funds. Tokenization. The BlackRock Foundation has committed a hundred million dollars to train electricians and tradespeople &#8212; and Fink is right that the four-year degree is cracking as the only path to a middle-class life.</p><p>But all of these solutions operate on the capital strand. They try to give labor <em>access</em> to capital. They do not try to reconnect the strands. They do not rebuild the rungs.</p><p>Giving a factory worker an index fund does not teach her plant manager to see her intelligence. It does not make her more valuable to her employer. It gives her a ticket to watch the capital strand rise and hope she rises with it.</p><p>That is not a zipper.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>Here is the zipper.</p><p>I call it the Bloom System, and it works like this: Mud. Water. Sun. Bloom.</p><p>Mud is the factory floor. The hospital ward. The warehouse. The place where work gets done in conditions no one in a corner office would tolerate for a week. This is not a complaint. This is the starting condition. The lotus does not grow on marble. It grows in mud.</p><p>Water is the medium &#8212; the management system, the daily cadence. 
Is it poisoned with fear, arbitrary metrics, supervisors who punish questions? Or clean &#8212; structured for learning, built for candor, designed to let intelligence move?</p><p>Sun is the energy &#8212; the investment. The training. The time a plant manager spends on the floor not checking up but checking in. The patience to let a frontline worker fail, learn, and fail better. Sun costs money. Sun costs time. Sun is what every cost-cutting initiative eliminates first.</p><p>Bloom is what happens when all three conditions are met. Intelligence surfaces. Problems get solved at the source. The worker who was invisible becomes the person who saves the line. The capability that was always there finally expresses itself.</p><p>And here is what Fink&#8217;s model misses entirely: when a worker blooms, she does not merely <em>earn</em> more. She <em>becomes</em> more valuable. Her capability compounds. Her intelligence gets encoded into the enterprise. She is no longer a cost offset by an index fund. She is capital &#8212; appreciating, compounding capital &#8212; and the enterprise that invested in her formation holds an asset no competitor can buy.</p><p>That is how you zip the helix back together. Not by giving labor access to capital markets. By making labor <em>into</em> capital. By rebuilding the rungs.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>Fink writes that when market capitalization rises but ownership stays narrow, prosperity feels distant to those on the outside.</p><p>He is describing the symptom. The disease is the belief &#8212; embedded in forty years of management practice &#8212; that the people doing the work are not worth investing in. That their intelligence is a rounding error. That they are inputs to be optimized, not assets to be formed.</p><p>You cannot index-fund your way out of that belief. You cannot tokenize your way past it. You have to go to the floor. Sit with a third-shift operator. 
Build systems that let her knowledge travel upward without being filtered or ignored. Invest in her the way you invest in a machine &#8212; except that unlike a machine, she appreciates. She teaches others. She compounds.</p><p>Fink quotes Jensen Huang: &#8220;Everybody should be able to make a great living. You don&#8217;t need a PhD in computer science to do so.&#8221; I agree. But making a great living is not the same as being treated as a great asset. Compensation puts money in your pocket. Formation puts intelligence in the enterprise. Fink&#8217;s solutions address the first. The Bloom System builds the second.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>I don&#8217;t fault Larry Fink. He is using the most widely read document in global finance to say that capitalism has a structural problem &#8212; and he is right. But he is looking from the top of the capital strand. I am looking from the factory floor, where the two strands are not separate populations in need of financial products. They are one helix, pulled apart by a management philosophy that treated human intelligence as waste.</p><p>The helix unzipped. Forty years of treating labor as cost did that, and AI is accelerating it.</p><p>You don&#8217;t fix a separated helix by throwing one strand a rope. You fix it by rebuilding the rungs &#8212; by investing in formation, by creating the conditions where the intelligence that was always there can finally bloom.</p><p>Mud. Water. Sun. Bloom.</p><p>That&#8217;s the zipper.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><em>Dr. Venki Padmanabhan is the founder of the Capability Capital Institute and author of &#8220;Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away.&#8221; He has spent thirty-six years in global manufacturing leadership, including roles at GM, Royal Enfield, and Ather Energy. 
He writes at thelonggameforall.substack.com.</em></p>]]></content:encoded></item><item><title><![CDATA[Brain Well Done]]></title><description><![CDATA[AI Doesn&#8217;t Fry Your Brain. Your Boss Does.]]></description><link>https://thelonggameforall.substack.com/p/brain-well-done</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/brain-well-done</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Tue, 14 Apr 2026 11:02:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/657193b5-f79d-4f69-8a0d-44f5c5dded45_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-aJX13wh7uHo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;aJX13wh7uHo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/aJX13wh7uHo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>&#8220;I was working harder to manage the tools than to actually solve the problem.&#8221;</em></p><p>&#8212; Senior engineering manager, BCG/UC Riverside study on AI cognitive load, 2026</p><p>There is a new clinical term making the rounds in knowledge work. Researchers at Boston Consulting Group and the University of California, Riverside have coined it <em>AI brain fry</em> &#8212; the mental fatigue that comes from overseeing too many AI agents doing too many things on your behalf. Fourteen percent of nearly 1,500 surveyed American workers report experiencing it. The symptoms are familiar: a buzzing behind the eyes, mental fog, slower decisions. 
The highest rates show up in marketing, software, HR, finance, and IT &#8212; precisely where AI adoption has been most aggressive.</p><p>John Herrman, writing in <em>New York</em> magazine, frames this as being involuntarily promoted into management. The new agentic tools turn every knowledge worker into a supervisor. You are no longer doing the work. You are delegating it, checking it, correcting it. You have acquired a staff of eager but unreliable direct reports who have no judgment, no liability, and no memory of what went wrong last time.</p><p>Herrman is right about the feeling. But his diagnosis stops one layer too shallow.</p><p>Brain fry is not a technology problem. It is a deployment problem. The tools are not burning people out. The conditions are &#8212; an extractive labor model that hands workers unlimited cognitive load with no formation, no recovery time, and no one authorized to say <em>that&#8217;s enough for today.</em> And we have been here before. Every generation of manufacturing technology since Taylor&#8217;s scientific management has arrived with the same promise and produced the same result: reduced physical effort, increased cognitive demand, and a workforce left to manage the gap alone. The BCG researchers have not discovered a new pathology. They have discovered the oldest pathology in industrial capitalism, arriving for the first time on the doorstep of people who write about it for a living.</p><p><strong>That&#8217;s the argument.</strong></p><p style="text-align: center;">* * *</p><p>I have spent thirty-six years building things in factories &#8212; General Motors, Chrysler, Mercedes-Benz, Royal Enfield in India, Advanced Drainage Systems in Ohio. Every generation of technology I watched arrive created what I now call the Abstraction Tax. Every time you move human work up one layer &#8212; from doing to supervising, from operating to monitoring &#8212; you reduce physical effort and increase cognitive demand. 
The demand is not just intellectual. It is existential. Because the person who used to <em>know</em> how to do the work now has to <em>trust</em> that the system is doing it correctly, without the means to verify it where errors actually live.</p><p>That is not brain fry. That is the universal condition of the manager separated from the work. I know it intimately not from managing AI, but from managing sixty human beings across three production lines.</p><p style="text-align: center;">* * *</p><p>Here is where I part company with the researchers, gently but firmly. They frame the experience as damage. <em>Brain fry</em> implies something ruined &#8212; overcooked, done. The metaphor encodes passivity. You are the egg. The technology is the flame.</p><p>I want to offer a different metaphor. Not brain fry. <strong>Brain well done.</strong></p><p>Brain well done is what happens when you operate closer to your actual intellectual capacity than you are accustomed to. It is the cognitive equivalent of the burn at the end of a hard set in the gym &#8212; not injury, but intensity. You are processing more, synthesizing more, deciding more, because the tools have finally given you the leverage to operate at a level previously reserved for people with staffs and corner offices.</p><p>But intensity without recovery is injury. A gym without rest days produces breakdown, not strength. Brain well done becomes brain fry the moment the person in the chair loses the ability to modulate the load. And that is precisely what is happening in most workplaces adopting AI today.</p><p>The skill missing from every AI rollout I have observed is what I think of as the dimmer switch &#8212; not an on/off toggle, but a dial. Full intensity for the sprint. Half-light for the review. Off when off means off. This is learnable. It is not instinctive, any more than operating a lathe is instinctive. 
But it requires someone to teach it and a culture that rewards the discipline to use it.</p><p style="text-align: center;">* * *</p><p>Before I put a new operator on a production line, they go through lockout-tagout procedures, supervised practice, incremental exposure to the full speed of the machine. We do not hand them the keys and say <em>figure it out &#8212; and by the way, you&#8217;re responsible for three lines now instead of one.</em> That would be reckless. That would be an OSHA violation.</p><p>That is exactly what we are doing with knowledge workers and AI. We hand them agentic tools with no formation, no supervised practice, no safety protocol &#8212; and then diagnose them with brain fry when they buckle. The diagnosis blames the worker&#8217;s brain. The actual failure is in the system.</p><p style="text-align: center;">* * *</p><p>Florence Nightingale arrived at Scutari in November 1854 and found soldiers dying at catastrophic rates. The accepted explanation was that war kills soldiers. Nightingale looked at the same data and saw something different. Ten times more soldiers were dying of typhus and cholera than of battlefield injuries. The killer was not the war. The killer was the conditions: the sewers, the overcrowding, the filth, the absence of sanitation.</p><p>She did not propose that soldiers stop fighting. She proposed that someone clean the drains.</p><p>We are at our own Scutari moment. <strong>The tools are not killing people. The conditions are.</strong> The extractive deployment model that demands more output for the same pay under constant threat of elimination &#8212; that is the sewage under the ward. 
Clean the drains, and the same technology that today produces brain fry could produce brain well done: human beings operating closer to their cognitive potential, with the formation and the institutional permission to modulate their own intensity.</p><p style="text-align: center;">* * *</p><p>In the medieval guild system &#8212; the <em>Zunft </em>tradition that shaped European craft production for centuries &#8212; the master&#8217;s role was to stand between the apprentice and the market. The guild regulated the pace of skill acquisition, ensured no journeyman was exposed to demands beyond their current formation, and created a structure in which competence could develop without exploitation destroying the learner first.</p><p>We need that function now &#8212; not as nostalgia, but as organizational design. Someone in every workplace whose job it is to watch the human in the chair. Not an AI ethics board. Not a wellness webinar. A structural role with the authority and mandate to say: <em>you&#8217;ve absorbed enough for today. The agents will keep.</em></p><p>The BCG researchers recommend limiting the number of AI agents a worker oversees. That is sensible &#8212; and it is also the equivalent of telling Nightingale&#8217;s soldiers to drink cleaner water without addressing the sewers. It treats the symptom. The disease is a labor model that treats human cognitive capacity as an extractable resource rather than an appreciating asset.</p><p style="text-align: center;">* * *</p><p>I feel the buzz myself. Plant manager, writer, institution builder, father &#8212; all of it in continuous dialogue with an AI tool that, on its best days, feels like the most intelligent colleague I have ever had. The pull toward <em>one more synthesis before bed</em> is real. But I am not being fried. I am being well done. 
I can feel the difference because I have managed enough human systems to know what overload feels like from the inside &#8212; and to reach for the dimmer switch before the convulsions start.</p><p>Millions of workers do not have that luxury. Not because they lack the cognitive capacity &#8212; the BCG study makes clear that brain fry hits the highest performers hardest. They lack the formation. They lack the institutional permission to turn the dial down without it being read as underperformance, or as a signal that their name belongs on the next layoff list.</p><p>Nightingale cleaned the sewers and the death rate at Scutari fell from forty-two percent to two. The technology of war did not change. The conditions changed.</p><p>The technology of AI will only intensify. The question is whether we will build the conditions &#8212; the formation, the guilds, the dimmer switches &#8212; that let human beings operate at their cognitive peak without being destroyed in the process.</p><p>Brain well done is not the same thing as brain fry. But it can become brain fry in an instant, the moment you lose agency over the interaction.</p><p>The difference is not the technology. <strong>The difference is who controls the switch.</strong></p><p><em>Venki Padmanabhan is a plant manager, writer, and founder of the Capability Capital Institute. His book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away is forthcoming. He writes The Long Game at thelonggameforall.substack.com.</em></p>]]></content:encoded></item><item><title><![CDATA[Ben Sasse Has Only a Few Months to Live. What will we do with our time left?]]></title><description><![CDATA[A tribute to Ben Sasse, and a charge to American business]]></description><link>https://thelonggameforall.substack.com/p/ben-sasse-has-only-a-few-months-to</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/ben-sasse-has-only-a-few-months-to</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Mon, 13 Apr 2026 02:18:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/i5kkNVfNqSA" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-i5kkNVfNqSA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;i5kkNVfNqSA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/i5kkNVfNqSA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: center;"></p><p><em>Source: &#8220;How Ben Sasse Is Living Now That He Is Dying,&#8221; Interesting Times with Ross Douthat, The New York Times, April 9, 2026.</em></p><p><em>https://www.nytimes.com/2026/04/09/opinion/ben-sasse-cancer-death.html</em></p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p>My father died of pancreatic cancer. I know what the remaining months look like &#8212; not from a distance, but from the inside of a family that watched it. The negotiations with hope. The morning inventory of what still works. The slow, irreversible transfer of weight from the body to the spirit.</p><p>Ben Sasse is dying the same way. He is 54 years old, his face bloodied by the medication keeping him alive a few months longer, and he is still thinking harder about America&#8217;s future than most healthy people half his age. In a conversation with Ross Douthat of <em>The New York Times</em> this week, Sasse said something that stopped me cold. He said that higher education has abandoned its core responsibility: not research, not credentialing, not political combat &#8212; but <em>formation</em>. The hard, slow, unglamorous work of helping a young person become a capable adult.</p><p>He&#8217;s right. 
But he stopped one step short.</p><p>The formation crisis is not only a university problem. It is a business problem. The formation arc does not end at twenty-one. It runs to thirty. High school plants the seed. University deepens the root. The first employer &#8212; if they understand what they hold &#8212; produces a fully formed human being: someone with judgment, character, institutional wisdom, and the kind of tacit knowledge that no algorithm has ever encoded. Germany has known this for eight hundred years. America abandoned it a generation ago. And now the AI wave is arriving to sort the companies that understood this from the companies that did not.</p><p>That&#8217;s the argument.</p><p>My son Dakshin is a hematology-oncology fellow at Karmanos Cancer Institute in Detroit, spending his fellowship years searching for the genetic solutions that might one day break this disease &#8212; the disease that took his grandfather, and that is now taking a man I regard as one of the finest public servants America has produced in my lifetime. The Long Game is not a metaphor in our family. It runs across generations. My father formed me. I tried to form Dakshin. Dakshin is now on the frontier of the science that might save the next Ben Sasse. That is the formation arc made flesh.</p><p>I have spent thirty-six years on manufacturing floors across three continents. I have seen what formed people can do &#8212; at GM Lansing, at Royal Enfield Chennai, at Ather Energy. I have also seen what happens when institutions treat people as costs to be minimized rather than capabilities to be developed. The results are not subtle. You can see them in quality numbers, in turnover rates, in the hollow look of a worker who has been told for twenty years that his judgment does not matter.</p><p>Sasse diagnosed the university version of that hollow look. I am here to tell you it lives on the shop floor too. And in the corporate office. 
And in the HR function that replaced formation with compliance. We are not talking about two different diseases. We are talking about the same disease in different buildings.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>What Sasse actually said</strong></p><p>Here is what stopped me. Sasse told Douthat: academia is a total mess, and yet we need institutions to help people go from fifteen to twenty-one. You have to do home leaving. First job. Habit and character formation. Higher education could be a genuinely useful transitional institution. Right now it enables endlessly deferred adolescent behavior.</p><p>Read that again. Home leaving. First job. Habit. Character. He is not talking about curriculum reform or campus politics. He is talking about the ancient responsibility of institutions to form people &#8212; to take the raw material of a young human being and return something capable and whole.</p><p>Douthat pushed Sasse into AI. Sasse&#8217;s answer was the most clarifying thing in the entire conversation: AI is going to be human activity and behavior at warp speed &#8212; <em>for good and for ill</em>. Six words that contain the entire argument. Formation determines which side of that amplification you land on.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>Meet Lukas</strong></p><p>Let me introduce you to someone. I call him Lukas.</p><p>Lukas is every young worker I have watched arrive on a manufacturing floor at twenty-two with raw talent, genuine curiosity, and no formation. He has a high school diploma that certified his attendance and sometimes a college degree that certified his exposure to information. Neither institution asked him to produce a masterpiece. Neither institution said: here is something that matters, and we believe you can do it, and we will stay with you until you can.</p><p>Now here is what Lukas becomes when one institution &#8212; just one &#8212; decides to take formation seriously. 
By twenty-three, after 7,500 hours of deliberate practice, mentorship, and genuine challenge, Lukas earns what the German guild tradition calls the Gesellenbrief &#8212; a certification that he can do the work at any shop, to a standard that masters recognize on sight. By twenty-six, after the Wanderjahre &#8212; the deepening years across different environments and mentors &#8212; he has accumulated tacit knowledge that no database contains. By thirty, with 25,000 hours of formed experience, Lukas is a Meister. He diagnoses bearing failure through a housing wall by touch. He reads a weld by the color of the heat-affected zone. He teaches the next Lukas coming up behind him &#8212; not because he was told to, but because that is what formed people do.</p><p>Germany has been producing Lukas for eight hundred years through the Ausbildung. America abandoned this in the 1990s when we decided that college for all was more equitable than formation for all. We got neither. We got a generation of Lukases with student debt and no formation &#8212; and a generation of companies that decided, since formation was expensive and poaching was cheap, to stop investing in it altogether.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>The relay race nobody is running</strong></p><p>The formation failure is a relay race in which every runner drops the baton and blames the runner before them. High school says: college will handle it. College says: employers will handle it. Employers say: HR will run an onboarding. HR runs a two-day compliance training and calls it done. And Lukas &#8212; talented, curious, capable Lukas &#8212; arrives at thirty having been processed by four institutions and formed by none of them.</p><p>The baton is lying on the track. It has been lying there for thirty years. 
And now an AI tool is being handed to the runner who never learned to run.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>What the shop floor already knows</strong></p><p>Formation is not a new idea on well-run manufacturing floors. We call it development. We call it mentorship. We call it the practice of walking the line not to inspect but to listen.</p><p>When I was at Royal Enfield, we grew from fifty thousand motorcycles a year to one hundred and thirteen thousand. Profit grew twentyfold. The honest answer is that we decided to treat our workers as the primary source of operational intelligence rather than the primary source of operational cost. The person who has been running a welding station for eight years knows things about that process that no industrial engineer has ever written down. That knowledge is capability capital. It compounds if you invest in it. It evaporates if you ignore it.</p><p>Sasse talks about universities enabling endlessly deferred adolescent behavior. I have seen the corporate equivalent &#8212; the endlessly deferred worker development that never quite happens because Q3 targets are due and the training budget was the first line item cut. The logic is identical. In both cases, the institution takes the short-term extraction and defers the formation cost to someone else. In the university case it is the graduate who cannot function in the world. In the corporate case it is the workforce that cannot function in the AI age.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>The AI deadline</strong></p><p>AI does not arrive as a neutral tool. It arrives as an amplifier. It will amplify whatever your organization already is.</p><p>If your organization has spent years forming people &#8212; building judgment, growing capability, investing in frontline intelligence &#8212; AI will make those people extraordinary. 
The fully formed Lukas at thirty, with a mature AI tool in his hands, catches what the algorithm misses. He asks the questions the model cannot formulate. He exercises discretion that no training data can encode.</p><p>If your organization has spent years extracting from people &#8212; deskilling the workforce, replacing judgment with procedure &#8212; AI will finish the job. Not because AI is malevolent. Because you spent decades ensuring your people have nothing left that a machine cannot replicate.</p><p>I call this <em>Already Paid For</em>. The capability is already in your workforce. You paid for it in wages, in years, in accumulated experience. The question is whether you have been drawing on that account or letting it sit there, unacknowledged and underused, while you searched for the next automation solution that would let you need fewer of them.</p><p>The AI wave is arriving on a fixed schedule. Which kind of company you are is not determined by the tools you buy. It is determined by the formation investments you have already made &#8212; or failed to make &#8212; in the people running your operation today.</p><p>I am not asking corporations to invest in formation because it is the right thing to do. I am making a self-interest argument, as cold and clear as any argument I know how to make. Business does not have the luxury of a decade-long reform debate. The sorting is happening now. And the distinguishing variable is not your technology budget. It is whether you treated your people as capital to be formed or cost to be extracted.</p><p style="text-align: center;">&#8212;&#8212;&#8212;</p><p><strong>Pick up the baton</strong></p><p>What makes a human being distinct from a machine is not processing speed. It is judgment in conditions of genuine uncertainty. It is care for the person standing next to you. It is the willingness to say <em>this is wrong</em> even when the procedure says otherwise. 
It is the knowledge that accumulates not in a database but in a body &#8212; in Lukas&#8217;s hands after 25,000 hours, in eyes that have read ten thousand quality signals.</p><p>None of that can be automated. All of it can be destroyed &#8212; by institutions that refuse to form it, by corporations that refuse to develop it, by leaders who see the worker as a unit of input rather than a source of intelligence.</p><p>Ben Sasse is dying, and he is still planting. He planted at Midland. He planted in the Senate. He planted in Gainesville. He is planting now, from a chair, with a bloodied face, speaking to anyone who will listen about the formation we forgot.</p><p>I am not dying. As far as I know. But I have watched enough shop floors, enough careers, enough institutional failures &#8212; and enough hospital rooms &#8212; to understand that the window for doing the thing that matters is always shorter than you think.</p><p>I know what some of you are thinking. He&#8217;s gone off his rocker. An essay a day. Two books coming in October. A nonprofit board forming in June. A workforce venture. A man in his sixties still running a manufacturing plant in Ohio while writing about the long game at midnight. Maybe. But Sasse did not decide to die in public. He ended up with a calling to die. The cancer forced the clarity that most of us avoid for decades &#8212; the clarity that the time is now, the work is the testimony, and the only thing that compounds across a lifetime is the formation you invested in other people.</p><p>So I write. I build. I argue. Not because I can measure the influence. Not because the follower count justifies the effort. Because Ben Sasse is planting seeds from a chair with a bloodied face and if he can do that, the least I can do is pick up the baton and run.</p><p>What are you going to do?</p><p>The formation arc runs from fifteen to thirty. You hold part of it. 
Every manager, every teacher, every mentor, every parent reading this holds part of it. Ben Sasse is running his leg of the race with months to live. Dakshin is running his in a cancer research lab in Detroit. I am running mine with whatever time I have left.</p><p>The clock is running. Pick up the baton. Form Lukas.</p>]]></content:encoded></item><item><title><![CDATA[The Good Jobs Evidence: When Paying Workers More Makes Shareholders Richer]]></title><description><![CDATA[Essay 5 of &#8220;The Evidence They Can&#8217;t Ignore&#8221; &#8212; A Series on Systematic Intelligence Suppression]]></description><link>https://thelonggameforall.substack.com/p/the-good-jobs-evidence-when-paying</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-good-jobs-evidence-when-paying</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Sun, 12 Apr 2026 12:23:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/hWGgAJkmx_g" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-hWGgAJkmx_g" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;hWGgAJkmx_g&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/hWGgAJkmx_g?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Four essays in, and the evidence pattern is clear: worker intelligence exists (NUMMI), was deliberately suppressed (Taylor), flourishes when the system asks for it (Toyota), and accounts for marginal improvement when most variation belongs to the system itself (Deming).</p><p>But a skeptic can still dismiss all of this as manufacturing romanticism. &#8220;Toyota is unique,&#8221; they&#8217;ll say. 
&#8220;Japanese culture is different. NUMMI was a special case. You&#8217;re cherry-picking.&#8221;</p><p>This week I want to show you that the pattern holds across industries, across cultures, across decades &#8212; and that the evidence comes not from quality gurus or lean consultants but from one of the most rigorous operations researchers in the world, working out of MIT Sloan.</p><p>Her name is Zeynep Ton. Her research program has been documenting, for over 15 years, what happens when companies invert the dominant labor model. The title of her most recent book names the finding: <em>The Case for Good Jobs.</em></p><h2>The Vicious Cycle</h2><p>Ton&#8217;s starting point is a description of the operating model that governs most frontline work in retail, hospitality, food service, healthcare, and &#8212; to a significant degree &#8212; manufacturing. She calls it the <strong>vicious cycle</strong>, and it works like this:</p><p>Companies treat labor as a cost to be minimized. They pay the lowest wages the labor market will bear. They invest minimally in training. They staff thinly, so that workers are constantly stretched across too many tasks. They design jobs to be simple enough that anyone can do them, because the expectation is that anyone will have to &#8212; turnover is a feature, not a bug, of the model.</p><p>The result: workers are undertrained, overstretched, and disengaged. Service quality suffers. Execution suffers. Shelves are empty, customers can&#8217;t find help, errors multiply. Revenue declines or stagnates. And management, looking at the deteriorating performance, concludes that the solution is to cut labor costs further &#8212; because what else can you do with workers who can&#8217;t perform?</p><p>The cycle feeds itself. Each turn makes the next turn worse. And at every stage, the system attributes the failure to the workers rather than to the model that produced the failure.</p><p>If this sounds familiar, it should. 
The vicious cycle is the economic expression of intelligence suppression operating at industry scale. The system is designed to prevent workers from contributing judgment, capability, and problem-solving &#8212; and then it interprets the absence of those contributions as evidence that the workers are incapable.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!awDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!awDb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 424w, https://substackcdn.com/image/fetch/$s_!awDb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 848w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1272w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png" width="469" height="24" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:24,&quot;width&quot;:469,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!awDb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 424w, https://substackcdn.com/image/fetch/$s_!awDb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 848w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1272w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>The Inversion</h2><p>Ton identified a cluster of companies that broke the cycle &#8212; not through philanthropy or idealism, but through a different operating logic. 
These companies invest in their workers: higher wages, better training, more stable schedules, cross-training that develops capability, and &#8212; critically &#8212; they give frontline workers the authority and information to make decisions at the point of service.</p><p>The companies Ton studied most extensively include:</p><p><strong>Costco</strong> operates with wages approximately $10 per hour above the retail industry average. It employs more staff per store than competitors. It cross-trains workers across departments. It promotes overwhelmingly from within. And it consistently outperforms on revenue per employee, customer satisfaction, employee retention, and shareholder return.</p><p><strong>QuikTrip</strong>, a convenience store chain, pays above-market wages, invests heavily in training, cross-trains all workers, and operates with enough staff to maintain service quality even during peak periods. The result is same-store sales growth that consistently beats competitors, employee turnover far below industry average, and profitability that funds continued investment in the workforce.</p><p><strong>Mercadona</strong>, Spain&#8217;s largest supermarket chain, adopted what Ton calls the good jobs strategy in the late 1990s. It pays above industry average, offers permanent contracts instead of temporary ones, invests in training, and empowers store-level workers to make decisions about product placement, inventory, and customer service. Over 25 years, Mercadona has seen continuous growth in labor productivity, profitability, and market share &#8212; during a period when most European grocers were racing to the bottom on labor costs.</p><p><strong>Sam&#8217;s Club</strong> provides a recent case of an established company making the transition. 
After raising team lead pay by $5 to $7 per hour, creating stable schedules, reducing product variety by 25 percent (which simplifies execution and gives workers more time per task), and empowering frontline workers with more decision authority, Sam&#8217;s Club saw turnover costs drop by more than 25 percent with corresponding improvements in labor productivity, customer satisfaction, and sales.</p><h2>The Mechanism</h2><p>What Ton&#8217;s research makes clear is that this isn&#8217;t about generosity. It&#8217;s about operating system design.</p><p>Higher wages attract and retain better candidates and reduce the enormous cost of turnover. (In retail, replacing a single frontline worker costs $2,000 to $10,000. At 60 percent annual turnover and the high end of that range, a 100-person store is spending $600,000 per year just on churn.) But higher wages alone don&#8217;t produce the results Ton documents. The companies in her research do four things simultaneously:</p><p><strong>Invest in people:</strong> Not just wages, but training. Workers are developed into problem-solvers, not just task-executors.</p><p><strong>Design jobs with meaning:</strong> Cross-training, broader responsibilities, and involvement in improvement activities mean the worker is using more of their brain, not less. The job becomes worth staying for.</p><p><strong>Operate with slack:</strong> These companies deliberately staff above the minimum required to &#8220;cover the floor.&#8221; The slack gives workers time to serve customers well, maintain displays, solve problems, and &#8212; crucially &#8212; <em>think</em> about how to improve the operation. Running with zero slack means every minute is spoken for. There is no time for intelligence to operate.</p><p><strong>Simplify operations:</strong> By reducing unnecessary complexity (fewer SKUs, simpler promotions, streamlined processes), these companies make it possible for frontline workers to master their domains and exercise real judgment. 
Complexity without capability produces chaos. Simplicity with capability produces excellence.</p><h2>The Financial Proof</h2><p>The &#8220;business case&#8221; framing is important because it disarms the objection that good jobs are a luxury only rich companies can afford. Ton&#8217;s data shows the opposite: good jobs are an <em>investment</em> that produces measurable financial returns.</p><p>The Costco comparison is the cleanest. For decades, analysts on Wall Street pressured Costco to cut labor costs, arguing that its labor model was inefficient compared to competitors. Costco&#8217;s leadership refused. The stock outperformed. Revenue per employee outperformed. Customer satisfaction outperformed. The analysts were measuring the wrong thing &#8212; they were measuring labor cost per hour when they should have been measuring labor <em>value</em> per hour.</p><p>This is the same error Taylor made in 1911. He measured the cost of worker discretion (slower production, &#8220;soldiering&#8221;) without measuring the value of worker intelligence (problem-solving, quality, improvement). Ton&#8217;s research corrects that error with 21st-century financial data: when you account for turnover costs, training costs, execution quality, customer retention, and revenue per employee, the &#8220;expensive&#8221; workforce is the profitable one.</p><h2>Why the Model Doesn&#8217;t Spread</h2><p>If the evidence is this clear, why hasn&#8217;t every retailer, restaurant chain, and hotel company adopted the good jobs strategy?</p><p>Ton addresses this directly, and her answer maps precisely onto the suppression thesis. The vicious cycle persists because:</p><p><strong>The costs are visible and the benefits are distributed.</strong> A wage increase hits the P&amp;L immediately and visibly. The benefits &#8212; reduced turnover, better execution, higher customer satisfaction, improved revenue &#8212; show up over quarters and years, distributed across multiple line items. 
CFOs see the cost. They have to be taught to see the return.</p><p><strong>The assumption of worker interchangeability is deeply embedded.</strong> The Taylorist model assumes that frontline workers are substitutable &#8212; that a $16/hour worker and a $26/hour worker will produce roughly the same output, because the job is designed to be that simple. Ton&#8217;s research shows this assumption is catastrophically wrong, but it is so deeply embedded in management thinking that it functions as an invisible axiom.</p><p><strong>The model requires management capability that most organizations lack.</strong> Running a good jobs operation requires managers who can train workers, involve them in problem-solving, create meaningful work, and lead through engagement rather than compliance. That is a fundamentally different skill set than managing a low-cost labor model, and most organizations have not invested in developing it.</p><p>The parallels to the NUMMI story are exact. GM saw Toyota&#8217;s system, understood the results, and couldn&#8217;t replicate it &#8212; because replication required changing the management operating system, not just the labor policy. The same barrier prevents the good jobs strategy from spreading. The evidence is overwhelming. 
The execution requires a transformation that most management teams are unwilling or unable to undertake.</p><h2>The Connection to Everything Else</h2><p>Ton&#8217;s good jobs research is the economic complement to every other evidence stream in this series:</p><p>&#8226; <strong>NUMMI</strong> proved the principle in a single factory.</p><p>&#8226; <strong>Taylor</strong> explained why the suppression model exists.</p><p>&#8226; <strong>Toyota&#8217;s suggestion system</strong> showed what deployed intelligence produces.</p><p>&#8226; <strong>Deming</strong> provided the statistical framework.</p><p>&#8226; <strong>Ton</strong> proves the economics across industries and at scale.</p><p>The vicious cycle is intelligence suppression measured in dollars. The good jobs strategy is intelligence deployment measured in dollars. The difference between the two is the suppression tax &#8212; and Ton&#8217;s research gives us the clearest picture yet of how large that tax actually is.</p><p>When a retailer spends $600,000 per year on turnover churn, that&#8217;s the suppression tax. When a hotel chain runs at 80 percent annual turnover and blames the labor market, that&#8217;s the suppression tax. When a manufacturer automates a process that a $26/hour worker with training and authority could have improved for free, that&#8217;s the suppression tax.</p><p>The intelligence is there. It has always been there. 
Zeynep Ton proved that deploying it is not just ethically right &#8212; it&#8217;s the most profitable thing you can do.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!awDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!awDb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 424w, https://substackcdn.com/image/fetch/$s_!awDb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 848w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1272w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png" width="469" height="24" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:24,&quot;width&quot;:469,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!awDb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 424w, https://substackcdn.com/image/fetch/$s_!awDb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 848w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1272w, https://substackcdn.com/image/fetch/$s_!awDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0ae83b1-d59b-457b-9bbc-87ed7c681dc9_469x24.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><em>Next week: &#8220;The $900 Billion Autopsy&#8221; &#8212; The digital transformation movement promised that technology would solve the productivity problem. It failed. Seventy percent of the time. And the reason it failed is the same reason everything else in this series fails: they tried to automate intelligence that had been suppressed out of existence.</em></p><p><strong>Dr. 
Venki Padmanabhan</strong> is a plant manager with 36 years of global manufacturing leadership experience, including executive roles at GM, Chrysler, Mercedes-Benz, Royal Enfield, and Ather Energy. He holds a PhD in Industrial Engineering from the University of Pittsburgh. He is the author of the forthcoming <em>Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away</em> and co-founder of the Capability Capital Institute. He writes The Long Game at thelonggameforall.substack.com.</p>]]></content:encoded></item><item><title><![CDATA[The Matrix Hired 640,000 People]]></title><description><![CDATA[When AI &#8220;creates&#8221; work, ask who it&#8217;s actually employing]]></description><link>https://thelonggameforall.substack.com/p/the-matrix-hired-640000-people</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-matrix-hired-640000-people</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Fri, 10 Apr 2026 11:12:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/HY6qJblsUk8" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p></p><div id="youtube2-HY6qJblsUk8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;HY6qJblsUk8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/HY6qJblsUk8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Source: &#8220;Wanted: Head of Human AI Solutions. 
The New Jobs Being Created by AI&#8221;</p><p>&#8212; Te-Ping Chen, The Wall Street Journal, April 2, 2026</p><p>https://www.wsj.com/tech/ai/new-ai-jobs-data-annotator-head-of-ai-c0870d2b</p><p></p><p>In the first <em>Matrix</em> film, the machines do not kill the humans. They farm them. Millions of bodies suspended in pods, tubes running in and out, each person generating just enough bioelectric energy to power the system that imprisoned them. The humans were alive. You could even say they were employed &#8212; the machines needed them. But nobody in the audience mistook the pods for jobs. It was harvest. The humans were the fuel. The machine was the point.</p><p>I thought about that scene when I read the Wall Street Journal this morning.</p><p>The Journal reports that artificial intelligence created 640,000 jobs in the United States between 2023 and 2025. Head of AI. AI engineer. Data annotator. The numbers come from LinkedIn, and the headline is meant to reassure: AI is not just killing jobs. It is creating them.</p><p>I want to tell you why that number should terrify you.</p><p>A pathologist in Galveston types out hypothetical medical scenarios after his hospital shift and grades chatbot responses for ninety dollars an hour. A woman in Austin who lost her job at Meta spends her days describing the emotional impact of AI-generated images. A twenty-five-year-old in San Diego with the title Head of Human AI Solutions describes his approach to the role as &#8220;fake it till you make it.&#8221; These are real people doing real work. Tubes in, tubes out. Energy flowing one direction.</p><p>Every one of these 640,000 jobs exists to make a machine smarter. The data annotators label images so models can learn to see. The AI trainers grade responses so chatbots can learn to speak. The heads of AI help companies figure out which human tasks to automate next. The humans are the input. The algorithm is the product. This is not employment that develops human capability. 
It is employment that develops machine capability. Neo in a pod, generating power for the Matrix, dreaming he has a career.</p><p>I have spent thirty-six years building things in factories &#8212; Bonnevilles, LeSabres, Impalas, and Cadillac CTS at GM, painted panels at SMP and Rehau in Alabama, motorcycles at Royal Enfield. The panels we painted at those suppliers fed the Mercedes-Benz plant in Vance &#8212; the first major Mercedes factory ever built outside Germany, the facility that just celebrated its five millionth vehicle and produces roughly 260,000 luxury SUVs a year for customers in 135 countries. Every GLE and GLS that rolls off that line carries parts that human beings painted, inspected, and perfected. In every one of those plants, the question that determined whether the operation succeeded or failed was not <em>what technology did you install?</em> It was <em>what did your people become?</em> Did the painter who ran the spray line for five years learn to read a film thickness gauge and trace a defect back to the booth&#8217;s humidity settings? Did the quality technician who flagged the orange peel learn to adjust the electrostatic charge before the whole batch went bad? Did the team lead who managed seven people learn to teach, not just schedule?</p><p>When technology creates jobs that make <em>people</em> smarter, the economy grows a capability it can compound. When technology creates jobs that make <em>machines</em> smarter, the economy grows a dependency it cannot escape. The first is formation. The second is feeding. And the difference between the two is the difference between building a life and being plugged into a pod.</p><p>Six hundred and forty thousand feeding jobs is not a labor market success story. It is a confession.</p><p><strong>That&#8217;s the argument. Now let me show you what it looks like on the ground.</strong></p><p>I think about Daniel Millian, the pathologist in Galveston. 
He reads biopsy slides during the day &#8212; real medicine, real patients, real stakes. Then he comes home and spends four to five hours grading AI responses. He makes an extra seventy-five thousand dollars a year doing it. Good money. Flexible hours. He told the Journal he wants to improve the technology to better address patient and clinician needs.</p><p>I believe him. And I want to ask a question that the article does not: what happens to Daniel&#8217;s field when the model he is training gets good enough?</p><p>Because that is the inversion nobody in this article confronts. The AI trainer&#8217;s job is to make the AI better at the AI trainer&#8217;s own job. The pathologist is teaching the machine to read slides. The journalist-turned-annotator is teaching the machine to write. The coder grading model outputs is teaching the machine to code. Every hour of training data accelerates the day when the model no longer needs the trainer. This is not a career. It is a countdown. The pod keeps you alive exactly as long as it needs your energy.</p><p>Contrast that with something I saw at General Motors&#8217; Lansing Grand River Assembly Plant. We had operators who started on the line doing repetitive assembly. Over years &#8212; through structured job rotation, problem-solving training, and a deliberate progression from task execution to process ownership &#8212; those operators became the most valuable people in the building. They could hear a weld gun going bad before the quality data showed it. They understood the upstream and downstream consequences of their station in ways no engineer fresh out of Kettering could. Their capability <em>compounded</em>. Every year they worked, they were worth more &#8212; not because the market said so, but because what they <em>knew</em> had deepened.</p><p>That is what I call an appreciating asset. 
A human being whose capability compounds over time, whose judgment deepens with experience, whose value to the organization increases precisely because the organization invested in her development. The annotator labeling images in Austin is a depreciating one &#8212; not because she lacks intelligence, but because the system she feeds is designed to make her unnecessary. Nobody is asking what she should become next. Nobody is building her a path from annotation to ownership. She is raw material with a master&#8217;s degree.</p><p>The article celebrates one number that should alarm anyone who thinks carefully about labor markets: twelve thousand data annotators working for a single AI training company, Telus Digital. Many of them hold PhDs. The company&#8217;s senior director explains the logic plainly: if you&#8217;re training a model to do scientific discovery, you need people who do scientific discovery.</p><p>Read that again. You need people <em>who do</em> scientific discovery &#8212; so the machine can learn to <em>mimic</em> scientific discovery. The PhD is not being hired to do science. She is being hired to <em>describe</em> science in a format a model can digest. Her expertise is being extracted, not exercised. This is the opposite of formation. This is strip-mining.</p><p>I ran paint shops in Alabama &#8212; automotive panels, high-gloss finish, zero tolerance for defects. When we brought in a newer robotic spray line, the question was not <em>how do we replace the operator?</em> The question was <em>what do we train him on to increase his capability so that, in addition to having Mercedes as a customer, we can win parts from Nissan or Toyota for the same shop?</em> The answer was never less skill. It was more. Always more.</p><p>The painter who used to spray panels by hand learned to program the robot&#8217;s path, monitor its film build in real time, and troubleshoot when the finish went wrong.
He went from pulling a trigger to owning a system &#8212; understanding viscosity, electrostatics, booth airflow, and cure temperature as an integrated whole. And because he understood all of that, I could quote new work from new customers and win it, because my people could handle the complexity. That is Ascension. That is what formation looks like in a factory. The robot did not replace the human. The human made the robot profitable. And it did not happen by accident. It happened because someone &#8212; the plant manager, the operations director, the CEO &#8212; decided that the human being standing next to the machine was worth investing in.</p><p>Nobody at Telus Digital is asking what a PhD annotator should become next. The job has no progression arc. The job has no second act. The job is the act of being consumed.</p><p>Now extend this to white-collar work, because that is where the Journal&#8217;s article is really pointed. The Goldman Sachs estimate &#8212; AI could automate tasks accounting for a quarter of all working hours &#8212; targets administrative support, legal work, architecture, and engineering. These are not factory jobs. These are the professions parents tell their children to pursue. These are the jobs that were supposed to be safe.</p><p>And the &#8220;new jobs&#8221; AI is creating for these displaced professionals? Victoria Chapa, the intellectual property specialist laid off from Meta, now takes short-term gigs describing the emotional impact of AI-generated images. She told the Journal it makes her feel crazy. She is looking for work in AI governance and ethics &#8212; a field that barely exists and that has no institutional structure to train her for it.</p><p>This is what happens when an economy creates jobs for the machine and calls it progress. The junior analyst who used to build the DCF model &#8212; the one who learned to think <em>by building it</em> &#8212; is replaced by a prompt. 
The associate who drafted the legal brief &#8212; who learned the law <em>by practicing it</em> &#8212; is replaced by a chatbot. And the new job? Grade the chatbot&#8217;s output. Label its errors. Describe its emotional impact. The formation task disappears. The feeding task takes its place.</p><p>The twenty-five-year-old in San Diego with the title Head of Human AI Solutions told the Journal his most important skill is explaining the technology in accessible terms. His approach: fake it till you make it. I do not blame him. He is twenty-five, he has a master&#8217;s degree, and he landed a job in a growing field. But &#8220;fake it till you make it&#8221; is a confession that the role has no apprenticeship structure, no mastery arc, no formation pathway. Compare that to a surgeon who spends twelve years in structured medical training before she is trusted to cut. Compare it to an electrician who apprentices for four years before he is licensed. Compare it to the operator at Lansing Grand River who spent a decade learning to hear a die going bad. Those people were <em>formed</em>. This young man is winging it. And the economy is calling that a success.</p><p>So why does this keep happening? Why do we celebrate 640,000 jobs that feed the machine and ignore the question of whether anyone is building jobs that grow the human?</p><p>Because our accounting systems cannot see the difference.</p><p>I call this the HVAC problem &#8212; not the system that controls the air in your plant, but the framework that measures the people. Hiring Value, Vocational Value, Accreditation Value, Contribution Value. Until a company can put Capability Capital on its balance sheet &#8212; until the CFO can quantify what a formed worker is worth compared to a replaced one &#8212; formation will always lose the budget fight. The annotator shows up as a variable cost. The automated task shows up as a savings. 
The operator who spent a decade becoming irreplaceable shows up as &#8212; nothing. She is invisible to the ledger.</p><p>The Journal article notes that only six percent of companies even mention AI in their job postings, and one percent of companies account for ninety percent of those posts. This is not broad-based job creation. This is concentration. A handful of tech firms are hiring thousands of annotators to feed their models, and the rest of the economy is watching. The CEO of micro1, an AI staffing company, says these jobs are not temporary. In the same breath, he says AI will do increasingly complex and specialized tasks in the future. If the machine keeps getting better at what the annotators do, someone will need to explain to me what the permanent career path looks like.</p><p>Unless you believe the career path is: teach the machine to do your job, then teach it to do a harder job, then teach it to do the hardest job, and then &#8212; what? What is left for the human who fed the machine everything she knew? A LinkedIn badge that says Open to Work?</p><p>Here is what I know after thirty-six years of building things with human beings. The best technology I ever deployed was not a robot. It was not a vision system. It was not an algorithm. It was a structured formation program that took an operator from task execution to process ownership to system thinking. That operator became more valuable every year. Her capability compounded. The plant got better because <em>she</em> got better. And no machine I have ever installed can say the same about itself.</p><p>Six hundred and forty thousand jobs that make machines smarter is an economy building its own Matrix. It is an economy that has decided the human being is the battery and the algorithm is the city the battery powers. It is an economy that cannot tell the difference between <em>employing</em> people and <em>consuming</em> them.</p><p>The question is not whether AI creates jobs. It does.
The question is whether those jobs create <em>humans</em> &#8212; humans who are more capable, more valuable, more sovereign over their own working lives at the end of the job than they were at the beginning. If the answer is no, then the job is not employment. It is extraction dressed in a paycheck. It is a pod with a direct deposit.</p><p><strong>Unplug. Build jobs that build people. Count the humans as appreciating assets. Measure what they become, not just what they produce.</strong></p><p><em>That is the Long Game.</em></p>]]></content:encoded></item><item><title><![CDATA[Elon Musk wants to make human labor obsolete.]]></title><description><![CDATA[He&#8217;s building the most expensive argument against his own workforce.]]></description><link>https://thelonggameforall.substack.com/p/elon-musk-wants-to-make-human-labor</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/elon-musk-wants-to-make-human-labor</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Thu, 09 Apr 2026 11:02:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9815ffe8-1626-4cdb-8e27-edf20f217554_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-sOzLEHghsrM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;sOzLEHghsrM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/sOzLEHghsrM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: center;">Venki Padmanabhan &#8226; The Long Game</p><p><em>This essay responds to &#8220;Musk races to build a robot army at Tesla.
Silicon Valley is following,&#8221; by Faiz Siddiqui, The Washington Post, March 27, 2026.</em></p><p>https://www.washingtonpost.com/technology/2026/03/27/musk-optimus-robot-physical-ai/</p><p>In Elon Musk&#8217;s utopia, billions of robots perform all necessary work. Autonomous vehicles and humanoids, fueled by solar energy, provide boundless resources. Poverty is eliminated. Work is optional. And the world&#8217;s richest person becomes the first trillionaire in the process.</p><p>That last sentence is the tell.</p><p>The Washington Post reported this week that Musk has recast his companies to chase this future&#8212;pivoting Tesla to prioritize building robots, phasing out car models including its popular luxury sedan to stand up a new production line of Optimus humanoids. Amazon, Nvidia, and a new startup from Uber co-founder Travis Kalanick have all made fresh forays into advanced robotics this month. </p><p>Figure, a leading robotics startup, put a humanoid robot in the White House that walked the red carpet alongside the first lady. Observers were divided on which one moved more naturally.</p><p>Let me say that again. A robot walked the red carpet at the White House. And we are supposed to be inspired.</p><p>Here is what none of these men can see, because their wealth depends on not seeing it: abundance has never&#8212;not once, anywhere&#8212;flowed from replacing human capability. It has only ever flowed from deploying it.</p><p>The Marshall Plan did not rebuild Europe by automating Germans out of their own reconstruction. It invested in the people standing in the rubble. Soichiro Honda did not build the world&#8217;s most reliable engines by eliminating the judgment of his machinists. He built a system that made their judgment sharper every year. Florence Nightingale did not save soldiers at Scutari by replacing nurses with better logistics. 
She gave nurses the authority and the data to act on what they already knew.</p><p>You get abundance when you treat labor as an appreciating asset. You get wreckage when you treat it as a depreciating cost. I have seen both, up close, for thirty-six years. Musk is building the most expensive depreciating-cost argument in history.</p><p>There is a quiet admission buried in the Post&#8217;s reporting that I think deserves more attention than the red-carpet robot. Tesla, we learn, has been &#8220;aggressively recruiting workers from other parts of the tech industry, seizing on specific areas of expertise&#8212;such as mimicking the capabilities and range of motion of the human hand.&#8221;</p><p>Read that slowly.</p><p>To build the machine that replaces the human hand, you need humans who have spent decades studying&#8230; the human hand. Think about that for a moment. The entire project of displacement depends, at every critical juncture, on the irreplaceable intelligence of the people it claims to make obsolete. The robot that walked the red carpet was carried there on the backs of ten thousand engineers whose expertise Musk cannot automate, cannot commoditize, and cannot do without.</p><p>That is not irony. That is a structural contradiction. You cannot build a system to eliminate human intelligence without continuously depending on human intelligence to build it.</p><p>And here is the part that should keep those hand-movement engineers up at night. They were recruited&#8212;at good salaries, with stock options, from comfortable desks at other tech companies&#8212;to pour their intelligence into building a machine that replaces blue-collar workers. They probably felt safe. They had degrees, specialized knowledge, the kind of credentials that were supposed to protect them. But every insight they contribute to Optimus&#8217;s locomotion, every algorithm they refine for robotic dexterity, feeds the same AI infrastructure that is quietly learning to do their jobs too. 
The hand-movement engineer who teaches a robot to grip a doorknob is, in the same motion, training the system that will eventually write the next version of the code she just wrote. She is complicit in her own displacement and doesn&#8217;t know it yet. The snake is eating its own tail, and the people inside the snake are still collecting a paycheck. Can I get a volunteer to coin a new word? Ouroboros doesn&#8217;t quite cut it.</p><p>I sometimes describe the human worker to my colleagues as the H-1&#8212;the most advanced autonomous system ever engineered. Not a product of venture capital. A product of evolution. It runs on a neural processor with 86 billion interconnected nodes. Processing capacity: approximately one exaFLOP&#8212;comparable to the Oak Ridge Frontier supercomputer, which cost $600 million and consumes 21 megawatts. The H-1 does it on 20 watts. A dim light bulb. A million times more energy-efficient. It ships with two manipulators&#8212;hands&#8212;each with 27 degrees of freedom and roughly 17,000 tactile sensors. Optimus recently upgraded to 22 degrees of freedom, celebrated as a breakthrough. The H-1 has over 200 degrees of freedom across its full kinematic chain. It walks, runs, climbs, and recovers from unexpected perturbations&#8212;autonomously. Russia&#8217;s showcase humanoid fell on its face on a flat stage last year in front of dozens of journalists. The H-1 has been walking reliably for 200,000 years.</p><p>It self-fuels from widely available organic compounds (we call it &#8220;food&#8221;). It self-repairs minor structural damage. It updates continuously through a process called learning. Over 150 million units are currently operational worldwide, available for immediate deployment at competitive pricing. No venture capitalist has ever seen this pitch deck.
Because I have just described the person standing at your production line.</p><p>And here is what makes the blindness criminal: this extraordinary system ships with a formation pathway that has been working for eight hundred years. Walk with me through Sindelfingen, Germany. A young man named Lukas finishes school at sixteen and enters the dual-track Ausbildung system&#8212;the Zunft, the guild tradition that has produced the world&#8217;s most capable industrial workforce since the medieval Handwerksordnung. For 42 months he splits time between the plant and technical school, learning under a certified Meister. At nineteen he passes the Gesellenpr&#252;fung&#8212;his journeyman&#8217;s exam. He works across departments, building mastery. At twenty-six he passes the Meisterpr&#252;fung&#8212;practical expertise, theoretical knowledge, business management, and the ability to train the next person. He is entered into the Handwerksrolle, the rolls of master craftsmen. An unbroken registry since the medieval guilds.</p><p>At thirty, Lukas has 25,000 hours of combined training and application. He reads engineering drawings, interprets statistical process data, diagnoses failures by sound and touch, trains the next generation, and improves processes nobody asked him to improve. He carries the kind of organizational intelligence no AI system can replicate: what was tried in 2019, what failed in 2021, why the data from 2023 is misleading, and how that particular machine behaves when it&#8217;s humid outside.</p><p>That is what deploying the H-1 looks like. Not replacing the asset. Forming it. (I have written about this at length in &#8220;The Most Sophisticated Robot Ever Built&#8221; on my Substack, The Long Game. 
The full H-1 spec sheet and Lukas&#8217;s formation journey are there for anyone who wants to see what $500 billion in venture capital is trying&#8212;and failing&#8212;to replicate.)</p><p>Germany invests 42 months of structured development before a worker is even considered proficient. The return: German manufacturing productivity per worker is among the highest in the world. Not because of more robots. Because of more capable humans operating alongside robots.</p><p>America abandoned this model. We dismantled vocational education, stigmatized the trades, and told an entire generation that dignity required a bachelor&#8217;s degree and a desk. Now we have the most advanced robot on earth standing on every factory floor in the country&#8212;unformed, undeployed, treated as a cost to be eliminated rather than a capability to be activated. And Elon Musk&#8217;s solution is to spend $500 billion building an inferior replacement.</p><p>And if you took that bachelor&#8217;s degree and got the desk? He&#8217;s coming for you too. Musk co-founded OpenAI, then left to build xAI&#8212;now valued at $200 billion, merged with X, acquired by SpaceX, powered by the largest supercomputer on earth. His chatbot Grok is already integrated into Department of Defense networks. His AI company raised $20 billion in a single round. Optimus replaces the body. Grok replaces the mind. The pincer is closing from both ends, and there is no workstation in America&#8212;blue collar or white&#8212;that Musk is not building a machine to sit behind.</p><p>The displacement advocates will counter with economics. A robot works twenty-four hours a day. It does not take a salary. It does not unionize. It does not call in sick. The math, they will insist, is obvious.</p><p>But the math is only obvious if you refuse to count what you are losing. I know this terrain. 
What is the value of a Toyota production associate who has submitted 724 improvement suggestions over a twenty-year career, of which 680 were implemented? What is the value of a frontline worker at a stormwater pipe plant&#8212;my plant&#8212;who notices a subtle change in material behavior during an extrusion run and adjusts before the defect propagates down the line? What is the value of the nurse who reads a patient&#8217;s face and overrides the protocol because she has seen this exact presentation before and the protocol is wrong?</p><p>These are not edge cases. These are the load-bearing moments in every production system, every hospital, every logistics network on earth. Not one of them shows up in the spreadsheet that justifies the robot.</p><p>The trillionaire&#8217;s blind spot is not technological. It is epistemic. He cannot see what frontline workers know, so he concludes they know nothing worth preserving.</p><p>In January 1914, Henry Ford doubled his workers&#8217; wages to five dollars a day. The business press was apoplectic. The Wall Street Journal called it &#8220;an economic crime.&#8221; Ford&#8217;s competitors predicted ruin.</p><p>What happened instead is that Ford created the middle class that bought his cars. The abundance did not come from eliminating workers. It came from investing in them so aggressively that an entire consumer economy ignited.</p><p>A century later, we have learned nothing.</p><p>Musk&#8217;s Optimus robot, at an estimated production cost of twenty to thirty thousand dollars per unit, is designed to perform tasks currently done by workers earning roughly the same amount annually. The economic proposition is not abundance. The economic proposition is arbitrage&#8212;replacing a recurring labor cost with a capital asset that depreciates on a balance sheet but never asks for a raise.</p><p>That is not a vision. That is a liquidation strategy with a TED Talk attached.</p><p>And the liquidation is already underway. 
Tesla&#8217;s pivot away from car manufacturing is not a strategic evolution&#8212;it is an abandonment. The people who welded the frames, painted the bodies, and assembled the battery packs that made Tesla the most valuable automaker on earth are now being told, implicitly, that their contribution was temporary. That they were placeholders until the real workers&#8212;the ones made of titanium and silicon&#8212;could take over.</p><p>I know what that moment looks like. I was there when they turned the line off at GM&#8217;s Buick City plant in Flint, Michigan&#8212;ninety-five years of continuous operation, shut down a few months after winning the J.D. Power Platinum Award for one of the top three plants in the world. Think about that. The best plant GM ever built, and they closed it anyway. Three generations of families had walked through those gates. The knowledge in that building&#8212;the way a trim operator fished a trapped harness under the carpet to plug the electrical to the seat, the way a welder read the puddle&#8212;didn&#8217;t get archived or transferred. It just went dark. I stood there and watched ninety-five years of accumulated human intelligence walk out the door and not come back.</p><p>That is what displacement actually looks like. Not a robot walking a red carpet. A parking lot emptying for the last time.</p><p>I have spent thirty-six years on production floors across four continents&#8212;General Motors, Chrysler, Mercedes-Benz, Royal Enfield. I have never once encountered a production problem that was solved by removing human judgment from the system. I have encountered thousands that were solved by trusting it.</p><p>The tech press has largely covered the humanoid robot race as an innovation story. It is not an innovation story. It is a labor story. And like most labor stories in America, it is being told by people who have never worked a production shift in their lives.</p><p>There is another model. 
It has been running for seventy years, and it works. The Toyota Production System rests on an insight so simple it embarrasses the robotics evangelists: the person closest to the work knows the most about the work.</p><p>Toyota&#8217;s production associates pull the andon cord an average of twenty-seven times per shift. Each pull is an act of intelligence&#8212;a human being exercising judgment about quality that no sensor array, no vision system, no large language model can replicate. The judgment depends on tacit knowledge accumulated through years of physical practice. You cannot download it. You cannot shortcut it. You can only form it.</p><p>Toyota&#8217;s market capitalization exceeds Tesla&#8217;s in most quarters. Its vehicles consistently rank at the top of every reliability index. Its profit margins are robust. And it achieves all of this not by replacing its workers but by systematically making them smarter, more capable, more consequential every day they show up.</p><p>The evidence is not ambiguous. The companies that invest in frontline intelligence outperform the companies that try to automate it away. This is not ideology. It is data.</p><p>But data requires humility to read. And humility is not a quality that accrues to men who put robots in the White House. The displacement thesis persists not because the evidence supports it, but because the people funding it are insulated from its consequences. When your net worth is measured in hundreds of billions, the distinction between &#8220;work is optional&#8221; and &#8220;work is unavailable&#8221; is purely theoretical. For the thirty-year-old in a fulfillment center in Memphis or a meatpacking plant in Dodge City, that distinction is the difference between a life with dignity and a life on a universal basic income check that arrives with someone else&#8217;s name on the return address.</p><p>UBI, incidentally, is always the answer when you press these men on what happens to the displaced. 
It is the intellectual equivalent of a fire exit sign&#8212;reassuring in theory, never tested under the conditions that would actually require it. No nation has ever successfully transitioned a hundred and fifty million workers from productive employment to permanent subsidized idleness. The displacement advocates know this. They propose UBI anyway, because the alternative&#8212;admitting that their entire thesis is built on the erasure of human value&#8212;is too uncomfortable to say out loud at Davos.</p><p>I manage sixty people across three production lines in Wooster, Ohio. We make stormwater pipes. It is not glamorous work, and no robot has walked a red carpet on our behalf. But I can tell you this: the intelligence on my production floor&#8212;the pattern recognition, the material intuition, the process judgment that my operators exercise every hour of every shift&#8212;is the most underleveraged asset in American manufacturing. Not because it is primitive. Because nobody has bothered to build the systems that deploy it.</p><p>That is the work I have given my life to. Not building robots to replace the people. Building systems to unleash them. Treating every frontline worker as a capability that appreciates with investment, not a cost that depreciates with time.</p><p>Musk wants to be the first trillionaire. He might get there. But the wealth he creates will be extracted, not generated&#8212;pulled from the pockets of displaced workers and concentrated in the balance sheets of shareholders who will never set foot on a production floor.</p><p>The alternative is harder. It requires patience, and humility, and the willingness to believe that the person running the extrusion line at two in the morning knows something you don&#8217;t. 
It will not make anyone a trillionaire.</p><p>But it will make something better.</p><p>A civilization that works.</p>]]></content:encoded></item><item><title><![CDATA[The Handmaid’s Tale]]></title><description><![CDATA[How Standardized Work Lost Its Soul &#8212; and Nobody Noticed]]></description><link>https://thelonggameforall.substack.com/p/the-handmaids-tale</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/the-handmaids-tale</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Wed, 08 Apr 2026 11:02:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/91b79716-bb7f-484e-b5c5-4cd32c35d688_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 style="text-align: center;"></h1><div id="youtube2-uexsLvnV31E" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;uexsLvnV31E&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/uexsLvnV31E?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Source Note: This is a Foundation Essay from The Long Game. It draws on five specific examples from my time as Trim Shift Leader at GM&#8217;s Lansing Grand River (LGR) and Lansing Delta Township (LDT) assembly plants &#8212; the same city, the same company, in many cases the same people, twenty years apart. The contrasts are not hypothetical. They happened. I was there.</p><p>___________________________________________________________________________</p><p>Twenty years is long enough to forget why you built something. Long enough to keep the shape of a thing while hollowing out everything that made it work. I know this because I watched it happen. 
Twice, in the same city, with many of the same people.</p><p>In the early 2000s, at GM&#8217;s Lansing Grand River Assembly Plant, we built a standardized work system from scratch. Not because someone told us to. Because it was the foundational act of deploying frontline intelligence &#8212; the thing that made everything else possible.</p><p>Here is what that looked like.</p><p>Every team leader &#8212; every one of them &#8212; wrote their own job element sheets. By hand. They would go to the station, study the work, practice it, time it. Then they would take photographs of the parts, the positions, the sequences. They would print those photographs, cut them out, paste them onto the sheets. Tedious? Extremely. Imprecise in places? Of course. But here is what mattered: when a team leader finished writing a job element sheet, the work lived in their head and in their hands. They owned the method. And because they owned it, they used it. They trained their people from it. They certified their people against it. They improved it when something better emerged. The document was not a bureaucratic artifact filed in a binder. It was the current best method, owned and continuously refined by the person closest to the work.</p><p>If there were two hundred stations on my line, there were thousands of job element sheets &#8212; because the content changed with every car line, every option mix. As a shift leader, part of my job was to read and sign every single one. That discipline existed because the system was alive. It breathed. It was the team leader&#8217;s instrument.</p><p>Fast forward twenty years. I arrived at Lansing Delta Township &#8212; same city, same company, in many cases the same people &#8212; and found something that looked identical from a distance. Standardized work binders on the floor. Job element sheets at every station.
All the right words in all the right places.</p><p>But something fundamental had changed.</p><p>The sheets were computer-generated.</p><p>So I asked the obvious question: who wrote them? Well, there is a pilot team that reviews them. Who enters the data? Before long, the trail led exactly where I feared it would. Industrial engineers &#8212; or their functional equivalent &#8212; were writing the standardized work. A centralized group, removed from the floor, creating documents that the floor was expected to follow.</p><p>This is not a small difference. This is a civilizational difference.</p><p>When a team leader writes the job element sheet, they own the method. When someone else writes it and hands it to them, they are handed a procedure. The document looks the same. The binder looks the same. The station looks the same. But ownership has been transferred from the person doing the work to the person designing the system. We had, in the span of twenty years, returned to exactly what we started from &#8212; Taylorism &#8212; while maintaining the entire liturgical apparatus of lean. The vestments were worn. The hymns were sung. The congregation had no idea the theology had changed.</p><p>That is why I call it The Handmaid&#8217;s Tale. It <em>looks</em> like standardized work. It performs the rituals of standardized work. But the intent &#8212; ownership on the floor, intelligence deployed at the point of action &#8212; has been quietly, efficiently, irreversibly removed.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>The second example is subtler, but if anything more profound.</p><p>At LGR, a basic first step for any operator at any station was this: look at the manifest of the vehicle coming toward you. Get a good sense of what you are about to build. Then, when you pick up the part, inspect it. Here are the specific defects you should look for. Then install it.</p><p>Read. Inspect. Install. Simple. Foundational. 
The operator was not just a pair of hands attaching components. They were a quality gate. They understood the relationship between the part, its condition, and its impact on the line. The job element sheet codified this &#8212; it told you which parts went where, the min-maxes, how to pick them up, and critically, what to look for before you installed them.</p><p>Twenty years later at LDT, those sheets were gone. The manifest-reading step? Gone. The inspection step? Gone.</p><p>The logic was impeccable. We pay the supplier to ensure quality. We pay the sequencing center to sequence correctly. Why should we pay our people to duplicate that effort?</p><p>Efficient. Absolutely efficient.</p><p>And absolutely devastating.</p><p>What had been removed was not redundancy. What had been removed was the operator&#8217;s relationship to the part. Their understanding of what good looks like. Their role as the last line of intelligence before a component disappeared into a vehicle that a customer would drive home. The floor had been reduced from an intelligent system to a logistics exercise &#8212; parts flowing to positions, hands attaching them, no cognition required or expected.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>The third example involves a million-dollar decision that, on paper, looked like exactly the kind of investment a serious organization makes in quality.</p><p>We were launching a new vehicle model, and the persistent problem with the previous generation had been electrical defects &#8212; improper connections, bent prongs, modules not properly grounded. These defects were found at the end of the line, which meant the car had to be parked in the repair area, diagnosed, and fixed. And anyone who has worked in an assembly plant knows the real cost of an electrical defect. The diagnosis itself is usually straightforward &#8212; you read the error code. The agony is the disassembly. 
You strip layers of vehicle interior to reach where the defect lives, fix it, and then put everything back together. And here is the thing that no efficiency model captures: a car that is assembled right the first time and a car that is disassembled and reassembled are not the same car. The parts are interchangeable in theory. In practice, the customer is not getting a pristine vehicle. They are getting a vehicle that has been operated on.</p><p>So a decision was made: install a million-dollar diagnostic inspection system at the end of the trim shop. Plug into different aspects of the vehicle, run diagnostics, flag error codes. Catch the defects before the vehicle moves to chassis. Stop the line, fix it right there.</p><p>Logical. Expensive. And operationally catastrophic.</p><p>The system was installed at the end of the trim shop &#8212; a location with almost no footprint for stopping vehicles, diagnosing them, and repairing them. It sat in the buffer between trim and chassis. If you were aggressive enough with it &#8212; if you actually stopped every vehicle the system flagged &#8212; you would drain chassis of work and shut down the other half of the plant. Within the first few months, the reality became clear: it was infeasible. But telling the emperor he has no clothes is not how these things work. So instead, the theater continued. The system ran. It flagged. And quietly, nothing much happened with the flags.</p><p>Meanwhile, what actually solved the problem was what always solves it. The team members gained comfort and facility with the new product. They figured out the nuances. The electrical defects reduced &#8212; slowly, naturally, through repetition and learning and the accumulated intelligence of people who build cars every day. The million-dollar system did not teach anyone anything. It did not build capability. It consumed attention and resources that could have been directed at the other critical quality issues screaming for help. 
And worse &#8212; far worse &#8212; it reframed the quality problem as a detection problem. When defects were found, the response was to trace them back to the point of cause and discipline the person who made the wrong connection.</p><p>Efficient at detecting defects. Efficient at alienating the people who built the car. Six months into the launch of the new program, the million-dollar system was quietly ripped out. No announcement. No postmortem. People simply agreed, without saying so, that it did not do what it was supposed to do. And it disappeared. The Handmaid&#8217;s Tale again &#8212; the appearance of a quality system, performing the rituals of quality improvement, while the actual mechanism &#8212; frontline learning and ownership &#8212; was not only ignored but actively undermined.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>The fourth example may be the most instructive of all, because it involves the very act of training itself.</p><p>Again, a new vehicle model. A decision was made &#8212; correctly, in principle &#8212; that people needed to be trained and certified on the new work content. They needed to develop the mental and muscle memory required to perform the steps of the job properly before the launch.</p><p>Here is how it is supposed to work. The pilot team &#8212; the group doing development work with the new vehicle &#8212; pulls in the team leader from each area. They work together. They write the standardized work for the five or six stations in that team leader&#8217;s zone. The team leader takes ownership of those documents, brings them back to the floor, trains and certifies their people, and then improves the documents as a living practice. The knowledge flows from development through the team leader into the team. The team leader is the bridge.</p><p>Here is what actually happened. People with almost no experience were pulled into the pilot team. 
Engineers essentially cut and pasted standardized work from the previous model with some modifications. And then &#8212; this is where it becomes theater &#8212; that data was fed into a computerized virtual training system. A picture of the vehicle appeared on a screen. The trainee would touch points on the picture to simulate tightening this fastener, turning this bolt, clicking this connector. The system tracked whether the trainee got the sequence right, whether they remembered which parts went where.</p><p>The quality control of this system was so poor that the pictures were often wrong. The sequences were wrong. The parts were wrong. Operators looked at it and said what any intelligent person would say: why are we doing this? I know how to build this car. I will figure it out.</p><p>But the theater continued. Everyone went through the system. Everyone was certified by the system. And the real training happened the way it always happens &#8212; on the floor, through repetition, through defects, through feedback, through the slow accumulation of knowledge that comes from actually building the product with your hands. The standardized work people actually used to ensure quality bore little resemblance to what was on the paper or what was in the virtual training. The system certified. The floor trained. Two parallel universes, one visible to management, the other actually producing cars.</p><p>So much theater. For such little value.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>Four examples. Four different facets of the same inversion. The first stripped ownership of the method from the team leader&#8217;s hands. The second stripped the operator&#8217;s relationship to the part. The third replaced frontline learning with a surveillance system. The fourth replaced the team leader&#8217;s role as trainer with virtual certification theater.</p><p>But the fifth example is the one that named the thing. 
And to understand it, you first have to understand what a team actually is.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>A team is not an org chart designation. It is not a name on a whiteboard or a set of stations grouped for administrative convenience. A team is a social organism. It forms the way all human bonds form &#8212; through proximity, repetition, and the small accumulated acts of sharing a life together.</p><p>Here is what a team looks like when it is real. These are people who know each other. They talk about their kids&#8217; football games while they work. They argue about what happened with their girlfriends the previous evening. They horse around. They bring food for each other &#8212; someone&#8217;s wife made too much biryani, someone else brought tamales, someone shows up with a sheet cake because it is their daughter&#8217;s birthday. During breaks, they eat together. They celebrate things together. They grieve things together. They share the ordinary texture of their lives in the minutes between the work, and because of that sharing, they become people to each other. Not headcounts. Not labor units. People.</p><p>And on the line, that social fabric is what makes everything else work. They help each other out. When someone falls behind, the person next to them steps in without being asked &#8212; not because a procedure says to, but because that is what you do for someone you know. They watch each other&#8217;s quality. They flag problems for each other. They teach each other tricks they have figured out. The team leader is the glue &#8212; the person who holds that social construct together, who knows each member&#8217;s strengths and struggles, who creates the conditions for the group to function as something greater than a collection of individuals.</p><p>In that social construct, the work happens. In that social construct, frontline intelligence is deployed. And it blooms. Not because someone mandated it. 
Because the human conditions for it exist &#8212; trust, familiarity, mutual obligation, pride in shared effort. The team is the soil. Everything else is what grows in it.</p><p>Now. Here is what happened.</p><p>The model year was getting long in the tooth. The new vehicle was coming, but not yet. In the meantime, market demand required more volume &#8212; not enough for a business case to add a third shift, but enough that something had to give. So the plant did what plants do under pressure: it increased operating hours by staggering shifts, splitting breaks, stretching the schedule. On paper, this maximized utilization of the asset. In practice, it destroyed the most fundamental unit of lean.</p><p>The team.</p><p>By staggering start times and break schedules, team members who were nominally part of the same team never overlapped long enough to be together. They did not start work at the same time. They did not take breaks at the same time. Extra relief people were added to the line so that each person could take their half hour individually &#8212; one at a time, rotating off and rotating back. So a break was no longer a team gathering. It was a solitary half hour. One person sitting alone with a cell phone, watching a YouTube video, eating whatever food they had brought &#8212; by themselves. Then clambering back to the line so the next person could be relieved. No conversation. No human contact. The food stopped being shared because there was no one to share it with. The stories stopped being told because there was no one to tell them to.</p><p>And on top of that, the plant had issued bone-conducting headphones &#8212; the kind that sit on your cheekbone and let you listen to whatever you want while you work. So now every team member was in their own acoustic world. They did not talk to each other while they worked because they were listening to their own music, their own podcasts, their own silence. 
They did not talk during breaks because they took breaks alone. They were, in every meaningful sense, strangers who happened to work adjacent stations and share a team leader&#8217;s name on a whiteboard.</p><p>Teams of strangers. Building cars together. Going home without ever having spoken.</p><p>And then one day, a team member looked at me and said: <em>Venki, what team?</em></p><p>That was the moment this phrase &#8212; The Handmaid&#8217;s Tale &#8212; crystallized in my mind. Because here was a plant that had team leaders, team boards, team meetings on the schedule, team metrics on the wall. Every artifact of the team concept was present and accounted for. And the team itself did not exist. The humans who were supposed to constitute it had been rendered invisible to each other by scheduling optimization and consumer electronics. The most foundational element of lean &#8212; people who know each other, work together, solve problems together, hold each other accountable, and take pride in what they build together &#8212; had been dissolved. Not by malice. By efficiency.</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>Five examples. Five layers of the same erosion.</p><p>Ownership of method, removed. Relationship to the part, severed. Quality reframed as detection instead of learning. Training reduced to certification theater. And finally, the team itself &#8212; the irreducible human unit on which every other element depends &#8212; dissolved into a collection of isolated individuals wearing headphones.</p><p>None of these changes happened because someone decided to abandon lean. Every one of them made perfect sense to the leaders who approved them &#8212; leaders who had not been present when the original systems were built and therefore did not understand <em>why</em> those systems existed. They saw inefficiency where there was investment. They saw redundancy where there was resilience. 
They saw manual processes where they imagined digital precision. They optimized for cost where the original architects had optimized for capability.</p><p>This is how the return to Taylorism happens. Not through a dramatic policy reversal. Not through a boardroom decision. It happens through a thousand small, reasonable, well-intentioned efficiency improvements, each one removing a thin layer of frontline ownership until one day you look at the floor and realize that what you have is not a lean system. It is a traditional system wearing lean&#8217;s clothes.</p><p>And it does not help that in a unionized environment, the union exists to fight Taylorist management. Even when the management is not consciously Taylorist, the union will fight it as though it is. Because historically, that is what management has always been. So the adversarial dynamic reinforces itself &#8212; management optimizes, the union resists, and the possibility of genuine collaboration around frontline intelligence disappears into the gap between them.</p><p>What I found when I returned to Lansing for my Walden Pond years was a Handmaid&#8217;s Tale. The rituals of lean, performed faithfully. The substance of lean, gone. Not because anyone killed it. Because the foundational understanding of deploying frontline intelligence no longer existed in the leaders who ran the system. Without that understanding, every decision defaults to the Taylorist model &#8212; tell them what to do, check whether they did it, discipline them when they did not. It is the gravitational pull of industrial management. Without continuous, conscious effort to resist it, you always fall back.</p><p>So my question to everyone who finds themselves at different stages of lean &#8212; in any industry, at any scale &#8212; is simple:</p><p>Do you have the artifacts of lean? Or do you have lean?</p><p>Do you have the binders, the boards, the metrics, the certifications, the team names on the whiteboard? 
Or do you have deployed frontline intelligence &#8212; people who own their methods, understand their parts, learn from their defects, train their own, and know each other&#8217;s names?</p><p style="text-align: center;">&#8226; &#8226; &#8226;</p><p>And if the answer is uncomfortable &#8212; if you recognize your own floor in these examples &#8212; then the harder question follows: how do you go back? How do you restore the foundation?</p><p>I have been wrestling with this for a long time. Decades now. And the question I keep circling is whether Western companies made a fundamental mistake in adopting lean &#8212; not because lean is wrong, but because they never fully understood the foundation that needed to be established first. The degree of effort and intention required to abolish the Taylorist way of thinking was massively underestimated. And since it was never abolished, lean was layered on top of it. The topsoil looked different. The bedrock never changed. And over time, as the examples above illustrate, the bedrock reasserted itself.</p><p>Does lean work in situations like that? My answer is it absolutely does. I do not have a better system to replace it with. I do not think one exists. But I have come to believe that before you can put lean back in &#8212; genuinely, with deployed frontline intelligence as its foundation and not its decoration &#8212; you have to first build something that was never built the first time around. A foundation. A different understanding of what labor is, what the floor is capable of, and what the relationship between the people who design the system and the people who operate it must look like.</p><p>That foundation will take years to build. It requires leaders who understand that frontline intelligence is not a tool to be deployed when convenient and withdrawn when the spreadsheet demands it. It is the operating system itself. 
Everything else &#8212; the standardized work, the quality systems, the training, the teams &#8212; runs on top of it. Without it, you get theater. You get a Handmaid&#8217;s Tale.</p><p>This is what I have been writing about. This is what <em>Already Paid For</em> is about. Not lean as a methodology. Lean as a commitment &#8212; to the intelligence, dignity, and capability of every person on the floor. Until that commitment is foundational, not decorative, we will keep building Handmaid&#8217;s Tales and wondering why the results do not follow.</p><p>The vestments were worn. The hymns were sung. The theology had regressed to the mean. Taylorism.</p>]]></content:encoded></item><item><title><![CDATA[Eating the Seed Corn]]></title><description><![CDATA[How AI Is Destroying the Formation It Needs to Replace]]></description><link>https://thelonggameforall.substack.com/p/eating-the-seed-corn</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/eating-the-seed-corn</guid><dc:creator><![CDATA[Dr. Venki Padmanabhan]]></dc:creator><pubDate>Tue, 07 Apr 2026 11:02:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d75711eb-3a72-45d2-87eb-70bad839a17b_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-HJ_QKG-tmlo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;HJ_QKG-tmlo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/HJ_QKG-tmlo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p>My mother tried to teach me my multiplication tables. I squirmed. I whined. I ran. 
Eventually I climbed onto the window ledge &#8212; which in India means perching behind metal rods that run vertically across the frame, the kind that keep you from falling three stories. My mother is a short woman. Up on that ledge, her hands couldn&#8217;t reach me. A small boy&#8217;s tactical victory.</p><p>She gave up. My mother was patient, but there are limits. She did something she almost never did: she drew my father into it. He was a reluctant conscript when it came to homework. That was her domain. But she was done, and I was on the window, triumphant.</p><p>So my father &#8212; tall, quiet, not a man who raised his voice &#8212; walked over to the window. And tall as he was, he didn&#8217;t need to climb. He just pulled up right in front of me and met my gaze. Right there, through the metal rods, eye to eye. The tactical advantage I&#8217;d won over my mother evaporated in an instant. He didn&#8217;t yell. He didn&#8217;t threaten. He just looked at me, steady, and said: belt it out.</p><p>And I did. Because it was the only way down with dignity.</p><p>Seven eights are fifty-six. Nine sevens are sixty-three. Twelve twelves are a hundred and forty-four. I belted them out through the metal bars to my father&#8217;s face, and somewhere between the sixes and the nines, the numbers stopped being punishment and started being pattern. Not because I wanted to learn. Because the struggle was the only door back into the house.</p><p>That was formation. I didn&#8217;t know the word then. I do now. And I&#8217;ve spent thirty-six years on factory floors watching it happen &#8212; and watching it get destroyed.</p><p style="text-align: center;">&#8212;</p><p>Here&#8217;s what nobody tells you about the multiplication tables. The arithmetic wasn&#8217;t the point. The architecture was. When you struggle through the recitation until the numbers become relationships instead of sounds, you&#8217;re building something underneath. Pattern recognition. Number sense. 
The ability to look at a column of figures thirty years later and <em>feel</em>, before you&#8217;ve checked the math, that something is off. The way you feel a wrong note on the sitar before your brain names the raga.</p><p>Then came the calculator. Fine. Nobody mourned long division. But the calculator only worked &#8212; I mean really <em>worked</em>, as a tool in the hands of a thinking person &#8212; if the person using it had already done the work by hand. The engineer who punches numbers into a spreadsheet and catches the error before the formula does? She can do that because the foundation was already poured.</p><p>Then came the computer. Same principle, higher abstraction. The finite element analysis is only as good as the engineer who can interrogate it &#8212; who knows what the beam deflection should roughly look like before the software renders it, because she once solved those problems with pencil and paper and cursing.</p><p>Now comes AI. And here is where the pattern breaks.</p><p>AI doesn&#8217;t stand at the window and say <em>belt it out.</em> AI opens the window and says <em>don&#8217;t worry, I&#8217;ll do the multiplication for you.</em> And the child never comes down. Never builds the foundation. Never gets the dignity of having done it himself.</p><p style="text-align: center;">&#8212;</p><p>Frank Landymore, writing in <em>Futurism</em> this week, reports what more than a dozen humanities professors are telling anyone who&#8217;ll listen: AI is not just enabling cheating. It is destroying their students&#8217; capacity to think. &#8220;Incapable of reading and analyzing, synthesizing data, all kinds of skills&#8221; &#8212; that&#8217;s Michael Clune, a literature professor at Ohio State, describing what walks into his classroom now.</p><p>The research backs him up. A Carnegie Mellon study from early 2025 found that knowledge workers who regularly used and trusted AI tools were losing their critical thinking skills. Not stagnating. 
<em>Losing.</em> An earlier study linked students who relied on ChatGPT to memory loss, procrastination, and worsening academic performance. And an MIT study that ran EEG scans on subjects writing essays with and without ChatGPT found that AI users showed the lowest levels of cognitive engagement during the tasks.</p><p>The lowest levels of cognitive engagement. The brain wasn&#8217;t struggling and failing. It was idling. The engine was barely running.</p><p>Dora Zhang, a literature professor at UC Berkeley, told Landymore she now talks to her students about AI &#8220;not under the framework of cheating or academic honesty but in terms that are frankly existential. What is it doing to us as a species?&#8221;</p><p>Good question. Let me offer an answer from the factory floor.</p><p style="text-align: center;">&#8212;</p><p>In manufacturing, we call it apprenticeship. And the people who understood it best were the Germans. The medieval <em>Z&#252;nfte</em> &#8212; the guilds &#8212; built an entire civilization around the idea that you don&#8217;t hand a young person a credential and call them ready. You put them under a master. You make them a <em>Geselle</em>, a journeyman, for years. You make them struggle with the material &#8212; wood, metal, cloth, numbers &#8212; until the knowledge isn&#8217;t in their head anymore. It&#8217;s in their hands. That system didn&#8217;t survive six centuries because it was romantic. It survived because it worked.</p><p>I call it formation. The period &#8212; months, years, sometimes a decade &#8212; during which a worker builds the mental models that allow them to see what the machine cannot. The operator who hears a bearing going bad before the vibration sensor picks it up. 
The quality engineer who looks at a run of parts and knows the die is drifting before the measurement confirms it.</p><p>A quality engineer I knew at GM &#8212; a man named Denny Hagman, who hired in as an hourly assembler and never got an engineering degree &#8212; once told me that if you want to know what&#8217;s wrong with a part or a process, talk to the person who performs that function three hundred times a day.</p><p>Three hundred times a day. That&#8217;s the multiplication tables of the shop floor. The repetition builds something that no sensor array and no algorithm can replicate.</p><p>And what the professors are watching happen in their classrooms is the interruption of that process. Not the replacement of a skill. The prevention of a skill from ever developing. The MIT EEG study isn&#8217;t measuring laziness. It&#8217;s measuring arrested development. The neural pathways that should be forming under cognitive load are sitting dormant. And there&#8217;s growing evidence that the window for certain kinds of critical thinking is narrow. Miss it, and you don&#8217;t get it back.</p><p style="text-align: center;">&#8212;</p><p>So here we are. The automation lobby&#8217;s argument has always been: workers can&#8217;t think, so replace them with machines. That&#8217;s the pitch. That&#8217;s the ROI slide.</p><p>But now the same companies making that argument &#8212; OpenAI, Microsoft, xAI &#8212; are pouring tens of millions into teachers&#8217; unions and school systems, handing out free AI tools to a generation of students, partnering with universities to embed their products into every assignment and every major. 
Elon Musk just launched what he calls the &#8220;world&#8217;s first nationwide AI-powered education program&#8221; in El Salvador &#8212; a million students across thousands of public schools, all using his Grok chatbot.</p><p>&#8220;These companies are giving these technological tools away partly because they&#8217;re hoping to addict a generation of students,&#8221; Eric Hayot, a comparative literature professor at Penn State, told Landymore. He&#8217;s not wrong. Handing out free AI tools to students is not unlike the subsidized Coke dispensers in school cafeterias &#8212; peddling sugar to children and calling it refreshment. Get them hooked early. Let the dependency do the selling later.</p><p>We know how that story ended. A Harvard study found that each daily serving of a sugary drink raised a child&#8217;s risk of obesity by sixty percent. Childhood obesity tripled in a generation. By the time 96 percent of American high schools had soda vending machines on campus, we had let commercial interests shape our children&#8217;s bodies in exchange for school revenue. It took decades of lawsuits, legislation, and parental outrage to claw the machines back out.</p><p>Now we&#8217;re doing it again. Same playbook. Different product. This time they&#8217;re not fattening the body. They&#8217;re starving the mind. You are letting their commerce harm your child.</p><p>I&#8217;ll say this plainly: if I see AI deployed in my grandchildren&#8217;s classroom as a substitute for thinking, I will volunteer full-time to homeschool them. Period. I didn&#8217;t spend thirty-six years watching intelligence get suppressed on factory floors to sit quietly while it gets suppressed in a second-grade classroom.</p><p>But it&#8217;s worse than the soda machines. Worse than addiction. It&#8217;s sterilization.</p><p>They are eating the seed corn.</p><p>In agriculture, seed corn is the portion of the harvest you set aside for next year&#8217;s planting. You don&#8217;t eat it. 
You don&#8217;t sell it. You protect it, because without it there is no next crop.</p><p>Human formation is the seed corn of the knowledge economy. Every engineer, every quality technician, every nurse, every teacher, every line worker who can hear the bearing going bad &#8212; they exist because someone, somewhere, made them do the work. Made them write the essay by hand. Made them solve the problem on paper. Made them do the function three hundred times a day until the knowing was in their hands, not just their head.</p><p>They exist because someone stood at the window and said: belt it out.</p><p>And now we&#8217;re feeding that seed corn into the chatbot. We&#8217;re consuming the formation to produce this quarter&#8217;s productivity gains. We&#8217;re optimizing the present by liquidating the future.</p><p>Why?</p><p>Greed. Not malice &#8212; I&#8217;ll grant them that. But greed that cannot see past the quarterly earnings call to the civilizational consequences. Train a human being: twenty years. Deploy a chatbot: twenty minutes. The ROI math is irresistible, until you realize you&#8217;ve sterilized the field and there&#8217;s nothing left to plant.</p><p style="text-align: center;">&#8212;</p><p>Now let me tell you something that might surprise you, given everything I&#8217;ve just said.</p><p>I love AI. I use it every day. I am wielding it right now like a samurai wields a sword &#8212; with precision, with intent, with thirty-six years of pattern recognition guiding every stroke.</p><p>I can do that because I am reasonably formed. My mother&#8217;s multiplication tables. A degree in mechanical engineering. A PhD in industrial engineering. Decades of getting beat up on the production floor and in the boardroom. The formation happened first. Then the tool arrived. And in my hands, it sings. It amplifies everything I already know. It lets me do in an afternoon what used to take a week. 
It is, without exaggeration, the most powerful instrument I&#8217;ve ever held.</p><p>But the sword is only as good as the swordsman. AI in the hands of a formed human is magnificent. AI in the hands of an unformed human is a crutch that prevents them from ever learning to stand.</p><p style="text-align: center;">&#8212;</p><p>Here&#8217;s what gives me hope, and it comes from Landymore&#8217;s reporting too.</p><p>Some professors are fighting back. They&#8217;re giving oral examinations. Requiring handwritten notebooks. Demanding that students show photographs of their notes. A faculty-run initiative called AgainstAI is advising professors on how to design around the technology. And several professors told <em>Futurism</em> they&#8217;re noticing more students pushing back &#8212; recognizing that they are, as Zhang put it, &#8220;the guinea pigs in this giant social experiment.&#8221;</p><p>Clune said something that I want to end with, because it&#8217;s the thing I&#8217;ve been trying to say for three years from the manufacturing floor: &#8220;There&#8217;s kind of defeatism, this idea that there&#8217;s no stopping technology and resistance is futile, everything will be crushed in its path. That needs to change.&#8221;</p><p>He&#8217;s right. It does need to change.</p><p>Because the argument was never about stopping technology. It was about protecting formation. It was about understanding that the value of a human being isn&#8217;t what they produce on a Tuesday afternoon &#8212; it&#8217;s the decades of struggle that gave them eyes the machine doesn&#8217;t have.</p><p>You don&#8217;t get that from a chatbot. You get it from the struggle.</p><p>And if we eat the seed corn &#8212; if we hand every student a Grok subscription and call it education, if we skip the formation and go straight to the deployment &#8212; then there will be nothing left to deploy. Nothing left to automate. 
Nothing left to extract.</p><p>Just machines talking to machines about what humans used to know.</p><p style="text-align: center;">&#8212;</p><p><em>This essay was prompted by Frank Landymore&#8217;s reporting in Futurism: &#8220;Professors Say AI Is Destroying Their Students&#8217; Ability to Think&#8221; (March 14, 2026). The research cited &#8212; Carnegie Mellon, MIT, and the student performance study &#8212; is drawn from Landymore&#8217;s article.</em></p><p><em>Dr. Venki Padmanabhan is a plant manager at Advanced Drainage Systems in Wooster, Ohio, and the author of the forthcoming book Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away. He writes The Long Game at thelonggameforall.substack.com.</em></p>]]></content:encoded></item><item><title><![CDATA[94% Belongs to the System: What Deming Proved and Management Forgot]]></title><description><![CDATA[Essay 4 of &#8220;The Evidence They Can&#8217;t Ignore&#8221; &#8212; A Series on Systematic Intelligence Suppression]]></description><link>https://thelonggameforall.substack.com/p/94-belongs-to-the-system-what-deming</link><guid isPermaLink="false">https://thelonggameforall.substack.com/p/94-belongs-to-the-system-what-deming</guid><dc:creator><![CDATA[Dr. 
Venki Padmanabhan]]></dc:creator><pubDate>Sun, 05 Apr 2026 11:02:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/26baf426-9396-4831-8fe1-309567255b8e_1600x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-zsdZSex3pF4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;zsdZSex3pF4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/zsdZSex3pF4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p>For three weeks I&#8217;ve been building an evidence case. NUMMI proved that the same workers produce opposite results under different systems. Taylor&#8217;s writings proved the suppression was by design. The suggestion gap proved that workers have intelligence to contribute when the system asks for it.</p><p>This week I want to introduce the man who gave us the statistical proof &#8212; the mathematical demonstration that most of what we attribute to individual workers is actually produced by the system they work inside. His name was W. Edwards Deming, and his most consequential finding was this:</p><p><strong>&#8220;I should estimate that in my experience most troubles and most possibilities for improvement add up to proportions something like this: 94% belongs to the system (responsibility of management), 6% special.&#8221;</strong></p><p>That&#8217;s from <em>Out of the Crisis</em>, page 315. Published in 1982. Still ignored by most of the organizations it was written to save.</p><h2>The Red Bead Experiment</h2><p>Deming didn&#8217;t just assert the 94/6 split. 
He demonstrated it &#8212; hundreds of times, in front of thousands of executives, using one of the most elegant teaching devices in the history of management science.</p><p>The Red Bead Experiment works like this. A bowl contains 4,000 beads &#8212; 3,200 white and 800 red. White beads represent acceptable work. Red beads represent defects. Six volunteers are designated as &#8220;willing workers.&#8221; Their job is simple: dip a paddle with 50 holes into the bowl and extract 50 beads. The goal is to produce white beads. Red beads are unacceptable.</p><p>The willing workers dip their paddles. Some get 7 red beads. Some get 15. Some get 10. Management &#8212; played by audience members Deming assigns to the roles &#8212; does what management always does. They praise the worker who got 7 red beads. They counsel the worker who got 15. They set targets. They threaten consequences. They create incentive programs. They rank the workers.</p><p>Then they run the experiment again. And the rankings change. The worker who had 7 red beads last round now has 13. The worst performer improves. The best performer gets worse. The variation is random, because it&#8217;s produced by the <em>system</em> &#8212; the ratio of red to white beads in the bowl &#8212; not by any characteristic of the workers.</p><p>Deming would let this play out over several rounds, watching the audience squirm as they recognized their own management behaviors in the absurd theater playing out on stage. Then he would deliver the lesson: every action management took &#8212; the praise, the punishment, the ranking, the incentives &#8212; was a response to variation that the workers did not cause and could not control. The only way to reduce the number of red beads is to change the system: change the ratio in the bowl. And that is management&#8217;s job.</p><h2>The Principle in Practice</h2><p>The Red Bead Experiment is a simplification, of course. Real work processes are more complex than a bowl of beads. 
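</p><p><em>A back-of-the-envelope simulation makes the lesson easy to verify for yourself. This is a sketch in Python: the 800-red/3,200-white bowl and the 50-hole paddle follow Deming&#8217;s setup, while the random seed, the worker labels, and the four rounds are illustrative choices of mine.</em></p>

```python
import random

random.seed(7)

# The system: a bowl of 4,000 beads -- 800 red (defects), 3,200 white.
# The ratio in the bowl is the only thing that determines average output.
BOWL = [1] * 800 + [0] * 3200
WORKERS = ["W1", "W2", "W3", "W4", "W5", "W6"]

def draw_paddle(bowl, holes=50):
    """One dip of the 50-hole paddle: a simple random sample, nothing more."""
    return sum(random.sample(bowl, holes))  # count of red beads drawn

# Four "days" of production. The workers never change; only chance does.
for day in range(1, 5):
    scores = {w: draw_paddle(BOWL) for w in WORKERS}
    ranking = sorted(scores, key=scores.get)  # fewest red beads first
    print(f"Day {day}: {scores} -> ranking {ranking}")
```

<p>Run it a few times with different seeds: every worker hovers around the expected ten red beads (20 percent of 50), and the ranking reshuffles on every dip. Any praise, counseling, or firing based on those rankings is a response to noise that the bowl, not the worker, produced.</p><p>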
But the principle scales. In any process, the variation in output comes from two sources: <strong>common causes</strong> (built into the system) and <strong>special causes</strong> (attributable to specific events or individuals). Deming&#8217;s lifetime of statistical analysis across hundreds of organizations led him to the 94/6 estimate &#8212; the vast majority of variation is common-cause, produced by the system.</p><p>Here&#8217;s what this means on a factory floor, and I&#8217;ve seen it confirmed thousands of times in 36 years.</p><p>When an operator on Line 1 produces more defects than an operator on Line 2, the instinctive management response is to conclude that the Line 1 operator is less skilled, less careful, or less motivated. The Taylorist system reinforces this: if workers are interchangeable executors, then differences in output must reflect differences in the workers.</p><p>But swap the operators. Put the Line 1 operator on Line 2 and the Line 2 operator on Line 1. If the defects follow the <em>line</em> rather than the <em>person</em>, you&#8217;ve just demonstrated common-cause variation. The problem is in the process &#8212; the tooling, the material, the fixture, the environmental conditions, the upstream quality &#8212; not in the worker.</p><p>I have done this swap more times than I can count. The defects almost always follow the line.</p><p>Deming&#8217;s phrase for what most organizations do instead was &#8220;tampering&#8221; &#8212; adjusting the process in response to common-cause variation, which actually makes things worse. Ranking workers by performance when the variation is system-driven is tampering. Incentive pay tied to output when the output is constrained by the system is tampering. Firing the &#8220;bottom 10 percent&#8221; when the bottom 10 percent is a statistical artifact of the system is tampering.</p><h2>&#8220;The Workers Are Handicapped by the System&#8221;</h2><p>Deming was not gentle about where responsibility lies. 
From <em>Out of the Crisis</em>:</p><p>&#8220;The workers are handicapped by the system, and the system belongs to management.&#8221;</p><p>This is not a statement of worker victimhood. It is a statement of statistical fact. If 94 percent of the variation is in the system, and management owns the system, then 94 percent of the performance problem is a management problem. Not a training problem. Not a motivation problem. Not a hiring problem. A management problem.</p><p>The implications are devastating for the standard operating model:</p><p><strong>Performance reviews</strong> that rank individuals are measuring system noise, not individual capability. The worker rated &#8220;below expectations&#8221; may be producing exactly the output the system is designed to produce.</p><p><strong>Training programs</strong> aimed at individual skill gaps will not improve outcomes if the gap is in the system design. You can train a worker to operate inside a broken process with exquisite technique, and the process will still produce defects.</p><p><strong>Incentive systems</strong> that reward individual output create competition among workers who are all operating inside the same constrained system. The winner isn&#8217;t more capable. They&#8217;re luckier &#8212; or they&#8217;ve figured out how to game the metric, which makes the system worse for everyone.</p><p><strong>Automation investments</strong> that replace workers without fixing the system will automate the common-cause variation into the new process. 
The robot will produce the same defects the worker produced, because the defects were never coming from the worker.</p><h2>The Connection to Intelligence Suppression</h2><p>Here is where Deming&#8217;s principle intersects with the suppression thesis I&#8217;ve been building.</p><p>If 94 percent of variation belongs to the system, then the people best positioned to <em>improve the system</em> are the people closest to it &#8212; the frontline workers who live inside it every day, who see its failure modes at the point of occurrence, who develop tacit knowledge about its behavior that no manager sitting in an office can possess.</p><p>But a Taylorist system &#8212; which was designed to remove worker intelligence from the process &#8212; cannot access that knowledge. It has, by design, eliminated the very feedback channel through which system improvement is supposed to flow.</p><p>This creates a vicious cycle:</p><p>1. The system produces 94 percent of the variation.</p><p>2. The system prevents workers from contributing intelligence that could improve it.</p><p>3. Management attributes the variation to the workers.</p><p>4. Management invests in replacing workers (automation) rather than improving the system (deployment).</p><p>5. The new automated system inherits the same common-cause variation, because the underlying system design was never fixed.</p><p>Toyota broke this cycle with the suggestion system, the andon cord, and the entire architecture of deployed frontline intelligence we&#8217;ve been discussing. Deming provided the statistical proof of <em>why</em> it works: the workers aren&#8217;t the problem, the system is, and the workers are the best source of intelligence for fixing it.</p><h2>Why Deming Failed in America</h2><p>Deming is revered in Japan. He is credited &#8212; with some justification and some overstatement &#8212; with catalyzing the quality revolution that transformed Japanese manufacturing in the postwar period. 
The Deming Prize, established in 1951, remains one of the most prestigious quality awards in the world.</p><p>In America, Deming was a prophet largely without honor until he was in his eighties. His famous NBC documentary appearance, &#8220;If Japan Can&#8230; Why Can&#8217;t We?&#8221; aired in 1980, when American manufacturing was already in crisis. Ford invited him in. GM invited him in. Many companies sent executives to his four-day seminars. They heard the message. Some of them implemented elements of it.</p><p>But the 94/6 principle was never widely adopted, because adopting it requires management to accept that <em>they</em> are the primary cause of the performance problems they&#8217;ve been blaming on workers. That is not a message most management teams are willing to hear. It is far more comfortable &#8212; and far more consistent with the Taylorist assumption &#8212; to believe that the solution is better workers, better training, better incentives, or better robots.</p><p>Deming died in 1993. Thirty years later, the management practices he identified as &#8220;tampering&#8221; remain standard operating procedure in the majority of Western organizations. Workers are still ranked. Performance reviews still attribute system variation to individuals. Incentive systems still reward individual output. And automation investments still bypass the system design problem that Deming proved was the real issue.</p><h2>The 94% and the Suppression Tax</h2><p>Last week I introduced the concept of the suppression tax &#8212; the economic cost of an operating system that prevents workers from contributing their intelligence. 
Deming&#8217;s 94/6 principle gives us a way to estimate the scale of that tax.</p><p>If 94 percent of the variation in your system is common-cause &#8212; built into the process design, the equipment, the materials, the methods, the information flows &#8212; then 94 percent of your improvement potential lies in <em>changing the system.</em> And if your operating model is Taylorist &#8212; designed to exclude frontline workers from system improvement &#8212; then you have structurally blocked access to the people with the most direct knowledge of where the system fails.</p><p>The suppression tax isn&#8217;t just the suggestions never submitted (though that&#8217;s part of it). It&#8217;s the process improvements never made, the quality problems never solved at the root, the safety hazards never identified before someone got hurt, the customer complaints never prevented. It&#8217;s the compound interest on decades of forgoing the intelligence that was right there, on the floor, waiting to be asked.</p><p>Toyota&#8217;s operating model minimizes the suppression tax by maximizing the flow of frontline intelligence into system improvement. Deming&#8217;s statistics explain why this works. The NUMMI experiment demonstrates the magnitude of the difference it makes. And Taylor&#8217;s writings explain why most organizations are still running the model that prevents it.</p><p><em>Next week: &#8220;The Good Jobs Evidence&#8221; &#8212; MIT Sloan professor Zeynep Ton has been documenting what happens when companies invert the low-cost labor model. The results will not surprise you by now, but the financial data will.</em></p><p><strong>Dr. Venki Padmanabhan</strong> is a plant manager with 36 years of global manufacturing leadership experience, including executive roles at GM, Chrysler, Mercedes-Benz, Royal Enfield, and Ather Energy. He holds a PhD in Industrial Engineering from the University of Pittsburgh. 
He is the author of the forthcoming <em>Already Paid For: Why Unlocking Frontline Intelligence Beats Automating Workers Away</em> and co-founder of the Capability Capital Institute. He writes The Long Game at thelonggameforall.substack.com.</p>]]></content:encoded></item></channel></rss>