Tech, business and everything In between
Technology

.digital feudalism and AI make past colonial extraction look cute

kamila
17th October 2025

As much as I love technology and everyone’s current spotlight-stealer, AI, we need to have a serious conversation about where this is heading.

Power structures don’t die. They just get new clothes. After centuries of supposed progress, we’re still organising ourselves like medieval fiefs. Three companies now control 77% of the enterprise AI market. OpenAI’s valuation jumped from $300 billion to $500 billion in five months.

And it tells you something about human nature we’d rather not admit: the greed and thirst for power that built feudalism never went anywhere. We just gave it algorithms. The economic forces driving AI concentration are so powerful that most of our “solutions” are wishful thinking dressed up as policy – designed to keep us calm (and carry on being exploited).

We’re past the point of prevention. The question now is what happens after.

Welcome to digital feudalism. Call it techno-feudalism if you want to sound fancy. It emerged with the internet and tech giants around the late 20th century – a shift from making things to owning things. Digital assets, data, platforms became the new castles. And most of us aren’t the lords in this story.

The exploitation you’re not seeing (or don’t bother fact-checking) 

While you’re melting over the latest AI agent releases and how easy it is to generate pointless videos – Jesus walking on water during an Olympic swimming race, say – that eat up unimaginable amounts of energy, most people won’t bother learning how these companies trained their models in the first place.

Surprise, surprise: a cheap workforce and lots of stolen, scraped assets. It’s always been the case.

To put things into better perspective – not that I have any illusion that it will keep you awake at night – Kenyan workers training ChatGPT earned $1.32 to $2.00 per hour reviewing the absolute worst content humans produce—child sexual abuse, murder videos, torture footage. OpenAI paid the contracting company $12.50 per hour per worker, but the workers got a WHOLE $2. Forget mental health support. In May 2024, 97 Kenyan AI workers wrote a letter to President Biden. They didn’t bother with corporate euphemisms – they called it what it was:

“a modern-day slavery.”

…guess if anyone cared?

Scale AI paid similar wages through their Remotasks platform, then shut down operations with an email that said – no joke – “fare thee well.” A colonial governor leaving after the extraction is complete. So there you go. Your favourite AI assistant is built on a foundation of trauma outsourced to the cheapest markets, then abandoned when convenient.

It is estimated that only 5% of Africa’s AI talent has access to computational power for research. Of that 5%, only 1% have actual GPUs. The remaining 4% make do with cloud credits of about $1,000 per month. The other 95%? They use Google Colab’s free tier, with its severe restrictions.

A single NVIDIA H100 GPU costs around $40,000 – many multiples of Kenya’s GDP per capita. African researchers literally wait in a queue until US and European users finish. That’s an actual compute queue, where African needs are explicitly deprioritised.

So history repeats itself, just rebranded as “digital colonialism”. Raw materials extracted from poor countries, processed abroad, finished products sold back at premium prices. Same playbook, just faster and harder to see. Actually, I’m lying. It’s crystal clear, but the people in power don’t bother doing anything about it. They get their piece of the cake, so it’s convenient for them. And we common people are too brain-rotted, our attention spans shortened (thank you, social media and ChatGPT!), too distracted by the sensational news within our own playground to care about bigger problems that will materialise in the (not so) distant future.

Why this time might actually be different

Every generation thinks its problems are “unprecedented”, I know. The labour movement that created unions took decades of organising, from the 1880s to the 1930s, with lots of people getting beaten and killed along the way. Decolonisation took twenty years of independence movements after World War II weakened the empires. AI went from “interesting research” to “three companies control everything” in less than ten years. ChatGPT launched in November 2022. By mid-2025, the oligopoly is pretty much complete. But speed isn’t even the scariest part.

Previous inequality structures exploited your labour or extracted resources. They didn’t rewire how your brain worked. The newest AI agents and upcoming neurotech can. In fact, frequent AI tool usage shows a “significant negative correlation with critical thinking abilities”. (I wrote more about that in my previous post about companies storming your last private space – your brain.)

It’s not that AI makes you dumb. It’s cognitive offloading creating a cascade. As the old saying goes: unused muscle atrophies. Here, initial convenience leads to skill atrophy, which leads to dependency, which leads to an inability to function without AI. You outsource thinking to AI, your brain stops practising those skills, and eventually you genuinely can’t do it yourself anymore. Let he who doesn’t use AI for simple posts or even grammar checks out of laziness cast the first stone.

Microsoft Research found something even weirder: people with higher confidence in AI showed LESS critical thinking. The more you trust it, the less you think. And the effect is strongest in younger users (17–25 year-olds). What kind of future do you think they’ll be building for the rest of us?

So in the meantime, we’re building a system that makes people cognitively dependent on their digital landlords. You can free slaves, break up monopolies, redistribute land. How exactly do you “undo” cognitive atrophy across an entire generation? Nobody will want to go back to writing reports from scratch. And certainly not to the Clippy era.

The people who want this

PayPal “mafia” member Peter Thiel wrote in 2009:

“I no longer believe that freedom and democracy are compatible.”

He’s also said conservatives should let liberals win, overreach, and then stage a military coup. At Stanford in 2012, he taught students that companies are better run than governments because they have a single decision maker — a dictator, basically.

Thiel’s protégé, JD Vance, is now Vice President of the United States. Thiel bankrolled $15 million for Vance’s 2022 Senate campaign, then lobbied Trump to pick him as VP. One journalist said during the 2024 RNC:

“It’s Peter Thiel’s party now.”

Then there’s Elon Musk, who spent $242.6 million to elect Trump and now co-heads a government agency. 150 billionaire families spent $1.9 billion on the 2024 election – 70% to Republicans. Tech companies increased AI lobbying by 141% year-over-year. OpenAI went from spending $260,000 on lobbying in 2023 to $1.76 million in 2024. Seven-fold increase in one year.

I’m not into politics, but these numbers make you think twice about the level of influence a few tech oligarchs have managed to grab. People who explicitly believe democracy is the problem just go governance shopping. If you can pay, you can play. The technology itself helps them concentrate power.

AI optimises for engagement (more data for them), predicts your behaviour (more lock-in), and automates work (less labour leverage for you). Facebook’s “AI Backbone” makes 6 million predictions per second about user behaviour. YouTube’s 2024 breakthrough in “intent prediction” led to a 0.05% increase in daily users, which they called “one of the most significant improvements” in recent history.

We’ve moved from surveillance to what researchers call “behavioural actuation”: systems that don’t just watch, but actively intervene to modify your behaviour in real time.

Is there even hope?  

So what if you can’t fix the system? What if the concentration really is inevitable? What if we’re actually heading for digital feudalism and there’s no stopping it? Then your options are pretty simple: be a peasant or become a lord. Actually, I’m lying. Most of us won’t be able to go past “peasant.” And of course people love reading articles about inequality. But what they really want to know is how to end up on the winning side of it. So before we get to the “what should society do” solutions, let’s talk about what’s actually happening: millions of people are quietly positioning themselves within the feudal structure rather than fighting to dismantle it.

The new hierarchy – where do you actually fit?

Source: Iman Najafi

Iman Najafi made an amazing graph (above) showing where each of these entities – versus you – is positioned:

Kings: OpenAI, Anthropic, Google, Meta, Microsoft. They own the foundation models, the compute infrastructure, the data moats. Forget about becoming a king. That window closed before you even stood a chance.

Lords: Companies building on foundation models, but owning valuable assets—proprietary datasets, unique applications, distribution channels, regulatory moats. Maybe achievable if you move extremely fast.

Vassals: Service providers with steady income from lords but owning nothing fundamental. Consultants, agencies, tool builders dependent on API access. Comfortable but precarious.

Peasants: Gig workers, data labellers, content creators whose work trains AI, people whose skills AI automates. You own nothing. You are the product, silly.

In both historical and digital feudalism, upward mobility depends entirely on asset ownership, not skills or credentials. Medieval serfs who became skilled craftsmen were still serfs. Merchants who accumulated assets could become lords. The same pattern holds today. Your coding skills? Your education? Work ethic? All service-level stuff that AI is coming for. The only thing that protects you is owning assets that generate value independent of your labour.

What assets did you build today? Nada? Then you’re on the peasant track, whether you make £50K or £250K.

Watch behaviour, not rhetoric.

While tech oligarchs publicly discuss “AI ethics” and “responsible innovation,” privately they’re executing a different strategy. They’re building proprietary datasets that aren’t scrapeable. Not contributing to open datasets; creating private moats. The data you can’t access is more valuable than the data you can. They’re forming strategic partnerships with infrastructure owners.

Medieval merchants didn’t marry into nobility, but the ones who formed strategic partnerships with lords could eventually become lords themselves.

You cannot compete with OpenAI in foundation models, but there are niches the giants haven’t conquered; heavily regulated industries where compliance creates moats, specific languages where general AI underperforms, physical-world applications requiring specialised interfaces. The strategy is finding markets where being smaller is an advantage.

They’re obsessing over recurring revenue. One-time projects and hourly billing are peasant economics. Lords have subscriptions, licensing fees, marketplace takes. Building things that generate money while they sleep and get harder for customers to leave. High switching costs are your friend.

They’re more aggressive about intellectual property than feels comfortable. Anything you build that isn’t protected will be copied by someone with more resources. The nice guys who share everything openly? They’re contributing to commons while VCs and corporations enclose it. Open source is beautiful in theory. In practice, Meta open-sources Llama to hurt OpenAI while keeping their real moat (user data and distribution) proprietary. So don’t mistake strategy for altruism.

Of course, this strategy only works for people with certain advantages. Capital to invest in asset creation. Time to build without immediate income pressure. Technical skills or domain expertise. Networks and access to decision-makers. Education and cultural capital.

The Kenyan data labeller earning peanuts can’t follow this advice. Neither can the factory worker whose job got automated. Single parents working two jobs don’t have time for “strategic asset creation.” This is a strategy for the professional class trying not to fall into the working class as AI eats their jobs. It’s not a solution for everyone. It’s a lifeboat, and there aren’t enough seats.

The game theory trap

So here is the prisoner’s dilemma in action: when everyone focuses on individual positioning instead of collective action, we guarantee the feudal outcome. Every talented engineer joining Big Tech makes the oligopoly stronger. Every entrepreneur (and his bosses, the VCs) optimising for acquisition chooses to become a lord rather than stay independent.

Individually? Positioning yourself within the feudal structure is rational. Collectively? It makes the structure unbreakable. The people at the top know this. They want you thinking “I can’t beat them so I’ll join them” rather than “we could organise and change this.”

Those Silicon Valley mantras about “disruption”? They were never about challenging power. Always about capturing it. They want you participating in the capture, feeling like a winner for getting a slightly better spot in a fundamentally broken structure.

So what do you do? Build assets to protect your family while contributing to the problem? Reject it and risk ruin while everyone else races ahead? There’s no clean answer. The system has always been designed so individual survival conflicts with collective liberation.

Are we screwed?

Concentrated power almost never voluntarily disperses. It takes catastrophe, war, or sustained mass resistance over decades. The Black Death broke feudalism by killing so many people that labour became scarce. The World Wars destroyed enough wealth that redistribution became possible. The Great Depression created the political will for the New Deal.

Catastrophe is the most reliable mechanism for breaking concentrated power. And it’s scary to think that nothing changes until disaster forces it. The companies racing to build superintelligent AI have no idea how to control it. But they pour money into the race anyway.

Stuart Russell, professor of computer science at Berkeley, co-founder of the International Association for Safe and Ethical AI, put it bluntly: we have “no idea” where large language models will take us. LLMs are trained to imitate humans, and in the process “we suspect that they absorb human-like goals. … [But] this is a fundamental error.”

Then he said this:

“Not only may the bus of humanity be headed towards a cliff, but the steering wheel is missing and the driver is blindfolded.”

The companies building this are competing to see who can drive faster. OpenAI went from “we need to be careful about AGI” to a $500 billion valuation in five months. You think they’re slowing down? If they did, someone else would get there first.

The incentive structure now looks like this: move fast (break things, big time), capture the market, worry about catastrophic risk later. Maybe. If there’s time. If it’s profitable. (But in the meantime, prep for doom and build yourself a bunker.)

Russell warned we’re building AI systems more intelligent than humans that could destroy civilisation. His hope? If beneficial coexistence proves impossible, maybe superintelligent AI will recognise this and gracefully withdraw, “allowing us to shape our own future.”

That’s the optimistic case. That superintelligent AI will be nice enough to fuck off once it realises it’s going to kill us all. Is that really what we’re betting on?

Source for the Thiel quote: https://www.linkedin.com/posts/allison-dolan-5356522_i-no-longer-believe-that-freedom-and-democracy-activity-7224757131948818435-tZ2l?utm_source=share&utm_medium=member_desktop


Kamila Hankiewicz

Entrepreneur / Host

Creativity is born in chaos. No matter if it’s software, a podcast or a kitchen. I share what I learn while building untrite.com, oishya.com, and hosting brilliant people on my podcast, Are You Human.