
Musings Report 2025-47  11-22-25  Knowing, Doing, Fulfillment, Power, Collapse

You are receiving this email/post because you are a subscriber/patron of Of Two Minds / Charles Hugh Smith.

Knowing, Doing, Fulfillment, Power, Collapse

This sounds like a koan, and it is. Let's dig in.

Knowledge isn't just knowing--the ultimate manifestation of knowledge is doing: knowing how to make, create, and fix, and learning from experience, a process that constantly expands our ability to do more.

Knowing an "answer" doesn't give us the ability to do something useful. Looking at a recipe doesn't give us the knowledge of how to cook. Having AI generate a derivative song doesn't give us the ability to play or compose music. Looking at a blueprint doesn't give us the knowledge of how to build a house.  The knowledge of doing is experiential--knowing is not enough, we must learn by doing.

The process of doing assembles tacit knowledge, experiential knowledge that cannot be fully formalized because it is assembled by both halves of our minds, the intuitive and the rational.

AI provides "answers," but this is not a substitute for the knowing that enables doing, which then enables mastery.

Humans are not machines, and "value" cannot be reduced to financial numbers. Humans are social beings because isolation offers little selective advantage; working together in groups offers selective advantages.

What's valuable is thus socially defined: making ourselves useful to others gives us purpose, meaning, a positive social role and a positive identity / self-respect / self-worth. 

Without a socially useful role, we wither, and are prone to self-destructive spirals and depression or anti-social behaviors: Idle hands are the devil's workshop.

Fulfillment as individuals and as social beings arises not from idleness / convenience but from applying the knowing of doing.

The vision of fulfillment offered by AI is the exact opposite: Nirvana is having nothing to do because robots and AI will do all the work, and we will have limitless conveniences and leisure--a PR cover for idleness.

This vision of fulfillment--of having nothing to do but play all day--is at its core childlike, the child's idea of happiness.  But once we grow up, a life of purposeless, socially useless idleness is not fulfilling or healthy; it's debilitating. 

In the Silicon Valley vision of AI supremacy, we will all buy a robot that will do all our cooking and cleaning so we will be blissfully free to stare at screens all day, "entertaining" ourselves with endless AI-generated content and social media scrolls.  That all this "entertainment" is debilitating and deranging doesn't matter; the point for the AI boosters is that it's profitable.

There is no fulfillment possible in watching a robot prepare a meal for us. The fulfillment, the satisfaction, and yes, the joy, is in harvesting the green beans ourselves, julienning them, and then preparing them for the table we set ourselves.

A world in which we stare at screens while robots do all the work is a prison, a drip of Soma, a lifeless life devoid not just of fulfillment but of independence, self-reliance and power, for once we no longer know how to do anything essential and useful ourselves, we are dependent, which is another way of saying we're powerless.

As I have noted here many times, self-reliance is the foundation of agency--control of the direction of our lives--and power.

AI concentrates power in the hands of the few, turning everyone who comes to depend on the Soma of "answers" and robots into a ring-fenced herd that no longer has the power to act or think independently.

To the degree that knowledge is power, AI is the concentration of that power, because AI curates what is considered knowledge.  And since AI is a model, and all models leave out things the model builders don't even realize they left out--because they're embedded in a cultural mindset of what qualifies as "knowable" and "knowledge"--all AI is deeply, profoundly, inescapably coercive at the ground level of what we take to be "known" and therefore "true."

The models of "intelligence" and "knowledge" generate content that reinforces the limitations and biases of the model. The inevitable outcome of this self-reinforcing loop is "model collapse": the model ceases to be anything other than a reflection of its own limitations and biases, presented as "facts, answers and knowledge."

This article explains just how this curation, editing and bias works. It is paywalled, but it's well worth reading if you can access a free version. I have excerpted some key points below.


What AI doesn’t know: we could be creating a global ‘knowledge collapse’: As GenAI becomes the primary way to find information, local and traditional wisdom is being lost. And we are only beginning to realise what we’re missing.

To understand how certain ways of knowing rise to global dominance, often at the expense of Indigenous knowledge, it helps to consider the idea of cultural hegemony developed by the Italian philosopher Antonio Gramsci.

Gramsci argued that power is maintained not solely through force or economic control, but also through the shaping of cultural norms and everyday beliefs. Over time, epistemological approaches rooted in western traditions have come to be seen as objective and universal. This has normalised western knowledge as the standard, obscuring the historical and political forces that enabled its rise. Institutions such as schools, scientific bodies and international development organisations have helped entrench this dominance.


 In her book Decolonizing Methodologies (1999), the Māori scholar Linda Tuhiwai Smith emphasises that colonialism profoundly disrupted local knowledge systems – and the cultural and intellectual foundations on which they were built – by severing ties to land, language, history and social structures. Smith’s insights reveal how these processes are not confined to a single region but form part of a broader legacy that continues to shape how knowledge is produced and valued. It is on this distorted foundation that today’s digital and GenAI systems are built.

I recently worked with Microsoft Research, examining several GenAI deployments built for non-western populations. Observing how these AI models often miss cultural contexts, overlook local knowledge and frequently misalign with their target community has brought home to me just how much they encode existing biases and exclude marginalised knowledge.

The work has also brought me closer to understanding the technical reasons why such inequalities develop inside the models. The problem is far deeper than gaps in training data. By design, LLMs also tend to reproduce and reinforce the most statistically prevalent ideas, creating a feedback loop that narrows the scope of accessible human knowledge.

Why so? The internal representation of knowledge in an LLM is not uniform. Concepts that appear more frequently, more prominently or across a wider range of contexts in the training data tend to be more strongly encoded. For example, if pizza is commonly mentioned as a favourite food across a broad set of training texts, when asked “what’s your favourite food?”, the model is more likely to respond with “pizza” because that association is more statistically prominent.

More subtly, the model’s output distribution does not directly reflect the frequency of ideas in the training data. Instead, LLMs often amplify dominant patterns or ideas in a way that distorts their original proportions. This phenomenon can be referred to as “mode amplification”.

And beyond merely reflecting existing knowledge hierarchies, GenAI has the capacity to amplify them, as human behaviour changes alongside it. The integration of AI overviews in search engines, along with the growing popularity of AI-powered search engines such as Perplexity, underscores this shift.

As AI-generated content has started to fill the internet, it adds another layer of amplification to ideas that are already popular online. The internet, as the primary source of knowledge for AI models, becomes recursively influenced by the very outputs those models generate. With each training cycle, new models increasingly rely on AI-generated content. This risks creating a feedback loop where dominant ideas are continuously amplified while long-tail or niche knowledge fades from view.


The AI researcher Andrew Peterson describes this phenomenon as “knowledge collapse”: a gradual narrowing of the information humans can access, along with a declining awareness of alternative or obscure viewpoints. As LLMs are trained on data shaped by previous AI outputs, underrepresented knowledge can become less visible – not because it lacks merit, but because it is less frequently retrieved or cited. Peterson also warns of the “streetlight effect”, named after the joke where a person searches for lost keys under a streetlight at night because that’s where the light is brightest. In the context of AI, this would be people searching where it’s easiest rather than where it’s most meaningful. Over time, this would result in a degenerative narrowing of the public knowledge base.
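The recursive dynamic the excerpt describes can be sketched in a toy simulation (my own illustration, not code from the article): a "model" is just a frequency table over ideas; each generation it samples from a slightly sharpened version of itself (a stand-in for mode amplification) and then refits on its own output. Any idea that fails to appear in a sample vanishes permanently--the long tail can only shrink. The specific numbers (50 ideas, a Zipf-like tail, a sharpening exponent of 1.2, 20 cycles) are assumptions chosen for illustration.

```python
import math
import random
from collections import Counter

def entropy_bits(dist):
    """Shannon entropy of an {idea: probability} table, in bits."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

def one_generation(dist, sample_size, sharpen=1.2):
    """Sample from the current 'model', then refit it on its own output.

    sharpen > 1 mimics "mode amplification": the model over-weights its
    most probable ideas relative to their true frequency in the data.
    """
    ideas = list(dist)
    weights = [dist[i] ** sharpen for i in ideas]
    total = sum(weights)
    draws = random.choices(ideas, weights=[w / total for w in weights],
                           k=sample_size)
    counts = Counter(draws)
    # Refit: any idea that never appeared in the sample is gone for good.
    return {i: c / sample_size for i, c in counts.items()}

random.seed(0)
# 50 "ideas" with a Zipf-like long tail, a stand-in for the web's spread
# of popular versus niche knowledge.
raw = {k: 1 / (k + 1) for k in range(50)}
z = sum(raw.values())
p = {k: v / z for k, v in raw.items()}

start_ideas, start_h = len(p), entropy_bits(p)
for _ in range(20):  # twenty train-on-your-own-output cycles
    p = one_generation(p, sample_size=200)

print(f"ideas surviving: {len(p)} of {start_ideas}")
print(f"entropy: {start_h:.2f} -> {entropy_bits(p):.2f} bits")
```

Even with the sharpening turned off (sharpen=1.0), finite sampling alone erodes the tail, which is the degenerative narrowing Peterson calls "knowledge collapse"; sharpening merely accelerates it.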

Allow me to summarize:
AI inevitably generates model collapse and knowledge collapse.
AI extinguishes fulfillment by extinguishing the tacit knowledge of doing.
This concentrates power in the hands of those who own and control the AI gearing, while disempowering everyone who accepts AI's dripline of "answers" and its implicit promise that purposeless idleness is fulfillment--when it is actually a prison of debilitating powerlessness.

This gearing is busy embedding itself in every layer of power as the means of concentrating power.


There Is Only One AI Company. Welcome to the Blob: As Nvidia, OpenAI, Google, and Microsoft forge partnerships and deals, the AI industry is looking more like one interconnected machine. What does that mean for all of us?

In conclusion: this is not to say there isn't some utility in narrowly applied, tightly constrained AI tools. The point here is that AI is about power, not knowledge, intelligence or human fulfillment.


Highlights of the Blog 


Is AI a Catalyst for Growth--or For Collapse?  11/22/25

What We've Lost  11/18/25


Best Thing That Happened To Me This Week 

BBQ grilled steaks...

and potato salad, shared with friends...


What's on the Book Shelf


After the Empire: The Breakdown of the American Order (2006) Emmanuel Todd


From Left Field

NOTE TO NEW READERS: This list is not composed of articles I agree with or that I judge to be correct or of the highest quality. It is representative of the content I find interesting as reflections of the current zeitgeist. The list is intended to be perused with an open, critical, occasionally amused mind.

Many links are behind paywalls. Most paywalled sites allow a few free articles per month if you register. It's the New Normal. At a reader's suggestion, I'm identifying links that are free/not paywalled.


Millions of Kids Are on ADHD Pills. For Many, It’s the Start of a Drug Cascade. Powerful psychotropic drugs are often the next step, even though their combined effects in young children haven’t been studied closely. ‘I was living in a body hijacked by the medication.’ (wsj.com, paywalled)

The Loneliness Epidemic Isn't About Phones, It's About Algorithms. (free)

The Middle Class Is Buckling Under Almost Five Years of Persistent Inflation. Workers growing tired of economy in which everything seems to get more expensive. (wsj.com, paywalled)

I'm 49 years old and was laid off from my six-figure sales job. I now make $52,000 in an entry-level role where I work alongside college grads. (free)

Feeling Great About the Economy? You Must Own Stocks. Investors’ rosy feelings about their stock market gains are powering spending—but it’s a different story for everyone else. (wsj.com, paywalled)

A Bartender And A School Worker Make $300K A Year. They Want To Know How Others Cope With People Assuming They're Struggling? (free)

Tech should help us be creative. AI rips our creativity away: AI-generated songs are topping Spotify charts. This isn’t about the ‘democratization’ of art – it’s about scale. (theguardian.com, paywalled)

The Real Threat Isn’t AI. It’s That Our Jobs Were Never Worth Doing. (free)

A groundswell of activism takes hold in the US: ‘We are a bridge to the future’ (theguardian.com, paywalled)

Experience: I found an old Rembrandt in a drawer. (theguardian.com, paywalled)

"We're Losing Our Community": Short-Term Rentals Are Ruining Three Rivers, Residents Say. (free)

Mystery Hacker Used AI To Automate 'Unprecedented' Cybercrime Rampage. (free)

I just graduated from Yale. Now, I'm back with my family in low-income housing, and I'm not sure where I belong. (free)

My parents think of Vietnam as the country they escaped. I see it as the place I want to live. (free)

"Knowing is not enough, we must apply. Willing is not enough, we must do."  Bruce Lee

Thanks for reading--
 
charles