
Musings Report 2024-38  9-21-24  How Is AI Changing Our Media--and Our Trust?

You are receiving this email/post because you are a subscriber/patron of Of Two Minds / Charles Hugh Smith.

How Is AI Changing Our Media--and Our Trust?

A great many claims are being made about how AI will revolutionize our lives, and the effects are already visible in a number of realms. I've written many essays over the past 18 months addressing a wide range of AI-related topics. Here are a few from the long list:

There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk  8/8/24

Will Hollywood and the Music Industry Survive the Super-Abundance of Original AI Content?  7/6/24

Who Error-Corrects AI?  2/28/24

Let's consider the changes visible in media: how media is created, curated, distributed and controlled.

Natural language machine-learning tools such as ChatGPT (i.e. large language models, LLMs) are trained on large amounts of data to assemble statistical relationships and mimic human speech patterns. These tools can summarize topics and concepts, and respond to queries about specific subjects, from programming to history. 

They appear authoritative but this authority is illusory. They "hallucinate" fictitious "facts," and place quotation marks around text as if it were a quote from a source, when it is only text generated by the program.  Correspondent Bob W. shared the response he received from OpenAI about ChatGPT's use of quotation marks:

"You're correct in noting that ChatGPT, while capable of generating text that appears as direct quotations from specific works, does not access external databases or the internet to pull direct quotes from texts. Instead, it generates responses based on patterns and information it learned during its training process. This means that while ChatGPT can produce text that resembles quotations and attribute them to specific works or authors, these responses are generated based on its understanding and are not pulled directly from the source materials.

This characteristic of ChatGPT is part of its design as a language model that generates responses based on a vast corpus of pre-existing text data up to its last training cut-off in September 2021. As such, it's important to verify any 'quotations' provided by ChatGPT against the original source material, especially for critical or scholarly work."

Bob added this conclusion: "I would think that ChatGPT programming would include knowing the meaning of quotation marks."

That this inherently misleading trait is not readily visible to users is disturbing.

These tools can generate natural language texts, from articles to entire books cobbled together from the program's enormous databases.

I recently experimented with another AI tool, Google's NotebookLM, which generates a podcast conversation between two AI-generated hosts discussing whatever text you upload.

I uploaded my essay 2024, A Year of No Significance. Here is the AI-generated podcast.

The voices are remarkably natural, though they're a little too perfect: no pauses, stumbles, etc.

Their discussion stays on topic, but it includes references and contexts that aren't in the essay. In other words, the topics are interpreted and recontextualized in accordance with the programming. 

Just as these programs "learn" by scanning texts composed by humans, we humans "learn" about the limits and implicit design of these programs by experimenting with them.

My second AI-generated podcast was based on my essay The Impossible Dream: 70 Million Boomers Retire in Style. Here is the AI-generated podcast.

We can discern a few things about the program's design. One is that it generates podcasts of pre-set durations: short essays generate a podcast of X length, longer essays generate a podcast of about 11:30 minutes. The program fills this time with "fluff" as needed.

The program is also designed to generate a "positive ending," because this is America, and there must always be a solution / positive outcome.  

I did mention investing in our own health as the most cost-effective option, but there really isn't any way to sugarcoat the impossibility of funding 70 million retirees if a substantial percentage need caregivers.

This reveals the way in which AI tools can subtly contextualize content to suit a pre-programmed agenda that isn't visible to users.

That this agenda could have political dimensions is obvious.

Now that these AI tools are generating texts, audio and video on a mass scale, we can discern structural problems.

One is the potential for hallucinated "facts" and false attributions (text placed in quotes that is not an actual quote), and the subtle recontextualization of topics and data to fit pre-programmed norms.

Another is what I call "the dragon eats its own tail." (Or in this case, "the dragon eats its own tale.") AI programs scooping up source text, audio and video are now scooping up AI-generated content that is not authoritative, nor is it clearly identified as of dubious origin, i.e. AI generated.

So dubious, degraded and outright false content is recycled as authoritative, weakening the entire foundation of these tools. There is no easy fix, as I discuss in Who Error-Corrects AI? (2/28/24).

The second issue is the centralization of these tools and of content distribution. To understand how centralization (concentration of ownership and control) has changed media, we need to return to the days before social media and the Big Tech monopolies, to the early Internet circa 2000.

In its initial phase, the World Wide Web (a.k.a. the Web or the Internet) was a self-organizing public utility. The cost of Internet access was standardized like a utility's: everyone, rich or low-income, paid the same monthly access fee. Though the government (and private governance bodies such as ICANN) provided a basic scaffolding of standards, ownership and control of the content posted on the Web were private: individuals, enterprises, agencies and organizations all paid to host a website (URLs, DNS service, servers or hosting services) and posted their own content.

This level playing field was open to all, and hence self-organizing: sites were linked by their owners / managers to email accounts, bulletin boards and other websites of their own choosing.

In this phase, search was what I call organic, meaning search engines prioritized results solely by relevance. Google's innovation was PageRank, which ranked results by incoming and outgoing links. Organic search was not profitable, and so Google and other search engines were written off as intrinsically low-margin enterprises.

In the early 2000s, this self-organizing utility model was replaced by a far more centralized and profitable structure. The Internet we now have is dominated by a handful of immensely powerful corporations devoted not to public utility but to the maximization of profit.

Through search and social media, these mega-monopolies control what content is displayed, prioritized, deleted or buried, effectively shaping the entire media landscape by means that are hidden from us (algorithms) and for purposes / agendas that are equally invisible.

It may seem innocuous that automated podcasts, videos and texts are preprogrammed to generate a "positive ending," but we are naive if we reckon that's the limit of the recontextualizing that's occurring beneath the surface. 

The third critical issue is social trust, a topic I explored in Our AI-Powered Post-Truth, Post-Trust Unraveling (10/21/23).

With untrustworthy sourcing, invisible algorithms and agendas, and deepfake video and audio a few clicks away, what happens to our society-wide trust in the "institutions" of media? I discussed the social decay that results when a high-trust society erodes into a low-trust society.

A Low-Trust Society Is an Impoverished Society (March 8, 2024)

How do we sort the wheat from the chaff? How do we verify or authenticate trustworthy sources?

It's tempting to hope that an AI tool will be developed that identifies every bit of AI-generated content, but how can we trust this AI tool, given the absence of trustworthy AI and the self-serving nature of tech / media monopolies?

What's already apparent is that many people are turning away from trusting institutions and corporations in favor of individual humans they trust. This is a natural response to the loss of trust in self-serving corporations and institutions, but there are two problems with relying on individual content creators.

One is that few individuals have the means to collect or verify data, so we all rely on data collected by state agencies or institutions. If individuals are all referencing the same data, and that data has been massaged to align with invisible agendas, then even trusted individuals are flying blind.

The other problem is that individuals can be "bent" by funding that is invisible to us. This is already an issue in scientific and academic research, where front organizations that cultivate a facade of objectivity are funded by concentrations of wealth and power that benefit from influencing studies and research.

In other words, the concentration of wealth, power and control is the core source of decaying trust. Instead of a free-for-all, we have a handful of monopolies controlling what we read, see and hear via invisible algorithms and contextualizing.

There is no real path back to a high-trust social order other than breaking up every monopoly-cartel, starting with the Big Tech / AI monopolies dominating the media.


Highlights of the Blog 


I Want the "Rich Guys Internet" 9/19/24

2024, A Year of No Significance 9/17/24

The Impossible Dream: 70 Million Boomers Retire in Style  9/15/24


Best Thing That Happened To Me This Week 

My jury duty was cancelled, as the case was settled. Whew.


What's on the Book Shelf


Beyond the Stable State by Donald Schon, recommended by Michael M.


From Left Field

NOTE TO NEW READERS: This list is not composed of articles I agree with or judge to be correct or of the highest quality. It is representative of the content I find interesting as reflections of the current zeitgeist. The list is intended to be perused with an open, critical, occasionally amused mind.

Many links are behind paywalls. Most paywalled sites allow a few free articles per month if you register. It's the New Normal.


She Survived the Maui Wildfires. She Couldn’t Survive the Year After.

How to Be Truly Free: Lessons From a Philosopher President--Pepe Mujica, Uruguay’s spartan former president and plain-spoken philosopher, offers wisdom from a rich life as he battles cancer. (via Stuart L.)

Q&A: Jose Mujica on Uruguay’s secular history, religion, atheism and the global rise of the ‘nones’

World scientists’ warning: The behavioral crisis driving ecological overshoot.

Human ‘behavioral crisis’ at root of climate breakdown, say scientists.
A new paper claims that unless demand for resources is reduced, many other innovations are just a sticking plaster. "The material footprint of renewable energy is dangerously underdiscussed. These energy farms have to be rebuilt every few decades – they’re not going to solve the bigger problem unless we tackle demand."


The Bronze Age Collapse.

The CrowdStrike Outage and Market-Driven Brittleness

The Hacking of Culture and the Creation of Socio-Technical Debt

A decade after his death, French sociologist Pierre Bourdieu stands tall

A eulogy to Koda Farms from a Japanese Central Valley farmer who grew up on their rice

How a Real Estate Boom Drove Political Corruption in Los Angeles

Prince, Villanova Junction, live 2011 (6:35 min) -- Prince's cover of a classic Jimi Hendrix instrumental....

"A child is a fire to be lit, not a vase to be filled." Francois Rabelais

Thanks for reading--
 
charles