Toward a New Media Ecosystem

With the arrival of "deepfakes" not only in photos but also in video and audio, AI-driven disinformation appears certain to become a mounting scourge in our media landscape. 

"Unless we take action, democracy in the United States seems destined to fail, and our sovereignty as citizens will perish with it," writes Barbara McQaude in Attack from Within: How Disinformation is Sabotaging America.

"Along the way, elections will be compromised. Authoritarians will come into power. Dissenters will face intimidation violence. Disinformation will hamper our ability to solve challenges like climate change, pandemics, and wealth disparities. Raw power will replace the rule of law...

"Disinformation and the threat of new forms of totalitarian control are unlikely to ever be completely eradicated, but given the stakes, we must take affirmative steps to diminish them to the maximum degree." 

McQuade is optimistic. "The good news is that most problems created by humans can be solved by humans," she writes. "We have figured out how to prevent polio and, as John F. Kennedy promised, to send a person to the moon. We can invest resources and devote American ingenuity to researching the best ways to stop disinformation. But addressing problems requires consensus, which is difficult to achieve without agreeing to a shared set of facts."

In a similar vein, Tobias Rose-Stockwell asks us to imagine a "great sense-making algorithm" that's "built for our flourishing instead of our outrage."

"Thinking through the design of such a system is becoming a necessary task," he writes In Outrage Machine. "Any platform that controls and influences humans at such an enormous scale must provide us with clear ways of understanding it." 

Some of the proposals that McQuade and Rose-Stockwell offer overlap. Combining them, we get this list, which looks like a good place to start:

  • Reformed Section 230: Deprive online platforms of immunity for harmful content they knowingly amplify for profit. This includes false political advertising and content flagged by credible fact-checking organizations.

  • Regulation of Messaging Platforms: These could be light-touch regulations similar to common carrier obligations, or more substantial oversight if their outsized influence warrants it.

  • Verified User Accounts: Require social media companies to implement robust user verification and thereby reduce the spread of harmful content by anonymous accounts and bots.

  • Transparent Political Advertising: Require platforms to clearly disclose funding sources and targeting mechanisms of all political ads.

  • Auditable Algorithms: Require companies to open algorithms that shape content visibility to independent auditing.

  • Enforced Community Standards: Require platforms to vigorously enforce standards prohibiting hate speech, threats, harassment, and the spread of verifiably false information.

  • User Bill of Rights: Introduce a transparent framework of user rights and responsibilities while ensuring due process and user input into significant policy changes.

  • Prioritizing Factual Content: Require algorithms to explicitly prioritize factual information from authoritative sources, such as official voting information or reporting from reputable news outlets. (A minimal sketch of how this and the auditing requirement might look in practice follows this list.)
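
To make the auditing and prioritization items concrete, here is a minimal, hypothetical sketch in Python of a feed-ranking step that boosts posts from an allowlist of authoritative sources and writes every scoring decision to an audit log an independent reviewer could inspect. The names and values here (Item, AUTHORITATIVE_SOURCES, rank_feed, the 1.5 boost) are illustrative assumptions, not any platform's actual system.

    # Hypothetical sketch only: rank feed items with an explicit boost for
    # allowlisted authoritative sources, and record every scoring decision
    # to an audit log that an independent reviewer could inspect.
    # All names here are illustrative assumptions, not a real platform API.
    from dataclasses import dataclass
    import json
    import time

    # Example allowlist; in practice this would be maintained transparently.
    AUTHORITATIVE_SOURCES = {"state-election-board.example.gov", "apnews.com"}

    @dataclass
    class Item:
        item_id: str
        source_domain: str
        engagement_score: float  # whatever score the platform already computes

    def rank_feed(items, audit_log_path="ranking_audit.jsonl"):
        """Return items sorted by an adjusted score; log each adjustment."""
        scored = []
        with open(audit_log_path, "a") as log:
            for item in items:
                boost = 1.5 if item.source_domain in AUTHORITATIVE_SOURCES else 1.0
                adjusted = item.engagement_score * boost
                log.write(json.dumps({
                    "timestamp": time.time(),
                    "item_id": item.item_id,
                    "source": item.source_domain,
                    "base_score": item.engagement_score,
                    "boost": boost,
                    "adjusted_score": adjusted,
                }) + "\n")
                scored.append((adjusted, item))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [item for _, item in scored]

The specific scoring rule matters less than the pattern: the prioritization policy is stated explicitly in code rather than buried in a model, and every ranking decision leaves a record that outside auditors can examine.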

For better or for worse, social media are "virtualizing" our human communities and scaling them exponentially, bringing both benefits and challenges. We must strike a balance between freedom of expression and the public's welfare and safety, and that means some forms of moderation are going to be necessary.

Generating consensus will require a broader range of stakeholders in the ring and more dialogue: think tanks, educational institutions, and faith communities. We need a shared vision of a well-architected future that amplifies human connection, backed by strong regulatory measures to ensure accountability.

Many confrontations, especially in public forums such as congressional hearings, stem from regulators' limited understanding of AI and the business models built on it. The result has been aggressive and sometimes unjustifiably hostile reactions to technological advances.

We need to move away from antagonistic approaches, developing roadmaps for relationships that foster cooperation and adopting proactive rather than reactive strategies.

As one AI and Faith expert recently suggested, let's write a "two-page white paper" to "help good innovate more quickly than evil."

Here's a start on a plan we asked ChatGPT to draft for us.  

Dan Forbush

Publisher developing new properties in citizen journalism.

http://smartacus.com