AI Policy and the UUA Study/Action Issue Process

A Conversation with UU Boulder’s Neal McBurnett
Proceedings of Our August 1 Meeting

This isn’t exactly the way I would have written this story had I decided to invest hours in reworking the transcript, but Neal and I agree ChatGPT has done pretty well here, so we’re going with it. Thanks to Neal for giving it a light polish.

Dan Forbush
UU Saratoga

In the rapidly evolving landscape of artificial intelligence, voices like Neal McBurnett's provide valuable insights into both technological advances and the crucial policy considerations that come with them. Given his wide range of experience with AI from academia and policy advocacy, Neal brings a helpful perspective to the intersection of technology and societal impact.

NEAL MCBURNETT IN OUR ZOOMSPACE

Neal’s journey into the world of artificial intelligence began in 1978, when he took his first AI course during the early years of Lisp-based symbolic AI. His fascination with AI continued, and in 2011 he participated in one of the first massive online AI courses, taught by renowned figures Peter Norvig and Sebastian Thrun. This rekindled his passion and led to an invitation to teach the introductory AI course at the University of Colorado for two years. Neal has engaged deeply with AI technology, using large language models and staying current with the latest advancements.

Beyond his technical expertise, Neal has been actively involved with the Unitarian Universalist Association (UUA) since 1998, particularly in the congregational study/action issue process. This involvement has led to the development of significant policy statements, known as statements of conscience, which reflect the collective views of UU congregations on pressing social issues.

This recording of our August 1 conversation on Zoom will be available through August 15, at which time it will cycle off. As your password, please use: YS%+a7Kd. Below is ChatGPT's summary of the main points, lightly edited by Neal.


The UUA Study/Action Issue Process

Neal elaborated on the process for bringing study/action issues to the General Assembly, a vital mechanism within the UUA for addressing significant social concerns. The UUA has two main avenues for voicing its collective conscience: actions of immediate witness and the congregational study/action issue process (CSAI).

Actions of immediate witness are short-term responses to hot topics, reflecting the delegates' immediate concerns at a particular year's General Assembly. These are not deeply analyzed or studied over a long period, making them timely but not necessarily comprehensive.

In contrast, the CSAI is a more structured and thorough process. Every two years, congregations are invited to propose topics for in-depth study and discussion. This involves a three-year period where congregations collectively examine an issue, engage in debates, and refine their perspectives. The outcome is a statement of conscience, a well-supported document that represents the informed views of the UU community.

Recently, Neal notes, the CSAI process has faced some challenges and uncertainties. The Commission on Social Witness, which oversees this process, is currently retrenching and reassessing its approach. Despite these hurdles, Neal emphasizes the importance of engaging in this process, whether through formal channels or informal discussions within UU congregations. He encourages proactive participation and the submission of proposals by the October 1 deadline, even if the official process seems in flux.


Policy Issues for UUA Consideration

Neal identified several critical policy issues that the UU community should consider bringing to the UUA General Assembly. These issues reflect the broad and profound impact of AI on society and the need for thoughtful, ethical governance.

  1. Regulating Research vs. Products: Neal underscored the importance of regulating the outcomes and applications of AI, rather than the research itself. He cautions against policies that stifle innovation by over-regulating the research process. Instead, he advocates for regulations that address specific harmful applications, such as biased algorithms in employment decisions or privacy-invading technologies.

  2. Privacy Laws: Emphasizing the need for robust privacy laws, Neal suggested that regulations should be technology-neutral, focusing on protecting individuals' privacy regardless of how data is obtained. He points to the European Union's approach as a model, advocating for similar protections in the U.S.

  3. Open Source and Democratization: Neal passionately argues for the benefits of open-source models in AI development. He believes that open access to AI tools democratizes technology, allowing diverse communities to adapt and use these tools in culturally relevant ways. He highlights the success of open projects like Wikipedia and Linux as examples of how transparency and collaboration can lead to superior outcomes benefiting all of humanity.

  4. Regulatory Capture: Warning against regulatory capture, Neal notes that large corporations often advocate for strict regulations that they can navigate more easily than smaller competitors. This stifles innovation and consolidates power among a few large entities. He calls for vigilance in crafting regulations that promote fair competition and transparency, and prevent monopolistic control.

  5. Deep Fakes and Disinformation: Addressing the rise of deep fakes and disinformation, Neal advises that regulations should target the harmful actions themselves, rather than the technologies used to create them. This approach ensures that perpetrators are held accountable regardless of the methods they use.

  6. Artificial General Intelligence (AGI): Discussing the future of AGI, Neal highlighted the challenges of defining and regulating AI that approaches human-level intelligence. He notes the ongoing debate about AI sentience and the difficulties in crafting policies that can keep pace with rapid technological advancements.

Neal's insights provide a roadmap for the UU community as it navigates the complex landscape of AI policy. His emphasis on ethical considerations, open access, transparency and thoughtful regulation underscores the importance of a balanced approach to technology governance. By engaging in the CSAI process and addressing these critical issues, the UU community can contribute to shaping a future where AI serves the greater good.


Neal calls this California legislation “dangerous.”

California's SB-1047 would outlaw powerful open-weight AI models such as Llama 3.1, and would have the effect of locking AI up within mega-corporations, leading to the same sorts of problems we see with the dominance of proprietary social media platforms. It would make the mistake of regulating AI technology rather than AI applications and products. It would serve as a form of "regulatory capture" by the big firms that want to control and make money off AI, versus those who want to share it as a tool the world can adapt to local needs. See coverage in Ars Technica.

Colorado passed SB24-205 in a hurry this spring, and the Governor, the Attorney General, and the bill's sponsor are already promising to try to deal with the flaws that have led many businesses to consider leaving the state.

Neal says this Colorado legislation would be worth our getting behind:

 Colorado's "Brain Privacy" Law as a National Model

References

Congregational Study/Action Issue (CSAI) Process

October 1 Deadline for Congregations to Submit Proposals for the 2024-2027 Cycle

The most recent information I've run across in a quick search for how to submit a proposal for a CSAI (congregational study/action issue) is this archived page and the links from it.

Proposer's Guide—Part 1: Congregational Study Action Issues / Statements of Conscience | UUA.org 

In particular, the question of what makes an appropriate issue is discussed here.

And here’s what the one-page narrative proposal should look like.