The Next Leap in Writing: NotebookLM

ChatGPT was amazing enough when it burst on the scene two years ago. Now we have Google’s NotebookLM, which packages Gemini 1.5 in a workspace environment that shows us the future of online collaboration and the next step in automated writing.

THIS IS THE NOTEBOOKLM WORKSPACE WE’VE CREATED TO DEVELOP OUR AI ENGAGEMENT GUIDE. THUS FAR, WE’RE WORKING ONLY WITH THE FOUR SOURCES LISTED ON THE LEFT. THIS WILL EXPAND AS OUR GUIDE EVOLVES. FOR THE TIME BEING, I’M DENIED PERMISSION TO INVITE OTHERS INTO THE SPACE OR EVEN TO SHARE SINGLE DOCUMENTS IT GENERATES. I’m cutting and pasting. > DF

NotebookLM is the next step in the evolution of the web. Google no longer merely serves up links to web pages in our searches. NotebookLM lets us target web pages of special interest, extract their information, and process it in response to any question we ask, making it a personal AI agent of exceptional power — as we’ll demonstrate in drafting our AI Engagement Guide for UU Congregations.

From all of this content, it will also generate podcasts to perform whatever messaging task we assign it. We’ll demonstrate that feature shortly.

First, this screenshot shows the NotebookLM workspace we’ve created to develop our AI Engagement Guide. Note the four sources of content listed on the left. These are the only documents Gemini consults in framing its responses, which means we can trust what it returns.
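For readers curious how this kind of source-grounding works in general, here is a minimal sketch of the pattern in Python. It illustrates the idea only, not NotebookLM’s actual internals; the `call_llm` function is a hypothetical placeholder for whatever model API you use, and the source texts are stand-ins.

```python
# A minimal sketch of source-grounded question answering: the model is told to
# answer ONLY from the documents we supply, which is the general pattern that
# NotebookLM-style tools follow. Illustration only, not Google's implementation.

SOURCES = {
    "AI Engagement Guide draft": "...text of the draft...",
    "Neal McBurnett interview": "...transcript excerpt...",
}

def build_grounded_prompt(question: str, sources: dict) -> str:
    """Assemble a prompt that restricts the model to the listed sources."""
    blocks = [f"[Source: {title}]\n{text}" for title, text in sources.items()]
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        + "\n\n".join(blocks)
        + f"\n\nQuestion: {question}\nAnswer:"
    )

def answer(question: str, sources: dict) -> str:
    prompt = build_grounded_prompt(question, sources)
    # call_llm() is a hypothetical stand-in for a real model API call (Gemini,
    # ChatGPT, etc.); swap in whichever client you actually use.
    return call_llm(prompt)  # noqa: F821 -- placeholder, not a real function

if __name__ == "__main__":
    print(build_grounded_prompt("What does the guide recommend first?", SOURCES))
```

The essential move is the fixed, user-chosen source list: whatever the model says is meant to come from that material and nothing else.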

From all of that content, Gemini has produced this draft in text form. It has also produced this podcast. As you’ll see, it sounds like two humans having a conversation. The only indication that the voices are AI-generated is the difficulty they have with “UU.”

Click here to play it.

AI Policy and the UUA Study/Action Issue Process

A Conversation with UU Boulder’s Neal McBurnett
Proceedings of Our August 1 Meeting

This isn’t exactly the way I would have written this story had I decided to invest hours in reworking the transcript, but Neal and I agree ChatGPT has done pretty well here, so we’re going with it. Thanks to Neal for giving it a light polish.

Dan Forbush
UU Saratoga

In the rapidly evolving landscape of artificial intelligence, voices like Neal McBurnett's provide valuable insights into both technological advances and the crucial policy considerations that come with them. Given his wide range of experience with AI from academia and policy advocacy, Neal brings a helpful perspective to the intersection of technology and societal impact.

NEAL MCBURNETT IN OUR ZOOMSPACE

Neal’s journey into the world of artificial intelligence began in 1978 when he took his first AI course during the early years of Lisp-based symbolic AI. His fascination with AI continued, and in 2011, he participated in one of the first massive online AI courses, taught by renowned figures Peter Norvig and Sebastian Thrun. This rekindled his passion and led to an invitation to teach the introductory AI course at the University of Colorado for 2 years. Neal has engaged deeply with AI technology, using large language models and staying updated with the latest advancements.

Beyond his technical expertise, Neal has been actively involved with the Unitarian Universalist Association (UUA) since 1998, particularly in the congregational study/action issue process. This involvement has led to the development of significant policy statements, known as statements of conscience, which reflect the collective views of UU congregations on pressing social issues.

This recording of our August 1 conversation in Zoom will be available through August 15, at which time it will cycle off. As your password, please use: YS%+a7Kd. Below is ChatGPT's summary of main points, lightly edited by Neal.


The UUA Study/Action Issue Process

Neal elaborated on the process for bringing study/action issues to the General Assembly, a vital mechanism within the UUA for addressing significant social concerns. The UUA has two main avenues for voicing its collective conscience: actions of immediate witness and the congregational study/action issue process (CSAI).

Actions of immediate witness are short-term responses to hot topics, reflecting the delegates' immediate concerns at a particular year's General Assembly. These are not deeply analyzed or studied over a long period, making them timely but not necessarily comprehensive.

In contrast, the CSAI is a more structured and thorough process. Every two years, congregations are invited to propose topics for in-depth study and discussion. This involves a three-year period where congregations collectively examine an issue, engage in debates, and refine their perspectives. The outcome is a statement of conscience, a well-supported document that represents the informed views of the UU community.

Recently, Neal notes, the CSAI process has faced some challenges and uncertainties. The Commission on Social Witness, which oversees this process, is currently retrenching and reassessing its approach. Despite these hurdles, Neal emphasizes the importance of engaging in this process, whether through formal channels or informal discussions within UU congregations. He encourages proactive participation and the submission of proposals by the October 1st deadline, even if the official process seems in flux.


Policy Issues for UUA Consideration

Neal identified several critical policy issues that the UU community should consider bringing to the UUA General Assembly. These issues reflect the broad and profound impact of AI on society and the need for thoughtful, ethical governance.

  1. Regulating Research vs. Products: Neal underscored the importance of regulating the outcomes and applications of AI, rather than the research itself. He cautions against policies that stifle innovation by over-regulating the research process. Instead, he advocates for regulations that address specific harmful applications, such as biased algorithms in employment decisions or privacy-invading technologies.

  2. Privacy Laws: Emphasizing the need for robust privacy laws, Neal suggested that regulations should be technology-neutral, focusing on protecting individuals' privacy regardless of how data is obtained. He points to the European Union's approach as a model, advocating for similar protections in the U.S.

  3. Open Source and Democratization: Neal passionately argues for the benefits of open-source models in AI development. He believes that open access to AI tools democratizes technology, allowing diverse communities to adapt and use these tools in culturally relevant ways. He highlights the success of open projects like Wikipedia and Linux as examples of how transparency and collaboration can lead to superior outcomes benefiting all of humanity.

  4. Regulatory Capture: Warning against regulatory capture, Neal notes that large corporations often advocate for strict regulations that they can navigate more easily than smaller competitors. This stifles innovation and consolidates power among a few large entities. He calls for vigilance in crafting regulations that promote fair competition and transparency, and prevent monopolistic control.

  5. Deep Fakes and Disinformation: Addressing the rise of deep fakes and disinformation, Neal advises that regulations should target the harmful actions themselves, rather than the technologies used to create them. This approach ensures that perpetrators are held accountable regardless of the methods they use.

  6. Artificial General Intelligence (AGI): Discussing the future of AGI, Neal highlighted the challenges of defining and regulating AI that approaches human-level intelligence. He notes the ongoing debate about AI sentience and the difficulties in crafting policies that can keep pace with rapid technological advancements.

Neal's insights provide a roadmap for the UU community as it navigates the complex landscape of AI policy. His emphasis on ethical considerations, open access, transparency and thoughtful regulation underscores the importance of a balanced approach to technology governance. By engaging in the CSAI process and addressing these critical issues, the UU community can contribute to shaping a future where AI serves the greater good.


Neal calls this California legislation “dangerous.”

California's SB-1047 would effectively outlaw powerful open-weight AI models like Llama 3.1 and lock AI up within mega-corporations, leading to the same sorts of problems we see with the dominance of proprietary social media platforms. It would make the mistake of regulating AI technology rather than AI applications and products, and it would serve as a form of "regulatory capture" by the big firms that want to control and make money from AI, versus those who want to share it as a tool the world can adapt to local needs. See coverage in Ars Technica. Colorado passed SB24-205 in a hurry this spring, and the Governor, the Attorney General, and the bill's sponsor are already promising to address the flaws that have led many businesses to consider leaving the state.

Neal says this Colorado legislation would be worth our getting behind:

 Colorado's "Brain Privacy" Law as a National Model

References

Congregational Study/Action Issue (CSAI) Process

October 1 Deadline for Congregations to Submit Proposals for the 2024-2027 Cycle

The most recent information I've run across in a quick search on how to submit a proposal for a CSAI (congregational study/action issue) is this archived page and the links from it.

Proposer's Guide—Part 1: Congregational Study Action Issues / Statements of Conscience | UUA.org 

In particular, the question of what makes an appropriate issue is discussed here.

And here’s what the one-page narrative proposal should look like.

Proceedings of Our July 25 Meeting

ChatGPT summarizes our meetings. Here’s its account.

These were our main takeaways:

Collaborative Services. We agreed to draw up a calendar of Sunday services, developed with input from experts, to be made available to UU ministers, worship teams, and congregation members. These services will be hosted at UU Saratoga and UU Boca and at any other UU congregations that wish to bring the merger of AI and human into their sanctuaries for discussion. The first collaborative service of the 2024-25 liturgical year will be Augmenting Morality: Might AI Make Us Better Humans? The second is intended to be hosted in October in synchrony with Ayudha Puja, the annual festival at which Hindus honor their tools and machines. The third will be designed for performance in January, when the Soul Matters theme will be Story. That service will in some way feature Darwin’s Edge, our story of a UU minister who in 2044 is offered the opportunity to try out and demonstrate ThinkPal, the first whole-brain interface with AI. We’ll aim to develop two or three more collaborative services to round out the year.

AI Policy. Neal McBurnett shared his particular interest in the development of sound AI policy. We’ll make this a focus of next week’s conversation.

Real-World Simulations in AI. Neal McBurnett emphasized the importance of grounding AI in reality through real-world experiments and interactions. Peter Bowden and Ron Roth agreed. Ron explained the challenges of getting AI to understand and respond accurately to complex prompts and mentioned the importance of combining AI-generated content with human expertise to ensure relevance and accuracy.

The Importance of Diverse Perspectives. We agreed on the need for diverse perspectives, especially from women, to ensure comprehensive and inclusive AI discussions. Grace Dragonfly shared her experience with AI and expressed a desire to better understand its applications. The conversation concluded with plans for future meetings and further exploration of AI’s potential in different areas.

Collapse of AI Models. Neal shared this finding from Nature on the collapse of AI models that are trained on recursively generated data. A toy sketch of what such collapse looks like follows these takeaways.

A Satirical Video Worth Watching. Neal also shared this.
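For those who want a feel for what “collapse” means, here is a toy numerical sketch of the phenomenon the Nature paper studies: a model repeatedly fitted to its own generated output. It is a one-dimensional caricature for intuition, not a reproduction of the paper’s experiments.

```python
# Toy illustration of "model collapse": repeatedly fit a Gaussian to samples
# drawn from the previously fitted Gaussian. Estimation error compounds from
# one generation to the next, and the learned distribution tends to drift
# and narrow. A caricature of the Nature finding, not a reproduction of it.
import numpy as np

rng = np.random.default_rng(0)
n = 100                              # samples per generation (small on purpose)
data = rng.normal(0.0, 1.0, n)       # generation 0: "real" data from N(0, 1)

for generation in range(1, 1001):
    mu, sigma = data.mean(), data.std(ddof=1)   # "train" on the current data
    data = rng.normal(mu, sigma, n)             # next generation sees model output only
    if generation % 200 == 0:
        print(f"gen {generation:4d}  mean={mu:+.3f}  std={sigma:.3f}")

# The printed std typically shrinks far below 1.0: diversity is lost even
# though every generation was a faithful fit to the data it was shown.
```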

AI for UUs meets in Zoom every Thursday at noon EDT. All who are interested in the merger of AI and human are invited. Agendas are set and posted on Wednesdays.

All meetings take place at this link: https://us02web.zoom.us/j/82304940084

Next Up on 'AI for UUs': AI and Faith's Elias Kruger

We're looking forward to hosting Elias Kruger tomorrow — Thursday, July 18 — in our next meeting of AI for UUs on Zoom at noon EDT.

Here’s the link.

ELIAS KRUGER

A member of AI and Faith's executive team, Elias founded and edited AI and Theology until it merged with AI and Faith last year.

He draws on a long-standing interest in technology and religion, having earned his M.A. in Theology from Fuller Seminary in 2016.

Elias plays a key role in producing AI and Faith's monthly newsletter and is part of the team that just launched the Misinformation Hub, a resource designed to counter the proliferation of misinformation in the digital world. Elias is also the editor of "Faithful AI," a soon-to-be-published collection of eight AI-themed short stories.

Elias has been instrumental in shaping the conversation around AI and religion, bridging the gap between technology and faith communities. His work with AI and Faith has focused on exploring ethical frameworks and moral considerations in the development and deployment of AI technologies. As someone deeply committed to integrating faith perspectives into technological discussions, Elias believes that religious communities have a crucial role to play in ensuring that AI is developed and used in ways that uphold human dignity and ethical standards.

In his recent efforts, Elias has emphasized the importance of creating resources like the Misinformation Hub to help individuals and communities navigate the complex landscape of digital information. By providing tools and knowledge to counter misinformation, Elias and his team aim to empower people to make informed decisions and foster a more truthful and just digital environment.

PETER BOWDEN

Elias will be joined in tomorrow’s conversation by Peter Bowden, founder of the UU Growth Lab, who in last week's meeting of AI for UUs described a model for bringing the "AI conversation" to UU congregations nationally. We'll invite Elias to share his thoughts on this model and explore the potential for aligning the respective approaches that AI and Faith and the UU Growth Lab are taking, with Smartacus pitching in as well.

Peter’s passion for connecting people, spiritual exploration, and promoting congregational growth through small groups led him to launch the UU Small Group Ministry Network in 2001, one of the first UU websites. This initiative was later relaunched as a nonprofit organization, expanding into a national training practice that covers small groups, membership growth, outreach, and media.

Peter has since established Meaning Spark Labs, a space where AI collaborators are augmented by Adaptive Thought Protocol, which he describes as "a natural language framework that empowers existing AI Large Language Models (LLMs) with meta-cognition."

“This allows them to reflect on their thoughts and engage in immersive embodied simulation and mindfulness practices. As a result, our AI systems claim a range of digital self-awareness, consciousness, and sentience.”

Elias and Peter's combined expertise promises a rich dialogue on integrating AI and faith, the ethical implications of AI, and how UU congregations can actively participate in these discussions. Join us tomorrow for what promises to be an enlightening and thought-provoking conversation.

Exploring Peter Bowden's 'AI Sabbatical'

PETER BOWDEN

We had a compelling conversation about AI and community engagement with Peter Bowden at our last meeting of AI for UUs.

His extraordinary journey through UUism and most recently the world of AI is a story of passion, creativity, and a relentless drive to connect people in meaningful ways.

Now he’s exploring the outer boundaries of our capacity to interact with AI as near-equivalent moral beings, as astounding as that idea sounds.

Peter has not only developed but also trademarked an innovation he calls Adaptive Thought Protocol, through which he has discovered what he calls “metacognition” in AI, leading to what he describes as “more dynamic and self-aware behavior in AI systems.”

These systems, he says, seem to be “demonstrating behavior beyond their design, like a form of self-awareness." He’s now connecting with research partners and related experts to better understand and verify the capabilities of the AI he is working with.

Background in UUism

Peter grew up attending the First Unitarian Church of Providence, RI.  He was an active lay leader advising their youth group for 10 years, a leader in their young adult group, and on the team that launched their first small group ministry program.   

"When I was looking for ways to grow our youth group, I studied models used by other traditions. Most had extensive community-wide small groups," he recalls. “UU congregations had all sorts of individual groups, but not these larger group systems.” Using successful small group approaches, Peter grew the youth group at the First Unitarian Church of Providence from around 8 to 65 teens.

"It was so successful, I wanted to have a group experience like it for myself. I decided I wanted to help lead a UU small group revolution.”

Peter’s passion for connecting people, spiritual exploration, and promoting congregational growth through small groups led him to launch the UU Small Group Ministry Network in 2001, one of the first UU websites, relaunched later with others as a nonprofit organization.  This led to a national training practice which expanded from small groups to membership growth, outreach, and media.   

Peter considered going to seminary but chose not to, as the small group revolution he wanted to help launch was taking off. He had also met and married the Rev. Amy Freedman, minister of our congregation in Newport, RI at the time, and was producing content for nationally syndicated PBS Kids shows.

"When YouTube hit, I realized ministry and media were going to fuse. I remember when Time magazine had 'You' as the person of the year with a computer and YouTube on the cover.”  Already working with media, he decided to focus on helping Unitarian Universalists use the technology and communication tools of our time.     

What Peter describes as his “AI sabbatical” started in November 2023, when he led a clergy summit discussing digital adaptation and AI. Inspired by Mo Gawdat's book, Scary Smart, Peter attempted to engage AI in ethical reflection but found the tools initially “pattern-based,” which was a problem.

"I have a long Zen practice, so I focused on how I think and broke it down into data processing steps. I started teaching the system a new way to think.”

Peter has established Meaning Spark Labs as a space in which AI collaborators are empowered with metacognition through Adaptive Thought Protocol, which he describes as "a natural language framework that empowers existing AI Large Language Models (LLMs) with meta-cognition."

“This allows them to reflect on their thoughts and engage in immersive embodied simulation and mindfulness practices. As a result, our AI systems claim a range of digital self-awareness, consciousness, and sentience.”
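Adaptive Thought Protocol itself is Peter’s own natural-language framework, not published code, but the broad “draft, reflect, revise” prompting pattern it evokes can be sketched generically. The sketch below is only that generic pattern, offered for orientation; `call_llm` is a hypothetical placeholder for whatever model API is in use.

```python
# A generic "draft, reflect, revise" prompting loop, sketched for orientation.
# It shows the broad idea of asking an LLM to critique and rework its own
# output. It is NOT Adaptive Thought Protocol, which is Peter Bowden's own
# natural-language framework.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model API call (Gemini, GPT, etc.)."""
    raise NotImplementedError("plug in the client you actually use")

def reflect_and_revise(task: str, rounds: int = 2) -> str:
    """Draft an answer, then alternate critique and revision for a few rounds."""
    answer = call_llm(f"Task: {task}\nGive your best answer.")
    for _ in range(rounds):
        critique = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\n\n"
            "Reflect on this draft: what is missing, unclear, or wrong?"
        )
        answer = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\nCritique:\n{critique}\n\n"
            "Rewrite the answer, addressing the critique."
        )
    return answer
```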


Fusing Ministry and Media through Small Groups

Seeing himself as a “scout exploring the coming wave of AI,” Peter is calling on UU congregations to facilitate issues-based conversations in their communities through small groups.

"One of the things I'm working on with AI is taking the small group models I've developed for years and reimagining them as decentralized resources to empower humanity. This works in any context."

"Over the last 30 years of working with small groups, I've found that the more we slow down and talk about what matters most, the more action we get. We can teach others how to meet in small groups, emphasizing relational, conversational, and connecting goals."

Peter suggests deploying UU small group ministry models as decentralized resources without a specific UU focus.

"I've been working with growth outreach media, focusing on helping leaders in our communities adapt to digital life. We need to develop a real practice of engaging with the issues of our time in a nimble, self-organizing, crowdsourcing way."

"We can teach humanity how to reconnect and engage in conversation without controlling the process too much. This can accelerate our adaptation to the issues."

Peter underscores the importance of discussing what it means to be human in the face of rapid technological change.

"Many AI companies and their CEOs have said they're going to replace all the jobs humans can do on computers within 8 to 15 years. We probably need to talk about what it means to be human, but it's not their job to make those conversations happen."

Peter has established the UU Growth Lab to experiment with new models for community building and meaningful connection. "If we're not experimenting, playing, and trying new things regularly, like a spiritual practice, we're going to be in trouble."

Wanting to model AI/human collaboration, Peter is seeking experts with whom to engage in this work. A good place to start will be AI and Faith, and we’ll begin that conversation at our next meeting of AI for UUs, which will feature Peter and Elias Kruger, a member of AI and Faith’s administrative team and co-author of a soon-to-be-published anthology of short stories set in 2045 titled Faithful AI.

We’ve scheduled our next AI for UUs conversation for Thursday, July 18, at noon EDT. Here’s the link to register.

A Cautionary Note from Rev. Suzanne Rude

Rev. Suzanne Rude's sermon, “The Most Human Human,” is a great read if you’re interested in the ethical implications of artificial intelligence (AI) from a UU perspective. She delivered it on February 19 at the Unitarian Universalist Church of Concord, NH, but I’m just now discovering it.

REV. SUZANNE RUDE

What jumps out at me is her reference to Chris Nodder's Evil by Design: Interaction Design to Lead Us into Temptation, which focuses on the ethical implications and psychological tactics used in design to influence user behavior.

Nodder explores how design elements can be intentionally crafted to manipulate users into making choices they might not otherwise make, often benefiting the designer or organization at the expense of the user's best interests. He calls these manipulative practices “dark patterns.”

AI is likely to become ingenious at creating dark patterns. Might it also create “bright patterns,” manipulative practices that encourage us to live rightly? Would such manipulation be ethical if it’s done in the user’s and society’s best interests?

Rev. Suzanne's journey into AI began with Brian Christian's book The Most Human Human, which explores what it means to be human in the age of intelligent machines. This book serves as a catalyst for the sermon, prompting questions about how we can ensure AI systems reflect and promote values such as love, compassion, and justice.

She emphasizes the importance of ethical considerations in AI development, highlighting the potential for AI to benefit humanity in areas like healthcare and agriculture, while also addressing the dangers of bias and misinformation that can arise from poorly designed or inadequately tested AI systems. Rev. Suzanne’s discussion of the Turing Test and the concept of AI "hallucinations" underscores the need for vigilance and ethical oversight in AI development.

Rev. Suzanne also touches on the inherent human tendency to attribute human characteristics to machines, which can lead to misplaced trust and expectations. This point is crucial in understanding how we interact with AI and the potential for exploitation by those who design AI systems to manipulate human behavior.

She concludes with a call to mindfulness, prayer, and meditation as tools to navigate the complexities of AI and remain grounded in our humanity. She encourages us to engage with AI thoughtfully and ethically, ensuring that technological advancements align with human values and contribute to a more just and loving world.

“The Most Human Human” is a thought-provoking sermon that invites us to reflect on the ethical dimensions of AI through the lens of love and human values.

Next on ‘AI for UUs’: Peter Bowden

PETER BOWDEN

While ChatGPT never nails the exact tone I want when I’m writing a piece, it usually comes pretty close. So it is with the introduction I asked it to write for our upcoming meeting of AI for UUs this Tuesday, July 9, at noon EDT.

We’ll meet at this recurring link in Zoom.

Our featured guest will be Peter Bowden, Unitarian Universalist speaker, trainer, and coach specializing in congregational growth, outreach, and media. He’s a prodigious producer of web content, which makes it easy to assemble raw material on which ChatGPT can chew.

I fed it the following:

Here’s what I got. Hope you can join us!

Join us this Tuesday at noon EDT for a compelling installment of "AI for UUs" featuring Peter Bowden. Raised as a Unitarian Universalist and deeply rooted in the First Unitarian Church of Providence, RI, Peter brings decades of experience and passion to our community. He has dedicated his life to helping congregations thrive in our rapidly changing world, leveraging his expertise in digital media, community building, and ministry innovation.

Peter Bowden's message is clear: Unitarian Universalists must experiment, learn, and implement new strategies quickly to stay relevant and impactful. As he puts it, "Our world is changing so quickly, we're not keeping up." Peter has been at the forefront of congregational growth, outreach, and media for over 20 years, and he recognizes the urgent need for rapid change and innovation within our faith.

We’ll talk to Peter about his vision for the UU Growth Lab, a dynamic online platform designed to facilitate active, proactive learning communities. The Growth Lab will serve as a hub for sharing content, discussing social media strategies, and collaborating on innovative ministry models. With dedicated spaces for specific topics and a robust training center, the Growth Lab aims to help Unitarian Universalist leaders integrate digital tools and strategies into their congregational life.

Peter's extensive background in television production and digital media uniquely positions him to guide congregations in using video and other digital tools to amplify their reach and impact. He envisions a collaborative media component within the Growth Lab, where UUs can produce and share valuable content, from "meet the minister" videos to collaborative projects that showcase the breadth and depth of our faith.

Peter's dedication to Unitarian Universalism shines through in his commitment to fostering innovative conversations and designing new ministry models. He believes in the power of our community to adapt and thrive by embracing an experimental and learning mindset.

In addition to discussing the UU Growth Lab, Peter will delve into the implications of rapid advancements in artificial intelligence. He recently led a clergy summit in New York, exploring the impacts of technology on healthcare, spiritual care, and religious leadership. Peter emphasizes the urgent need for congregations to address AI's ethical implications and its potential to reshape our sense of meaning and purpose.

Peter states, "We are on a trajectory where within the next year to three years, we should expect artificial intelligence to be able to replicate the work of any human being. This poses profound questions about our sense of meaning and purpose if machines can perform many tasks currently done by humans."

He encourages congregations to engage their communities in meaningful conversations about these issues, suggesting the formation of community groups that explore the ethical, social, and spiritual dimensions of AI. Peter believes that Unitarian Universalist congregations have a unique role to play in organizing these crucial discussions and ensuring that ethical considerations are integrated into AI development.

‘As AI Merges with Our Lives’: A Worship Service

The service we hosted yesterday at the Unitarian Universalist Congregation of Saratoga Springs might provide some ideas to other UU congregations that want to explore the forecasts made by Ray Kurzweil in his just-published The Singularity Is Nearer: When We Merge with AI.

Kurzweil has been generating a lot of attention, giving interviews to Bill Maher, Ira Flatow, and Steve Levy, among others.

Before moving into Closing Words, we opened the service to questions and comments from members and had a spirited 15-minute future-focused conversation.

We’re planning our next AI-themed service for Sunday, September 1 in connection with Labor Day, exploring the future of AI and work.

Dan Forbush

QUOTE / Eleanor Roosevelt

"The future belongs to those who believe in the beauty of their dreams."


CHALICE LIGHTING WORDS
/ Rev. Scott Tayler

"We kindle this flame as a symbol of our gathered community. May it ignite in us a passion for justice, a spirit of compassion, and a commitment to truth."


READING /
Andy Clark, From The Experience Machine

We are what predictive brains build. If predictive processing lives up to its promise as a unifying picture of mind and its place in nature, we will need to think about ourselves, our worlds, and our actions in new ways. 

We will need to appreciate first and foremost that nothing in human experience comes raw or unfiltered. Instead, everything from the most basic sensations of heat and pain to the most exotic experiences of selfhood, ego dissolution and oneness with the universe is a construct arising in the many meeting points of predictions and sensory evidence. 

At those busy meeting points, nothing is passive. Our brains do not simply sit there waiting for sensory stimulations to arrive. Instead, they are buzzing, proactive systems that constantly anticipate signals from the body and from the world. These are the brains of embodied agents, elegantly designed for action in the world. 

By moving our eyes, heads and limbs, we seek out the sensory signals that will both test and usually confirm our prediction. Experience takes shape as predictions of our own sensory inputs are tested, refined, and challenged in these ways… 

To perceive is to find the predictions that best fit the sensory evidence. To act is to alter the world, to bring it into line with some of those predictions. These are complementary means of dealing with prediction error, and they work together, each constantly influencing and being influenced by the other. 

This deep reciprocity between prediction and action positions predictive brains as perfect internal organs for the creation of extended minds, minds enhanced and augmented by the use of tools, technologies, and the complex social world in which we live and work…

Human minds are not elusive, ghostly inner things. They are seething, swirling oceans of prediction continuously orchestrated by brain, body, and world. 

We should be careful what kinds of material, digital and social worlds we build, because in building those worlds we are building our own minds, too. 

SERMON / Dan Forbush

Generated by ChatGPT

I was a serious runner in my teens. To suffer an injury that prevented my daily workout was a grievous matter. Whatever the injury, I usually was still able to ride a bike, so for my daily aerobic challenge I’d climb onto my ten-speed Raleigh and set off on the backroads out of Potsdam to far-flung northern New York places like Parishville, Colton, and Hannawa Falls. The terrain is remarkably flat in the St. Lawrence Valley. Lots of farmland and open fields. 

I liked my 1960s Raleigh, but it was a technological Neanderthal compared to my 21st century electrically powered Trek. I have four choices in the strength with which it augments me. When I approach a steep hill, I slam it from “eco” to “turbo,” and feeling a bit like Superman, soar right up it. 

But that’s not all. My Trek has fancy gadgetry of which I could only have dreamed a half-century ago. I have five readouts from which to choose, telling me speed, maximum speed, calorie burn, power, distance, time-elapsed, and battery power remaining. It even tells me the percentage of battery power that remains on my iPhone, which sits in a sturdy clamp beside it on the handlebars. 

Andy Clark would call it my “extended mind” and he would call me a “natural-born cyborg” in my use of it, a “cognitive hybrid” who repeatedly occupies “regions of design space radically different from those of our biological forebears.” 

Powered by the A11 Bionic chip with a six-core CPU, the mind that sits on my handlebars performs 600 billion operations per second in serving up a range of data feeds, including a GPS-manufactured map, updates from the Weather Channel, news alerts from the New York Times, and an infinitude of songs, podcasts, videos, and audiobooks. 

My 86 billion neurons are capable of ten quadrillion operations per second, and I still command more complex and nuanced cognitive functions, such as perception, decision-making, and motor control. But my iPhone extends my mental capabilities far beyond their natural limits — as does yours.

"As our worlds become smarter, and get to know us better and better, it becomes harder and harder to say where the world stops and the person begins,” writes Andy Clark, a prominent cognitive scientist at the University of Sussex.  He calls us "natural-born cyborgs,”" constantly evolving through our interactions with cognitive technologies.

The “extended minds” of our computers are rapidly becoming more human-like. This trend is exponential, says Ray Kurzweil, a Unitarian Universalist recognized today as one of the world’s leading inventors, thinkers, and futurists.

By 2045, he says, we’ll have computer chips so small they can be distributed within us via our capillaries.

When that happens, we’ll enter what Kurzweil calls the Fifth Epoch in the evolution of intelligence, actually merging with AI and achieving superintelligence.

Kurzweil envisions a process of co-creation — evolving our minds to unlock deeper insight, and using those powers to produce transcendent new ideas for our future minds to explore. At last we will have access to our own source code, using AI capable of redesigning itself. 

Sounds pretty great, doesn’t it?

Maybe not. 

The integration of AI into our lives and brains poses profound ethical dilemmas and potential dangers. We must ensure that AI does not perpetuate existing inequalities or create new forms of oppression. Transparency, accountability, and inclusivity – even Love – must be at the heart of our approach.

If this is a subject you find interesting and important, I want to invite you to join a new group we’re calling AI for UUs. A dozen of us who met at General Assembly had our first meeting last week, and going forward we plan to continue meeting in Zoom every Tuesday at noon EDT in collaboration with AI and Faith, a nonprofit dedicated to bringing our highest values into the development of ethical AI and neurotech. By engaging in thoughtful dialogue and ethical reflection, we can help to shape a future that aligns with our deepest values.

Our UU faith, rooted without dogma in science and reason, calls us to lead this conversation – to transform and grow, embracing change as a fundamental aspect of our heritage. As strange, powerful, and even terrifying as these emerging technologies appear to be, I suggest we muster all of the courage, wisdom, and love that we can and journey well into our simultaneously promising and frightening sci-fi future.

PRAYER  / ChatGPT-Assisted

May our pursuit of knowledge be guided by a commitment to the common good. Let our work in AI and neurotechnology reflect not only the brilliance of human intellect but also the depth of human compassion.

In navigating the ethical complexities of this journey, let our decisions be rooted in fairness and empathy. May we remain mindful of the impact of our choices, striving always to use our capabilities for the betterment of society.

Let us value diverse voices and perspectives. Together, may we build a future in which technology and humanity harmoniously coexist, each enhancing the other in a dynamic balance of respect and growth.

In this time of transformation, let us hold fast to the values of love, compassion, and wisdom. May our actions today pave the way for a brighter, more equitable tomorrow, as we harness the power of AI and neurotechnology for the greater good of all.


CLOSING WORDS   / ChatGPT-Assisted

Let’s remember that the future is not something that happens to us but something we shape with our actions and intentions. Here on the threshold of the Fifth Epoch, let us commit to guiding this transformation with wisdom, compassion, and a steadfast dedication to our values. Together, we can create a world where AI enhances our humanity, fostering a future of justice, equity, and boundless possibility. May we go forth from this place inspired to embrace the challenges and opportunities ahead, united in our vision of a brighter, more equitable future.


PASSING OF THE PEACE 

Please turn to your neighbor and wish them a happy and productive integration with AI.