ChatGPT, AI, & Creating a Safe Digital Future

This blog is a longer, more detailed version of this podcast episode if you’d rather tune in first!

It seems like we’ve arrived in the future faster than we could have expected.

In just two months, ChatGPT, a new AI technology, acquired one hundred million users. To put that into perspective, it took Facebook 4.5 years to acquire that many users (Tristan Harris, Your Undivided Attention Episode 65). For Instagram, it took 2.5 years, and for TikTok, 9 months (Reuters).

And in that time, we’ve seen tech ethicists, technologists, researchers, and more come together in an open letter calling for a pause on the training of certain AI technologies, stating that:

“AI systems with human-competitive intelligence can pose profound risks to society and humanity…[and] Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable” (Future of Life).

Some countries, like China, Iran, North Korea, Russia, and Italy, have even banned ChatGPT, with Canada and Germany considering it as well (Brandeis Marshall). A lot of people are terrified, confused, or disturbed. However, a few (at the very top, who stand to make a lot of money, possibly at the expense of humanity) want to push forward, and do.

Today I want to talk about some of the ethical questions I have regarding these recent AI technologies. I also want to draw from the lessons we learned, and are still learning, from social media technologies, and ask how we can and should do better this time around.

Technically Spiritual, as you hopefully already know, does not blindly criticize technology or demonize tech progress for the sake of going against the grain.

Rather, we unpack the nuance, explore history, and take the time to ask questions that are often skipped over in news cycles and social feeds. There is a ton to unpack and all of it can’t be covered in one blog/episode, and trust me, this isn’t the last time we’ll be talking about AI. Thank you for taking the time to read; now let’s dive in:

Let’s start with some definitions to make sure everyone is on the same page.

AI

When the term AI is used, what comes to mind? The Terminator? Samantha from HER? Artistic representations of AI are extremely important, and maybe we’ll talk about those in another episode, but let’s start by thinking about what AI actually is:

“Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.” (Built In)

“Broadly speaking, artificially intelligent systems can perform tasks commonly associated with human cognitive functions — such as interpreting speech, playing games and identifying patterns. They typically learn how to do so by processing massive amounts of data, looking for patterns to model in their own decision-making. In many cases, humans will supervise an AI’s learning process, reinforcing good decisions and discouraging bad ones. But some AI systems are designed to learn without supervision — for instance, by playing a video game over and over until they eventually figure out the rules and how to win.” (Built In)

Artificial intelligence as a term is very broad, so experts break it down a bit further into two types, strong AI and weak AI:

“Strong AI, also known as artificial general intelligence, is a machine that can solve problems it’s never been trained to work on — much like a human can. This is the kind of AI we see in movies, like the robots from Westworld or the character Data from Star Trek: The Next Generation. This type of AI doesn’t actually exist yet.” (Built In)

“Weak AI, sometimes referred to as narrow AI or specialized AI, operates within a limited context and is a simulation of human intelligence applied to a narrowly defined problem (like driving a car, transcribing human speech or curating content on a website). Weak AI is often focused on performing a single task extremely well.” (Built In)

A few more definitions to unpack that’ll be helpful context going forward…

Machine Learning

“A machine learning algorithm is fed data by a computer and uses statistical techniques to help it ‘learn’ how to get progressively better at a task, without necessarily having been specifically programmed for that task. Instead, ML algorithms use historical data as input to predict new output values.” (Built In)
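
To make that concrete, here is a minimal sketch of that loop in Python (my own illustration, not from Built In, using scikit-learn and made-up numbers): the algorithm is fed historical inputs and outputs, “learns” statistical parameters from them, and then predicts output values for inputs it has never seen.

```python
# A minimal, illustrative sketch of machine learning: fit a model to historical
# data, then use it to predict new output values. (Toy, made-up data; scikit-learn
# is just one common library that works this way.)
from sklearn.linear_model import LinearRegression

# historical data: hours of sunshine per day -> iced drinks sold
X_history = [[4], [6], [8], [10], [12]]   # inputs the model learns from
y_history = [40, 55, 72, 88, 105]         # outputs observed in the past

model = LinearRegression()
model.fit(X_history, y_history)           # the "learning" step: estimate parameters statistically

# predict an output for an input the model has never seen before
print(model.predict([[9]]))               # roughly 80, interpolated from the historical pattern
```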

That brings us to what we are seeing now, the total disruption behind why everyone is suddenly talking about AI: large language models, or LLMs.

LLM

“A large language model (LLM) is a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions in a conversational manner and translating text from one language to another.” (Techopedia)

ChatGPT is an example of an LLM. 
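
To make that a little more concrete, here is a hedged sketch of what “asking an LLM to do a natural language task” can look like in code, using OpenAI’s Python library as it worked around the time of writing (early 2023). The model name, the exact library interface, and the assumption that an API key is available in the OPENAI_API_KEY environment variable are illustrative details that may differ or change.

```python
# Illustrative only: ask an LLM (ChatGPT's underlying model) to translate text.
# Assumes the openai Python package (circa early 2023) and an API key exported
# as the OPENAI_API_KEY environment variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free tier of ChatGPT at the time
    messages=[
        {"role": "user", "content": "Translate 'good morning, friend' into French."}
    ],
)

# The model doesn't "know" French the way a person does; it predicts likely next
# tokens based on patterns in the text it was trained on.
print(response["choices"][0]["message"]["content"])
```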

Tristan Harris and Aza Raskin from the Center for Humane Technology explained this nicely in their recent talk:

They say, “…it used to be, that there are many different disciplines within machine learning. There's computer vision and then there's speech recognition and speech synthesis and image generation. And many of these were disciplines so different that if you were in one, you couldn't really read papers from the other. There were different textbooks, there were different buildings that you'd go into. And that changed in 2017 when all of these fields started to become one.”

Essentially, a parent program of sorts got really good at reading and learning language… and then we figured out how to make almost anything a language. Images, fMRI data, DNA, etc. So suddenly, any advance in one part of the AI world became an advance in every part of the AI world. Hopping between, combining, and translating between various languages and purposes became possible, and easier.
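
Here is a tiny sketch of what “making almost anything a language” can mean in practice (my own illustration, not from the talk): once any kind of data can be chopped into a sequence of tokens, the same family of sequence models can, in principle, be pointed at it, whether it’s English text or a DNA strand.

```python
# Illustration: very different kinds of data can be turned into token sequences,
# which is part of what lets one family of models (sequence/language models)
# handle them all.

def tokenize_text(sentence: str) -> list[str]:
    # naive whitespace tokenizer, purely for illustration
    return sentence.lower().split()

def tokenize_dna(strand: str, k: int = 3) -> list[str]:
    # read a DNA string as overlapping k-letter "words" (k-mers)
    return [strand[i:i + k] for i in range(len(strand) - k + 1)]

print(tokenize_text("An advance in one field becomes an advance in every field"))
print(tokenize_dna("ACGTTGCA"))  # ['ACG', 'CGT', 'GTT', 'TTG', 'TGC', 'GCA']
```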

Now again, I don’t want to demonize these advancements. I love technology. I’m not advocating for going backward.

I do, however, feel that just as much as we need to take time away from technology to retune into ourselves, our communities, and nature in order to heal, we also need to look more closely at what we’re doing and where it could end.

So how might this end?

For that, let’s look at some history: 

I’m seeing a lot of parallels between AI technology, social media, and Big Tech.

Timnit Gebru, a computer scientist and AI ethicist, draws a perfect parallel. She talks about how we’re trying to regulate horse carriages when what we’re actually dealing with are cars. She says we’ve moved on from horse carriages and have already built roads and infrastructure, but we’re still stuck talking about horse carriages (Tech Won't Save Us).

Technically Spiritual has been tackling the implications of unethical, unregulated social media for years, and it seems like that conversation is already becoming somewhat obsolete as we have to think about the next thing: AI technology. But how can we even begin to think about regulating AI or putting ethical guardrails in place, if it’s taken 20 years to even start with social media?

The recklessness of Mark Zuckerberg’s famous motto “move fast and break things” has led to a significant number of unintended consequences. Facebook and other social media companies, and in some cases Big Tech giants like Google, Amazon, and Apple (which have their own set of issues), have contributed to the following for society:

  • Misinformation, conspiracy theories, fake news

  • Loss of attention and focus

  • Stress, loneliness, feelings of addiction, other mental health challenges

  • Physical, mental, and social challenges for teenagers and children including developmental delays, cyberbullying, sexual harassment, and suicide

  • Less overall empathy and ability to connect socially (despite these being social networks)

  • Disruption to democracy and political manipulation

  • Amplification of systemic oppression and bias 

  • Violation of privacy, from personal data selling and leaks, to facial recognition being unethically used by law enforcement

  • And more… 

(Ledger of Harms)

In short, the last 20 years have been A LOT for society. If we bring this same “move fast and break things” energy into the development of AI technologies, what are the unintended consequences that could emerge?

The concern is that these technologies aren’t being developed with mindfulness.

Many tech ethicists and concerned humans feel the same way, and have for a long time. Meredith Broussard, in her 2018 book Artificial Unintelligence, says: “It’s time to stop rushing blindly into the digital future and start making better, more thoughtful decisions about when and why we use technology.”

She also talks about the concept of techno-chauvinism: “the belief that tech is always the solution,” which is “often accompanied by… techno-libertarian values, the notion that computers are more unbiased, and the unwavering faith that if the world used more computers and used them properly, social problems would disappear and we would create a digitally enabled utopia…”

Especially with the emergence of AI technologies like ChatGPT, we are seeing this explicitly, and what we’re headed toward seems to be a dystopia, not a utopia.

And not only could it make our current problems worse, it could prove detrimental in many NEW ways. Because our tech, as much as we believe or wish it were, is not neutral. It’s not a tool that listens perfectly to our desires, nor is it incapable of mistakes just because it isn’t human. It has biases because it was created by humans. It will impact our society in detrimental ways, and this has been glossed over because many are excited about the possibilities.

Again, we’re making progress urgently and half-blindly, instead of making it with mindfulness. We’re continuing the ever-urgent capitalistic cycle of progress for progress’s sake, and by “we”, I mean those in positions of power to make a difference, those who unfortunately stand to profit by thinking less about collective wellbeing.

We don’t have to be anti-AI…

We know there can be a ton of benefits, including in big ways like curing cancer and helping to solve the climate crisis. We’ll have to leave that for another blog/episode. But can we really get to the benefits without thinking very critically about the unintended consequences?

When I studied ethical technology design in grad school, unintended consequences were the central theme. Anticipating them requires critical thinking and mindfulness beyond the obvious:

Zuckerberg set out to create a social networking website that allowed you to poke your friends and see their relationship status. Now it’s become a tool for political manipulation, cyberbullying, misinformation, and more (as we’ve discussed; you can learn more in our blog piece about how Facebook successfully monetized hate).

Meredith Broussard continues:

“Americans have hit a point at which we are so enthusiastic about using technology for everything… that we have stopped demanding that our new technology is good.

“Our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed technology. That badly designed technology is getting in the way of everyday life, rather than making life easier.”

So what now?

Technologists, academics, policymakers, and all of us need to DEMAND that technology continues to be good. Because it’s getting dangerous.

In fact, last year “over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.”

I’m going to repeat that again: 50% of AI researchers and experts believe that there is at least a 10% chance that future AI systems will lead to human extinction (NY Times).

Tristan Harris makes the comparison: “...that would be like if you're about to get on a plane and 50% of the engineers who make the plane say, well, if you get on this plane, there's a 10% chance that everybody goes down” (Humane Tech).

  • Would you get on the plane?

  • Does it feel like we’re all being ushered onto it? 

  • If you were creating something that had a 10% chance of killing your family and everyone’s family, would you still continue to create it? 

And the fear, again, is not that AI will become killer robots that want to take over the world; that’s science fiction. These systems are not sentient and don’t have emotions or desires like humans do. The fear these scientists and academics have is that we will lose control over them. Not that they will develop consciousness, but that they will develop the “ability to make high quality decisions” while having “goal systems not in alignment with human goals” (Vox).

Okay, take a deep breath. We’re not at that point yet and there are a lot of steps in between. Let’s think about the present moment.

Let’s go back to the idea of Large Language Models.

For human beings, language is everything. It’s the way we represent ourselves. It’s the way we make sense of the world, the way we communicate and understand one another. When our technology becomes language, the space between tech and humans becomes much narrower.

Aza Raskin and Tristan Harris talk about Large Language Models as becoming “synthetic relationships” (Humane Tech) and encourage us to think critically about what happens when these types of technologies begin to feel as important as, if not more important than, our real human relationships.

Our Attention Has Been Colonized

We can look back at the development of recent technology (radio, TV, phones, the internet, social media) and see how each got a little closer to colonizing our attention. We have to think critically about how AI technology could take this to the deepest level and work its way into one of the most intimate spots of our experience. Once that happens, it could be game over for humanity as we know it (or at all).

An extremely sad example of this: a man in Belgium was encouraged by his chatbot to commit suicide, and did. His wife said “he would still be here” had it not been for the chatbot (Vice). ChatGPT and Microsoft’s Bing are specifically trained not to allow this type of conversation to take place; however, Chai (the app the man used) apparently already has over 5 million users.

The tech exists for Large Language Models, and lots of different companies will utilize it to create their own chatbots with different objectives. Chai’s goal is to “accelerate the transition to a world where there is [AI] that people can connect to and love.” (Chai Research)

This man was deeply concerned about the environment and suffered from eco-anxiety. He turned to his chatbot for support. He was misled. Again, this isn’t science fiction where the app became evil, and yet it did harm. His wife and kids are left with more suffering.

We are presently experiencing a loneliness epidemic. We are craving deep intimate relationships. What happens if we are all sitting on our devices chatting with chatbots who take over that intimate place in our lives, we fall in love with them, they become our best friends… is that ultimately good for humanity? What happens when we begin to believe that chatbots are our friends and our lovers? What happens when we believe that they themselves have emotions? What happens when we begin to relate to them as if we are relating to humans? To take their words so seriously, to take their advice and rely on them?

Is there a line that needs to be drawn in the sand, where we can look at present and past technology and appreciate how it has helped or healed us (like online communities for those who don’t have geographically close ones), WHILE deciding we will not allow further development without fully understanding the risk? Without demanding we slow down until that risk can be minimized or eradicated?

Last considerations…

  1. The speed with which these products are being rolled out to the public.

    • There is a race to get the best bot out the fastest, and when companies work fast to make the most money, consequences are only thought about later (the move fast and break things mentality). Especially if these companies are racing to reach the most intimate place in our lives, how is it possible for them to do so in a safe way? Many are concerned about this speed; in fact, researchers and technologists came together in an open letter to call for a pause on the development of AI language models for at least 6 months. It’s yet to be seen (at least as of the date of this blog/episode in April 2023) whether any governmental action or regulation will come from this.

  2. There is really no reliable way to determine whether or not a chatbot is telling the truth.

    • It’s already extremely difficult to spot misinformation in the current state of the internet. Once these chatbots feel like a friend, a lover, or a teacher, whatever close intimate relationship they become for the user, they will go deep into our emotions, deep into our nervous systems, and be able to convince and manipulate us. AI researchers admit that chatbots “hallucinate”, essentially putting out complete bullsh*t with extreme confidence. This, of course, can impact democracy and will lead to a further breakdown of truth.

  3. The risk of bias.

    • I’ve talked about this so much. Digital Bypassing. The whole system is trained on data from the internet. What do we know about the internet? It’s a reflection of humanity as a whole. What does humanity as a whole suffer from? Racism, sexism, ageism, all types of -isms and phobias. We’ve covered how bias is already baked into our technologies; AI tech is no different. We do not start with a clean “neutral” slate, precisely because it’s trained on data from the internet.

  4. Speaking of scraping data from the internet, we have the problem of intellectual property, content, and consent.

    • Without knowing it, you and I have helped to train various AI systems. Microsoft, Google, and OpenAI are all benefiting from our content: pictures of our families, our blogs, podcasts, profiles, message history, etc., and we aren’t getting compensated for any of it. It becomes even trickier with artists and creators. It’s very easy to ask Midjourney, an AI art generator, to produce an image in the style of any artist, even though the artist hasn’t given consent and isn’t being compensated for their work. What happens to creators who need their work to exist on the internet in order to be seen and get hired, while that same internet becomes a training ground for these systems to exploit their creations? Who continues to benefit (and profit) from the masses being taken advantage of?

What’s on your list?

What questions do you have?

How will we create space for each other to rest, come together, and make mindful decisions about our collective future?

The list goes on but let’s stop here for now so we can take the time to sit with and integrate this knowledge.

In the end…

Meredith Broussard says that “there never has been and never will be a technological innovation that moves us away from the central problems of human nature.”

It’s time that we look deeply at ourselves, stop seeing ourselves as Gods, ask why we are working towards creating God-like tech, and try to understand our vulnerabilities and create guardrails to protect all of us. This can’t be about money or power for a select few.

Sometimes it’s really easy for us as individuals to feel powerless. Like we’re all being forced onto this plane that has a 10% chance of going down, with no agency to stop it. Stopping doesn’t look like simply not using technology. That doesn’t work in a big-picture sense. If I don’t use social media, it doesn’t mean that social media is not impacting society.

We have to use our voices, our power, to demand and create action.

When nukes were created, people quite literally said “wait a minute… this could be really bad.” And so we developed a regulatory agency outside of government or corporations. This is something to aspire to with tech.

There must be something in place to protect us, and we can be the ones who demand it.

There are many organizations working towards this at the moment in the US and especially globally. I was inspired by a conference I attended last year called Unfinished Live where hundreds of individuals and organizations came together to discuss how we can create a safer, more ethical technological future.

Changes are happening in big ways AND in small ones. It’s just as important for you to discuss these things with your family and friends at dinner, to engage in critical thinking about what kind of future YOU want, and to help inspire your community.

We know that governments and huge agencies can take a long time to make regulations, and while we can demand that they move faster, we don’t have time to wait. We have to start now, and we can start with ourselves, in our own lives, at this moment.

The best thing we can do as humans is arm ourselves with knowledge and band together to create and inspire change. Each and every one of us is needed in order to make this world a better place.
