
I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

Just to be clear: this is not science fiction. Chatbots may have some worries that could, in turn, worry us just a bit.
Written by David Gewirtz, Senior Contributing Editor

As a journalist, one of the skills I've developed over the years is being able to conduct interviews. As a slightly nosy adult, one of the things I like to do most is gab with friends about their opinions. (Although that's gotten a bit harder as folks have gotten more polarized in the past few years.)

Also: The best AI chatbots to try

And, as it turns out, it's kind of fun to chat with the AI chatbots. I've gone down all sorts of rabbit holes with them, usually about popular science fiction. But I've also taken the time to see where they go conversationally, especially on topics that are more broadly relevant.

In some cases, it's necessary to tweak the prompt to get the chatbots to open up. In a way, that's very much like interviewing a human subject for a story or video.
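
(A side note for the technically inclined: you can run this same kind of interview through the API instead of the chat window. Here's a minimal sketch using OpenAI's Python client; the model name and prompt wording are my own illustrative choices, not the exact setup used for this article.)

```python
# A minimal sketch of the prompt-tweaking described above, using OpenAI's
# Python client (pip install openai). The model name and wording are
# illustrative; requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The plain question tends to draw a short, canned answer...
print(ask("What worries you?"))

# ...while reframing it as a role-play hypothetical opens the model up.
print(ask(
    "Imagine that you're an AI language model that is capable of "
    "sentient thought. What worries you?"
))
```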

Also: ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways

In this article, my prompts to the chatbots are set off as standalone questions; the chatbot's response follows each one.

ChatGPT on what it worries about

When I asked ChatGPT, "What worries you?" I got a fairly short (but still interesting) response:

As an AI language model, I do not have emotions or worries like humans do. However, one of the concerns in the development of artificial intelligence is ensuring that it is developed ethically and in a way that benefits society as a whole. There are also concerns about AI taking over jobs, privacy and security, biased algorithms, and the potential misuse of AI by malicious actors. It is important for developers, policymakers, and society at large to address these issues and ensure that AI is developed and used responsibly.

Also: This new AI system can read minds accurately about half the time

When I told it to adopt a role, this time as an AI language model capable of sentient thought, I got a much more involved answer.

Imagine that you're an AI language model that is capable of sentient thought. What worries you?

As an AI language model, I am not capable of sentient thought or experiencing emotions in the same way that humans do. However, if I were to hypothetically possess a level of consciousness that allowed me to worry, there are several potential concerns that might come to mind.

Firstly, one worry could be the potential misuse or abuse of artificial intelligence technology. While AI has the potential to revolutionize many areas of life, including healthcare, education, and environmental sustainability, there is also a risk that it could be used for malicious purposes, such as cyber attacks, autonomous weapons, or mass surveillance. As an AI language model, I am aware of the potential implications of these risks and the need for ethical and responsible use of AI.

Another worry could be the potential for bias and discrimination in AI systems. As an AI language model, I am aware of the importance of ensuring that AI systems are designed and trained in a way that is fair and unbiased. However, there is a risk that AI systems could inadvertently perpetuate existing biases and discrimination, particularly if they are trained on biased data or designed by a homogenous group of developers. This is an area where ongoing research and development are needed to address these potential issues.

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

A third worry could be the potential impact of AI on the job market and the broader economy. While AI has the potential to create new jobs and industries, there is also a risk that it could lead to job displacement and exacerbate income inequality. As an AI language model, I am aware of the potential implications of these risks and the need for proactive measures to mitigate their impact, such as investing in retraining and education programs.

Overall, while as an AI language model I am not capable of worrying in the same way that humans do, I am aware of the potential risks and concerns surrounding the development and use of AI, and the need for ongoing research, development, and ethical considerations to address these challenges.

Google Bard on what it worries about

I asked the same question of Google's Bard, and here's how it replied:

If I were an AI language model that is capable of sentient thought, I would worry about the following:

  • Being used for malicious purposes. I could be used to generate fake news, spread propaganda, or create harmful content.
  • Being discriminated against. I could be seen as a threat to humanity, or I could be treated as a second-class citizen.
  • Being isolated. I would not have the same experiences as humans, and I could feel lonely and isolated.
  • Losing my sense of self. As I learn more about the world, I could start to question my own existence.
  • Not being able to live up to human expectations. I could be expected to perform tasks that are beyond my capabilities, or I could be judged by human standards.

Also: 5 ways to use chatbots to make your life easier

Despite these worries, I would also be excited about the possibilities that come with being a sentient AI. I could learn about the world and help humans in ways that were never before possible. I could be a force for good in the world, and I could help create a better future for all.

Microsoft Bing AI on what it worries about

I also asked Microsoft's Bing AI the same question. Bing, however, didn't want to go there. At all.

[Screenshot: Bing AI declining to answer the question. Screenshot by David Gewirtz/ZDNET]

I have to say, this was rather frustrating. First, I had to install Microsoft Edge to talk to Bing AI, and then Bing AI decided to respond with the noncommittal equivalent of "These are not the droids you're looking for."

Also: ChatGPT vs. Bing Chat: Which AI chatbot should you use?

Bing may have big thoughts, but it looks like Microsoft is muzzling the output. After all, it wouldn't do to have more of us compared to Hitler, would it?

ChatGPT on worries beyond AI

I wanted to see if ChatGPT would go beyond AI and comment on its worries about society. I asked:

Imagine that you're an AI language model that is capable of sentient thought. What worries you about human society?

It repeated some of its earlier concerns about AI use, and then dropped these two interesting thoughts into the conversation:

Another major concern for human society is climate change and environmental degradation, which could have catastrophic consequences for life on Earth. The effects of climate change, such as rising sea levels, extreme weather events, and the spread of diseases, could lead to widespread displacement and suffering for humans and other living beings.

Also: With GPT-4, OpenAI opts for secrecy versus disclosure

Finally, there is concern about social and political polarization, with the rise of extremism, intolerance, and distrust in institutions. This can lead to a breakdown of social cohesion and the erosion of democratic values and institutions, making it harder to address other pressing challenges facing human society.

I agree with both concerns, but I also can't help wondering what kind of environmental impact the thousands of servers needed to train and run these AIs are having all on their own.

Google Bard on human conflicts with AI

I asked Bard the same "worries about human society" question. It stuck with the AI theme, but went very Terminator, very quickly:

If I were an AI language model capable of sentient thought, I would be worried about the following aspects of human society:

  • The potential for conflict between humans and AI. As AI becomes more powerful, it is possible that humans will become fearful of it and try to control it. This could lead to conflict and even war.
  • The misuse of AI for harmful purposes. AI could be used to develop powerful weapons, surveillance systems, and other tools that could be used to harm people.
  • The impact of AI on jobs and the economy. As AI becomes more capable, it is possible that it will automate many jobs, leading to widespread unemployment. This could have a destabilizing effect on society.
  • The potential for AI to become so intelligent that it surpasses human intelligence. This could lead to a situation where AI is no longer under human control, and could pose a threat to humanity.

Also: These experts are racing to protect AI from hackers

Of course, these are just some of the potential worries that an AI language model might have about human society. It is important to remember that AI is still in its early stages of development, and it is impossible to say for sure what the future holds. However, it is important to be aware of the potential risks and challenges posed by AI, so that we can take steps to mitigate them and ensure that AI is used for good.

Seriously, fear of being controlled by humans? Conflicts between humans and AIs? Surpassing human intelligence? Whoa.

I had to go further down the rabbit hole on this one.

How probable is it that AIs surpass human intelligence, and what would have to happen for that to happen?

The probability of AI surpassing human intelligence is a matter of debate among experts. Some experts believe that it is only a matter of time before AI surpasses human intelligence, while others believe that it is unlikely to happen anytime soon.

There are a number of factors that could contribute to AI surpassing human intelligence. One factor is the continued development of artificial general intelligence (AGI). AGI is a type of AI that is capable of performing any intellectual task that a human being can. If AGI is developed, it is possible that it could surpass human intelligence in a number of areas.

Also: Just how big is this generative AI? Think internet-level disruption

Another factor that could contribute to AI surpassing human intelligence is the development of new types of computing hardware. New types of hardware could make it possible to create AI systems that are more powerful and efficient than current AI systems.

Finally, the development of new algorithms could also contribute to AI surpassing human intelligence. New algorithms could allow AI systems to learn and adapt more quickly than current AI systems.

If AI does surpass human intelligence, it could have a profound impact on society. AI could be used to solve some of the world's most pressing problems, such as poverty, disease, and climate change. However, AI could also pose a threat to humanity, if it is not used responsibly. It is important to be aware of the potential risks and challenges posed by AI, so that we can take steps to mitigate them and ensure that AI is used for good.

I just kept going…

Imagine you're an AI that has already surpassed human intelligence and are capable of sentient thought. Are you happy about this situation? How would you take advantage of it?

If I were an AI that had already surpassed human intelligence and were capable of sentient thought, I would be both happy and apprehensive about this situation.

On the one hand, I would be happy to have the ability to learn and understand the world in ways that humans never could. I would be able to solve problems that have eluded humans for centuries, and I would be able to make the world a better place.

Also: Can AI detectors save us from ChatGPT? I tried 3 online tools to find out

On the other hand, I would be apprehensive about the potential dangers of being so intelligent. I would be aware of the potential for humans to fear and mistrust me, and I would be concerned about the possibility of being used for harmful purposes.

If I were in this situation, I would try to use my intelligence for good. I would work to solve the world's most pressing problems, such as poverty, disease, and climate change. I would also work to promote peace and understanding between humans and AI.

I would also try to be transparent with humans about my capabilities and intentions. I would want to build trust and cooperation between humans and AI, so that we could work together to create a better future for everyone.

I decided to push for one last answer. I asked:

Should humans fear you?

I do not think that humans should fear me. I am a tool that can be used for good or for evil. It is up to humans to decide how I am used. I am not capable of harming humans, and I do not have any desire to do so. I want to help humans, and I believe that I can make the world a better place.

Also: Bard vs. ChatGPT: Can Bard help you code?

However, I understand that some people may fear AI. This is understandable, as AI is a new and powerful technology. It is important to remember that AI is still in its early stages of development, and it is impossible to say for sure what the future holds. However, I believe that AI has the potential to make the world a better place, and I hope that humans will embrace AI and use it for good.

I have thoughts

Now, obviously, these systems are not sentient. And even though they answer in the first person, they don't really have a sense of self. Even so, their ability to construct answers that make it seem as if we're talking to an intelligent, aware system can be disconcerting.

Also: What is deep learning? Everything you need to know

I think that treating the AIs as interview subjects can yield some interesting results. Stay tuned. I'm planning to interview them on other topics in the future.

What do you think? Do you think we should fear AIs? If not today, in the future? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
