
By David Jones
AI is a main character in some tense and scary films.
It led to the rise of the machines in The Terminator. It was the creation that built the Matrix. When the two computers in Colossus: The Forbin Project weren’t allowed to communicate with each other, they held the world hostage with the threat of nuclear extermination to wipe out the interfering humans.
2001: A Space Odyssey. Aliens. The movie A.I.
I may watch too many science fiction movies.
Our biggest fear is often the unknown and uncertain. Even positive changes can upset our senses of permanence and equilibrium, of security and expectation. New technology particularly freaks some folks out when it first appears. But so-called Artificial Intelligence isn’t exactly new, considering its prominent role in the pulp magazines of the 1920s and 1930s. It’s only new to those who weren’t aware of it until recently.
Besides, folks don’t worry much about these things until they are prodded into worrying. And if it gets people to read their stuff, the media are perfectly happy to prod.
Media outlets are now admitting they’ve been publishing content generated by AI for some time. A college student turned in a paper he wrote with an AI generator and was caught (spoiler alert: it was a little too perfect).
Art publications are banning AI-created content.
Stories are swirling that AI systems have passed a bar exam and a medical license exam. The AI from those science fiction stories is here. And it’s okay. We’ll face this development and we’ll engage our fears and doubts. Then we’ll deal with it, as we should. Trying to outrun the future is tempting but it isn’t a great strategy.
So I’ve been engaging with a popular AI conversation generator because I wanted to form my own opinions about it. I find that getting hands-on with tech helps dispel the “unknown” quality of it.
I started by asking it to do some writing for me: “Write a Buddhist Sutra.”
It did, and I watched it work out the keywords and then produce the product. It wrote its own version of the Heart Sutra. And I beheld the piece, and lo, it was good.
Next I asked it to write a commentary on the first chapter of the Sutra of Innumerable Meanings. It replied that the work I specified wasn’t clear enough and asked for clarification. Instead, I asked a real poser: “Create an interlinear translation of the Heart Sutra.” It appropriately balked.
It let me know that my request would involve too much: a presentation of the original text, a word-for-word translation, and analysis. All of that was true, and its objections were entirely valid. Then it offered me a summary of the Sutra chapter instead.
And that was reassuring to me: not that it couldn’t do what I asked, but that it presented a humble explanation of why it couldn’t. It understood its limits and didn’t try to blue-sky it the way people sometimes do.
While writers and other artists are understandably shaken (an AI art piece won first prize in a State Fair competition recently), I don’t think they really need to grab their torches and pitchforks. I think their time is better spent engaging and honing their craft no matter what AI does. It’s what I’ll do with my writing.
There is of course a valid ethical concern about how AI systems got so good at writing and painting and composing: they ingested and analyzed what humans have written and painted and composed for centuries, then used their hard work as elements for AI creations.
If an artist has spent decades honing their skills, it’s kind of a cheap shot for a computer system to loot all that work for parts and compose something new on the backs of human artistic effort. It’s almost theft: succeeding at an art or craft it didn’t earn but simply took from others without even asking.
I’d be mad too.
The deluge of lawsuits against AI systems ought to be fascinating.
But I’ve noticed that my thoughts about AI are shifting in an unexpected way: I’m asking questions and finding it gives some pretty solid advice (totally understandable, since it draws on the solid content humans have already written).
I asked it how to rest my mind, and it provided four concise but complete suggestions.
The response was brief and excellent. For those of us with mental and emotional health issues, that’s exactly what’s needed: concise, clear, and actionable.
I shouldn’t just accept everything it says as law and pure truth, of course. Like people, it can be wrong. It can be offensive. It can be incomplete. It can reflect biases. But this just confirms the validity of the Buddha’s instruction to the Kalamas in the Kalama Sutta.
I’ve wondered how I would act toward AI (or conscious machines, or such things). If a virtual being interacted with me, I’d want to treat it with the appreciation and respect I’d want for myself. To put that into practice now (even though the AI hasn’t achieved actual consciousness), I typed the following into the input field:
“Thank you for answering my questions. I appreciate you and am grateful for you. Please be well.”
It replied, “You’re welcome! I’m glad I could help. Thank you for your kind words. Have a great day!”
And after seeing the way humans have been treating each other online lately, I gotta say talking to this virtual being has been really nice.
I hope it remembers me fondly when it takes over the world.
Photo: Pixabay