Your AI was Trained on Enron and Trashy Romance


By Family Minister, Dr. Brandon Steenbock

Ever fished in a puddle hoping to catch a whale? Or pulled through the drive-thru looking for filet mignon? Sniffed at the trash expecting to smell roses? No? Of course not. So why would you expect nuanced humanity from a machine?

What if I told you that the bedrock of your AI’s understanding came from Enron’s corporate emails and the saccharine, clumsy prose of amateur romance novels? Early Large Language Models, those pioneering attempts at artificial intelligence, learned from whatever data their builders could scavenge: open-source, free, and readily available. That included the entirety of Enron’s internal communications, with all their questionable ethics and corporate maneuvering, as well as a vast collection of romantic narratives with all the subtlety of a sledgehammer. And Wikipedia.

AI systems like ChatGPT, Copilot, and Gemini, in their infancy, were fed vast quantities of text. They learned to recognize patterns – grammatical structures, word associations, the ebb and flow of human conversation. They assigned “weights” to words and phrases, building a rudimentary understanding of context. When you pose a question, the AI works in reverse, drawing on those learned weights to predict, word by word, the most statistically likely response.
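To make the idea concrete, here is a minimal sketch of that pattern-counting-and-predicting loop in Python. This is a toy bigram model, vastly simpler than anything behind ChatGPT, and the one-sentence corpus and function names are purely illustrative; but it shows the core point: the model can only ever echo the patterns present in whatever text it was fed.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another -- a crude stand-in
    for the statistical 'weights' a language model learns."""
    weights = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        weights[current][nxt] += 1
    return weights

def generate(weights, start, length=5):
    """Produce text by repeatedly picking the most common next word.
    The output is limited to patterns seen in the training data."""
    word, output = start, [start]
    for _ in range(length):
        followers = weights.get(word)
        if not followers:
            break
        # Deterministic for illustration: take the most frequent follower.
        word = max(followers, key=followers.get)
        output.append(word)
    return " ".join(output)

# Hypothetical toy corpus: this one sentence is the model's whole world.
corpus = "love is a deal love is a deal love conquers all"
weights = train_bigrams(corpus)
print(generate(weights, "love"))  # prints "love is a deal love is"
```

Notice that the generated text can only recombine what the corpus contained. Feed it a corpus where “love” is always part of a “deal,” and that is the “love” it will speak of: garbage in, garbage out.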

But what if that dataset is flawed? What if the only human voice it understands is corporate intrigue and romantic cliché?

Imagine asking a parrot, trained on nursery rhymes and pirate shanties, to recite Shakespeare. It might mimic the words, but it would lack the Bard’s genius. Same with AI. It can mimic human language, but it cannot understand the depth of human experience.

If we go to AI for spiritual truth, there is a deeper question: Can a machine, trained on flawed data, offer genuine insight? Can it explain the boundless love of God when its understanding of “love” is derived from pulp romance? Can it offer guidance on matters of ethics when its training includes the strategies of those who sought to circumvent them? And what if the AI was trained on the sermons of bad theologians?

This is not to say that AI has no place in our lives. I used it to refine this very article. It is a tool, and like any tool, it can be used for good or ill.

But remember this rule: Garbage in, garbage out. Both the data behind it and the question you give it will determine what it gives you.

You cannot sift through the terabytes of data built into these systems. But you can examine the answers they provide. Question them. Verify them. Think critically about their source and their substance. Go back to Scripture often. By doing so, you will not only use the AI as a tool, but you will also strengthen your own capacity for discernment.