An Authentic Impression of Artificial Intelligence
Is Gene Kaye the only one who gets almost overwhelmingly creeped out by AI imagery to a disturbingly deep degree? Let's find out.
Last July, in “Deepfake Almighty,” we took a look at the rapidly emerging Artificial Intelligence (AI) phenomenon from the perspective of its sinister and creepy capability to digitally synthesize audio, images and video that can already—even at this nascent stage of the technology—appear authentic enough to be accepted as real.
Or at least close enough to it to be impressive enough for nobody to care that it’s not.
We also used that as a springboard into some spiritually and scripturally informed speculations about what the ultimate purpose of such a technology could be, especially in terms of what the Book of Revelation tells us is going to be happening in the first half of the horror show that’ll be the Great Tribulation, of which the Lord Jesus, Yeshua the Messiah, warns us many times over, most prominently in Matthew 24, and most specifically in verse 21 (NKJV):
“For then there will be great tribulation, such as has not been since the beginning of the world until this time, no, nor ever shall be.”
So, consider what you’re about to read a follow-up to all that, from the more intimate and immediate perspective of an up-close and personal experience with AI; one that’s led to some observations, and to conclusions that would seem to support the speculations we made about a possible purpose for this technology in our first exploration of it last summer.
In just the time since then, AI as a widely available tech application has moved from producing automatically generated text (as a proxy for people actually thinking and writing by their own abilities and efforts) to producing artificially generated sound, images and video on demand. Where less than a year ago AI was something in the realm of most people’s imaginations, it’s suddenly everywhere, being applied in everything, and at everybody’s fingertips.
I certainly can’t be the only one to whom the abrupt and exceedingly rapid emergence of AI technology—not just as a phenomenon on the consumer tech scene, but as an instrument being as quickly and universally embraced and applied as the pencil once was and the mobile device now is—is somewhat eyebrow-raising. I can’t help but wonder why this technology seems to have sprung so quickly from nowhere and gone from 0 to 60 with such breathtaking acceleration.
The conclusion we drew in “Deepfake Almighty” about the profane agenda of the beast world system, and how AI might fit into it, may have something to do with how and why it’s burst onto the world scene so quickly and penetratingly.
But today we’re going to make a rare departure from the big-picture considerations of the puzzle, and take a closer, more detailed look at one of the pieces, to see what we might be able to learn about the nature of the big picture from an examination of one of its parts.
“Dear Diary: Had My First Date With AI…”
As I alluded to, AI is all of a sudden everywhere, available for everything, and being pushed on all of us to embrace and engage with as much as we can, in as much of what we do as possible. We’re using it to compose written work for us, produce music for us, and generate images for us that most of us would be incapable of producing on our own, for lack of the talents and skills it takes.
As a result, suddenly, the internet and social media are bursting at the bandwidth seams with AI images; and you can’t click, tap or scroll through just about anything without running into them somewhere and somehow.
That’s how I managed to experience my first up-close and personal run-in with AI, and the experience was at once so profound and so jarring that I felt I had to share it; if for no other reason than to raise a few eyebrows besides my own.
A short time ago, I came across a post on X (formerly Twitter), citing Matthew 9:37-38:
“The harvest is plentiful but the workers are few. Ask the Lord of the harvest, therefore, to send out workers into his harvest field.”
The post came with an attached AI-generated image illustrating the verse, depicting a pastoral scene of a wheat field through which “the Lord of the harvest,” whom we assume to be the Lord Jesus, is seen walking, accompanied by whom we can also assume to be his disciples, going through the field, like harvesters gathering their crop.
It was my first encounter with an AI-generated image I could examine up close. But before I could take that good, close look at it, my emotions overwhelmed my reason (which I suspect may be part of the “magic” somehow baked into the tech in the first place: making it irresistibly appealing without requiring much consideration beforehand; the same kind of winning combination that The Simpsons’ Lenny Leonard tells us alcohol and night swimming is).
With my reason and my capacity for disbelief temporarily suspended—more like impaired—I was immediately, almost beyond any ability to resist, so enamored of the image and impressed with its vivid realism that I downloaded it.
Not long after that, still without having taken any closer look at it than the essentially thumbnail view I’d had in the original X post, I was so impressed with it that I resolved to make it the home screen image on my phone.
So I set it as my home screen wallpaper. Then, when the image resolved and I was able to take my first good, up-close and in-detail look at it...
Suddenly, all hell inside me broke loose.
After just a couple of seconds of examining an ostensibly peaceful, pastoral, biblical image that should’ve been warming the cockles of my faithful heart, I instead began to feel an almost nauseating physical disturbance spreading through me that I still find difficult to put into words.
Seasoned drinkers may recognize something of the feeling: that moment the morning after when everything inside you’s already tossing and turning with nowhere to go but up and out, you feel disoriented, uncomfortably warm, your skin’s crawling, and then you get that feeling of impending eruption as whatever’s churning in your stomach is about to go Mt. Vesuvius all over wherever you happen to find yourself.
If you’ve ever fainted, you may also recognize the feeling: that disorienting, disturbing discomfort that comes as everything inside you starts to go liquid, you start to feel yourself losing equilibrium, your senses are all muffled, you get all hot and bothered and it feels like reality’s about to pixelate and disperse into oblivion right before your very eyes.
I’ll admit it wasn’t quite as drastic as either of those illustrations, but it was an imposing enough combination of the two to only a slightly lesser degree, to still be extremely unsettling and palpably, physically disturbing.
All I needed was just a couple of seconds of that to tell me I’d seen enough, and to get that image off my phone, like, now! And so I did.
“Dear Diary: My First Date With AI: A Post-Mortem.”
After I was done reeling from the feeling—as much from the shock of the incomprehensibility of it as from its actual physical effects—I had to consider why I’d experienced so profoundly disturbing a physical reaction from simply looking closely at a photographic image; because it was such a stunning experience that I had to try to reason out what had just happened to me and why.
It didn’t all come to me at once, but over a period of some days of chewing it over in my mind. And here’s the conclusion I was able to draw from the experience, based on the experiment, the observations I made from it, and everything I’d considered about it.
I determined the physical discomfort I’d experienced had to have come about as the result of my senses being exposed to a disorientingly disturbing mix of the real and the unreal, to a practically indistinguishable extent; an extent at which the mind has difficulty classifying the contradictory input it’s receiving from the physical senses.
The mind may understand beforehand that it’s looking at an artificial reproduction. But with AI technology at its current capability levels, its reproductions are a mix of the apparently genuine and the detectably artificial. In the image in question, the overall picture painted by the sky, the clouds, the sunlight and the mountains in the distance, all of which actually look real, leads the mind to assume the rest of the image is real as well; that what it’s looking at isn’t an artificial reproduction at all, but an actual photograph of a real scene.
So, under the impression of a quick initial glance, the physical senses tell the mind what it’s looking at is real, because that’s the first natural assumption to make based on the initial sensory input.
But then the next natural impulse is to examine the image a little more closely and in more detail.
And that’s when the bad acid trip began for me.
Because when I started to take a closer look at what my senses and my mind had initially told me was a real image, I noticed that while some of the details—like the sky, the clouds, the sunlight and the mountains in the distance—do indeed look real, a lot of the other details in the image don’t. I could tell they were artificially reproduced, like an almost-perfect painting: good enough to look real at a quick glance, but detectably artificial once you examine them closely and in detail.
And then the mind immediately has to wonder: what is this “so real” doing with all this “unreal” in the same image?
That was the first cause for the dissonant disturbance.
The second came right after it, when I noticed the un-reality of the image doesn’t come from a reproduction that falls short of capturing the details in their genuine, physically manifested forms, but from details that are, in fact, too perfect!
It’s not a reproduction that captures reality the way a photograph does, but one that renders an artificial perfection: the details look unreal not because they’re reproduced unskillfully or imprecisely, like in a painting, but because they’re reproduced too skillfully, too perfectly.
That’s where the disturbance and the resulting discomfort come from, at least for me: the cognitive dissonance between what the mind is convinced is real and what the senses tell it isn’t; not because it’s too flawed, but because it’s too perfect. And when you stare at it long enough, the real and unreal parts mesh together into a single something your mind doesn’t know how to process, and the disturbance becomes disorientation, and then discomfort.
I found myself looking at a picture of the Lord Yeshua walking through a sunny field with his disciples that should’ve been giving me the warm and fuzzies all over; instead, I was experiencing the same physical reaction I remember having when I watched Un Chien Andalou, the film Salvador Dalí made with Luis Buñuel, in which one scene has everybody’s favorite “weird and creepy for weird and creepy’s sake” sensory disruptors slowly slicing a living human eyeball open with a razor blade, while the eyeball is still in the living human.
So in the same way everybody in a nightclub looks really good in the dim lighting with all the booze and drugs at play, an AI-generated image looks real if you’re not paying too close attention. But when you do, you see it’s so close to looking real that it’s disturbing. And then you realize it’s distinguished from the real not by flaw, but by perfection; and that just cranks the disturbance up to 11.
What adds to the disturbance and discomfort is the way both the technology itself—by how it works and what it produces—and the marketing propaganda encouraging us to embrace it go out of their way to make us believe it’s “better than real.” That’s not just disturbing but extremely creepy, because it looks better than real while you know it isn’t; and the only conclusion we can come to from that is to ask:
“Why?”
Why are we being encouraged with such insistence to embrace something that produces something we’re exhorted to accept as real? That our senses and minds tell us isn’t? And not because it’s flawed, but because it’s too perfect?
Can it be because both the technology itself and the people pushing it on us, are up to something no good?
A few words from that earlier exploration we did in “Deepfake Almighty” might be worth repeating here as a possible answer to these questions.
“Essentially, the whole thing, the technology and the apparent teleology driving it as articulated by fake Morgan Freeman, appears by design to be confusing the supernatural (the spiritual nature outside the physical) with the unnatural.
“Living images of artificially-generated people who look and sound and act real, aren’t natural; but, by blurring our ability to distinguish between what’s real and what’s not and increasing our susceptibility to accepting the unreal as real based solely on how it makes us feel, we’re being told it’s perfectly natural for the unnatural to replace the natural as reality, and that we should start accepting it as such.
“So we have to ask once again... why? Why do we need a technology that gives us the ‘ability’ to not be able to tell real from fake, and even if we can, not to care that the fake we know is fake, is fake, because it makes us feel like it’s real and we accept it as such?
“Could the unreal and unnatural Morgan Freeman AI deepfake trying to convince us to accept the unreal and unnatural as real, be the beginning of the psychological conditioning that’s preparing humanity to be susceptible to accepting the validity of Satan’s unnatural and unreal rule through his unholy and unreal trinity of him and the two beasts?”
I don’t know about anybody else, but my first experience with AI made me feel actually, physically sick enough from and about it to really want nothing at all to do with it. The disturbance and discomfort could very well come from the fact that all that excess perfection, rendering the artificial imagery too unreal to be accepted as the “better than real” we’re encouraged to accept it as, is unnatural.
The world in which we live is flawed. So’s everything and everybody in it. Because of our sinful nature, all manifested reality was rendered imperfect by the degenerative corruption imposed on it by that sinfulness; itself imposed on it and us, by Satan.
Which means anything that appears to be perfect, in a natural reality characterized by universal imperfection, has to be, by definition and by default, unnatural.
Just to confirm for myself that it was the image itself, and not something else, that caused that initial experience, I’ve tried looking at other AI-generated images since then; and although the intensity of the feeling was never as high as that first time, the disturbance and creepiness were there with every one of them.
It could just be me.
Is it?
(Photo credits: AnalyticsInsight.net; TheAIBibleOfficial via Instagram; Antinomi.org; Bible.com via Pinterest)
Kentucky Fried Christian encourages and welcomes your feedback and input. Something you like? Something you don’t like? Something of which you’d like to read more? Or less? Some topics you’d like to see discussed? Submit a comment or participate in the subscriber chat. You can follow Gene Kaye and Kentucky Fried Christian on Twitter, Gettr and Truth Social. Check out the Videograms of Verse video series on the KFCh channel on Rumble. Listen to the podcast now right here on this Substack, or on Spotify or Podbean. Please consider becoming a paid subscriber to help support Gene Kaye’s work; and now, get free access to the serialization of Living With Caligula, an anthology of short-fiction tales of the earliest Christian believers, available only on this Substack.
Although it was the image in the article that did it to me, it would have to figure that the day after publishing it, I found an even better example of exactly what I'm talking about, especially in regard to the "too perfect" part. You can see it here: https://twitter.com/DaveSmi68437143/status/1758614516949291490?t=UVtcBSwfRQE2UNvqIwSNaw&s=19