I am by no means technically qualified to dig into the future of AI on any basis beyond how it makes me feel. Luckily for me, that’s the one thing AI cannot do. Although I can try and make it:
The development and use of AI in various fields, including writing, is a complex and nuanced topic. It is up to each individual to decide how they feel about it. Some people may be concerned about the potential impact of AI on employment and job opportunities, while others may see it as an opportunity to enhance and improve certain tasks. It is important to consider the potential benefits and drawbacks of AI and to think carefully about how it may affect different industries and individuals.
There’s part of me that has a deep belief that AI will get so much better. And as it gets better, it will become more indistinguishable from human creation. And as that happens, we’ll see the world flooded with more and more AI content.
The other side of me dislikes it. This side of me is harder to justify. I can’t exactly explain why AI doesn’t feel like a human response most of the time. But I also cannot explain why it leaves me agog when it does seem to mimic human life so well. Maybe that’s the problem; even at its best, it feels like mimicry. This almost certainly stems from my knowing it is AI. Would I have any clue if I read something AI-generated in a situation that left room for error?
My gut tells me that ChatGPT has gotten good enough that in many situations it would mimic a human interaction to the point of passing the Turing test. And yet, the thought of this makes me squeamish and fills me with disbelief.
Partially, I believe this is because of my values. I believe that artists deserve to be compensated for what they create. We should all have some level of control over the messages we put out into the world. However, a recent podcast about Adam Smith has brought to light that even when the creator and reader are humans, the message can still be changed and completely misunderstood.
The next issue involves the problem of misinformation. It seems obvious that using AI to reinforce your beliefs and stop looking for other answers could quickly become the norm. Large language models can create beautiful-sounding responses, but that does not mean they create accurate or factual responses. So if I type something into ChatGPT, assuming what I get is correct, or deluding myself into thinking so, I can almost certainly get something approximating an intelligent answer.
AI lacks subtlety. Subtlety isn’t what it’s currently created for, so I understand that’s not the goal, but humans have a capacity for subtlety that I don’t think AI can mimic. At least not at this time. Humans are flawed. Humans are confident when they shouldn’t be and meek when they could be. But all of our insecurities make what we write and say feel human. ChatGPT is pure confidence. Everything that comes out of the prompts is certain, regardless of how wrong it may be. And if you are like me and it hasn’t been wrong for you yet, try asking it about the fastest sea mammal in the world. I’ll give you a hint: the peregrine falcon is not a sea mammal.
This subtlety of incorrectness, and the attitude with which we present information we may not be certain about, is very human. There are probably some variables that OpenAI could adjust to make its models seem more human in this regard, but one way AI differs from human interaction is that we never want AI to be unconfident. We expect it from humans and hate it from computers, which is a failing of our understanding of knowledge, not of the AI itself.
This blog has been on my mind for a couple of weeks, basically ever since I got access to ChatGPT. But my inability to push through my lack of knowledge to write it has led to this moment: a bit of a pinot grigio-infused rant. All I want to do is write my thoughts for my readers, and I’m absolutely plagued by doubt and indecision to the point where I can only write with a little lubricant. I’m sure AI will fix this problem in the near future.