ChatGPT's greatest accomplishment may be its capacity to trick us into believing that it is honest.
In his autobiography, American author Mark Twain attributed to former British Prime Minister Benjamin Disraeli the quip: "There are three kinds of lies: lies, damned lies, and statistics."
Twain may well have misattributed it. Either way, in a remarkable advance, artificial intelligence combines all three in one tidy little package.
ChatGPT and other generative AI chatbots are trained on massive datasets scraped from the internet to produce the statistically most likely response to a prompt. Their answers are based on the wording, spelling, grammar and even style of other websites, not on any understanding of what makes something funny, meaningful or true.
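To make that mechanism concrete, consider a deliberately crude sketch: a bigram model that counts which word most often follows another in a tiny invented corpus and always emits the most frequent continuation. This toy is my own construction, nothing like the billion-parameter neural networks behind ChatGPT, but the objective is the same: predict the likeliest next word, with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real model trains on
# hundreds of billions of words scraped from the web.
corpus = ("the capital of malaysia is kuala lumpur . "
          "the capital of france is paris . "
          "the capital of france is paris .").split()

# Count, for each word, which words follow it and how often.
next_words = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_words[word][following] += 1

def most_likely_next(word: str) -> str:
    # Return the continuation seen most often in training,
    # whether or not it happens to be correct.
    return next_words[word].most_common(1)[0][0]

print(most_likely_next("is"))  # prints "paris": the most frequent, not the "true"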
ChatGPT presents those responses through a "conversational interface," carrying on an exchange with users by way of contextual cues and deft gambits. It is statistical pastiche delivered with statistical panache, and that is the problem.
Persuasive but unthinking
When I talk with another person, I draw on a lifetime of experience in communicating with people. So when a program speaks like a person, it is very hard not to respond as if it were a real conversation: taking something in, thinking it over and replying in the context of both our beliefs.
Yet with an AI interlocutor, that is not at all what is happening. AI systems cannot think; they have no cognition of any kind.
When AI delivers information conversationally, it becomes more convincing than it deserves to be. By mimicking the human rhetorical strategies that signal reliability, competence and understanding, the software presents itself as far more trustworthy than it actually is.
Two questions arise: whether the output is correct, and whether people believe it is correct. The interface side of the software promises more than the algorithm side can deliver, and the designers know it. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, admits that the program is "incredibly limited, but good enough at some things to create a misleading impression of greatness."
Even so, a flood of businesses is rushing to embed this early-stage capability in their user-facing products (including Microsoft's Bing search) in an effort not to be left behind.
Fact and fiction
Even when the AI gets something wrong, the conversational interface delivers the result with the same confidence and polish. For example, as science-fiction writer Ted Chiang points out, the tool makes errors when adding larger numbers, because it has no actual sense of mathematics.
It simply pattern-matches examples of arithmetic it has seen on the internet. And while it can find examples for more common math problems, it just hasn't encountered training text involving larger numbers.
It doesn't "know" the math rules that a 10-year-old could explicitly apply. Yet, as this exchange with ChatGPT shows, the conversational interface presents its answer with confidence, no matter how wrong it is.
User: What's the capital of Malaysia?
ChatGPT: The capital of Malaysia is Kuala Lumpur.
User: What is 27 × 7338?
ChatGPT: 27 × 7338 is 200,526.
It isn’t.
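For the record, the explicit rule ChatGPT lacks is easy both to state and to run. Here is a minimal Python sketch of grade-school long multiplication (a toy illustration of the 10-year-old's procedure, not anything ChatGPT executes), alongside the direct check:

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication: one partial product per
    digit of b, shifted by its place value, then summed."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place
    return total

print(long_multiply(27, 7338))  # 198126, not 200,526
print(27 * 7338)                # 198126, same answer computed directly
```

The rule yields 198,126 every time; a system that pattern-matches text offers no such guarantee.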
In a biography of a public figure, generative AI can blend real facts with invented ones, or it can attribute papers that were never written to otherwise credible sources.
That makes sense statistically: papers typically include references, and websites frequently note that prominent people have won awards. ChatGPT is simply doing what it was built to do, assembling content that may or may not be true.
Computer scientists refer to this as an AI hallucination. The rest of us might call it lying.
Intimidating outputs
When I teach design, I stress the importance of matching the output to the process. Conceptual ideas should not be presented in a way that makes them look more polished than they actually are, for example by rendering them in 3D or printing them on glossy paper. A rough pencil sketch makes it clear that an idea is unfinished, open to change and not expected to address every aspect of a problem.
The same is true of conversational interfaces: when technology "speaks" to us in well-crafted, grammatically correct or chatty tones, we tend to credit it with far more thought and reasoning than are actually present. That is a trick a con artist should use, not a computer.
Because we may already be predisposed to trust whatever the machine says, it falls to AI developers to manage users' expectations. Mathematician Jordan Ellenberg describes the kind of mathematical intimidation that can make us abandon our better judgment at the mere mention of algebra.
AI, with its hundreds of billions of parameters, can disarm us with a similar sort of computational intimidation.
While it is important to make the algorithms produce ever-better content, we also need to watch out for interfaces that overclaim. Perhaps AI conversation could use a little humility, rather than the overconfidence and arrogance that already pervade the tech industry.