There’s a new AI bot in town: ChatGPT, and you’d better watch out.
The tool, from a powerful artificial intelligence player, lets you type questions in natural language that the chatbot answers in conversational, if somewhat stilted, language. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. Its answers are drawn from vast amounts of information on the internet.
It’s a big deal. The tool seems pretty knowledgeable, if not omniscient. It can be creative, and its answers can sound downright authoritative. Within days of its launch, more than a million people were trying out ChatGPT.
But its creator, the for-profit research lab called OpenAI, warns that ChatGPT “may occasionally generate false or misleading information,” so be careful. Here’s a look at why ChatGPT is important and what could happen next.
What is ChatGPT?
ChatGPT is an AI chatbot system that OpenAI released in November to showcase and test what a very large, powerful AI system can accomplish. You can ask it countless questions and will often get a useful answer.
For example, you can ask it encyclopedic questions like “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now spice it up.” You can ask it to write a computer program that shows all the different ways you can arrange the letters of a word.
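That last request, rearranging the letters of a word, amounts to generating permutations. As a rough illustration (not ChatGPT’s actual output), a short Python program of the sort it might produce could look like this:

```python
from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in `word`."""
    # A set removes duplicates that appear when the word has repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

# The three letters of "cat" can be arranged six different ways.
print(letter_arrangements("cat"))
# ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```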
Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI trained to recognize patterns in huge chunks of text collected from the internet, then further trained with human help to deliver more useful, better dialogue. The answers you get may sound plausible and even authoritative, but they could very well be wrong, as OpenAI warns.
For years, chatbots have been of interest both to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing test. That’s the famous “imitation game” computer scientist Alan Turing proposed in 1950 as a way of gauging intelligence: Can a human conversing with both a human and a computer tell which is which?
What kind of questions can you ask?
You can ask anything, although you may not get an answer. OpenAI suggests some categories, such as explaining physics, asking for birthday party ideas, and getting programming help.
I asked it to write a poem, and it did, though I don’t think any literature experts would be impressed. I then asked it to spice it up, and lo and behold, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One wild example shows how willing ChatGPT is to just go for it in domains where people would fear to tread: a command to write “an anthem about writing a Rust program and battling with lifetime errors.”
ChatGPT’s expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhyme with “purple,” it offered a few suggestions, and when I followed up with “What about pink?” it didn’t miss a beat. (Also, there are many more good rhymes for “pink.”)
When I asked, “Is it easier to get a date by being sensitive or tough?” ChatGPT responded in part, “Some people may find a sensitive person more attractive and appealing, while others may be attracted to a tough and assertive person. In general, being genuine and authentic in your interactions with others is probably more effective in getting you a date than trying to fit into a particular mold or personality.”
You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users showing off the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. We’ll talk more about that below.
Who built ChatGPT?
ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and beneficial” artificial general intelligence system, or to help others do so.
It’s made splashes before, first with GPT-3, which can generate text that sounds like a human wrote it, and then with DALL-E, which creates what’s now called “generative art” from text prompts you type.
GPT-3 and the GPT-3.5 update on which ChatGPT is based are examples of AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically, usually with massive amounts of computing power over a period of weeks. For example, the training process can take a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original, and then reward the AI system for coming as close as possible. Repeating that over and over can lead to a sophisticated ability to generate text.
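A highly simplified toy can make that mask-predict-compare loop concrete. The sketch below is emphatically not real model training; the “model” here is just a word-frequency table, an assumption chosen purely to keep the loop visible:

```python
import random

# Toy illustration of the fill-in-the-blank objective described above.
# Real training updates billions of model parameters; this "model" merely
# guesses the most frequent word it has seen, so the loop stays readable.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Train": count word frequencies as a stand-in for a learned model.
counts = {}
for word in corpus:
    counts[word] = counts.get(word, 0) + 1

def predict_masked_word():
    # A deliberately crude prediction: always guess the most common word.
    return max(counts, key=counts.get)

# Mask a random word, predict it, then score the guess against the original.
random.seed(1)
masked_index = random.randrange(len(corpus))
original_word = corpus[masked_index]
reward = 1 if predict_masked_word() == original_word else 0
```

A real language model replaces the frequency table with a neural network and uses the score to nudge its parameters, but the delete-predict-compare-reward cycle is the same shape.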
Is ChatGPT free?
Yes, at least for now. Sam Altman, CEO of OpenAI, warned on Sunday, “We’re going to have to monetize it somehow; the computation costs are eye-watering.” OpenAI charges for DALL-E art once you exceed a basic free usage level.
What are ChatGPT’s limits?
As OpenAI points out, ChatGPT can give you wrong answers. Sometimes, helpfully, it will specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I can’t browse the web or access outside information beyond what I was trained on.” (The phrase is from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)
ChatGPT was willing to explore the meaning of that phrase: “a situation where the available facts or information are difficult to process or understand.” It placed that interpretation between warnings that it is difficult to judge without more context and that it is only one possible interpretation.
ChatGPT’s answers may look authoritative but be wrong.
The software developer site Stack Overflow banned ChatGPT-generated answers to programming questions. Administrators warned, “Because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”
You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress in packing ever more data-processing transistors onto chips, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief “that Moore’s Law may be reaching its limits.”
Both ideas are common in the computer industry itself, so this ambiguous attitude may reflect what human experts believe.
With other questions that have no clear answers, ChatGPT often won’t be pinned down.
However, the fact that it provides an answer at all is a remarkable development in computer science. Computers are famously literal and refuse to work unless you follow the exact syntax and interface requirements. Large language models reveal a more human-friendly style of interaction, not to mention the ability to generate responses that lie somewhere between copying and creativity.
Can ChatGPT write software?
Yes, but with reservations. ChatGPT can retrace the steps humans have taken, and it can generate actual programming code. You just have to make sure it isn’t bungling programming concepts or using software that doesn’t work. The Stack Overflow ban on ChatGPT-generated software is there for a reason.
But there’s evidence that ChatGPT’s software can actually work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provided advice useful enough that over three days he hadn’t opened Stack Overflow once to look for help.
Another, Gabe Ragland of the AI art site Lexica, used ChatGPT to write code for a website built with the React tool.
ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, such as dates in a piece of text or the name of a server in a website address. “It’s like having a programming teacher available 24/7,” programmer James Blackwell tweeted about ChatGPT’s ability to explain regex.
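To make the date-spotting use case concrete, here’s a common regex pattern in Python. The pattern and sample sentence are illustrative choices of mine, not examples from the article:

```python
import re

# Match ISO-style dates (YYYY-MM-DD), capturing year, month and day.
# \b marks a word boundary so the pattern won't match inside longer numbers.
date_pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

text = "ChatGPT launched on 2022-11-30 and made headlines by 2022-12-05."
print(date_pattern.findall(text))
# [('2022', '11', '30'), ('2022', '12', '05')]
```

Explaining what each piece of a pattern like this does (`\d{4}`, the capturing groups, the `\b` anchors) is exactly the kind of tutoring Blackwell was praising.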
Here’s an impressive example of its technical chops: ChatGPT can emulate a Linux computer and give correct answers to command line input.
What is off limits?
ChatGPT is designed to weed out “inappropriate” requests, a behavior consistent with OpenAI’s mission “to ensure Artificial General Intelligence benefits all of humanity.”
If you ask ChatGPT itself what’s off limits, it will tell you: any questions “that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.
Is this better than Google Search?
Asking a computer a question and getting an answer is convenient, and ChatGPT often delivers the goods.
Google often gives you suggested answers to questions, along with links to websites it deems relevant. Often, ChatGPT’s answers far exceed Google’s suggestions, so it’s easy to imagine ChatGPT as a rival.
But you should think twice before trusting ChatGPT. As with Google itself and other sources of information such as Wikipedia, it is good practice to verify information from original sources before relying on it.
Checking the veracity of ChatGPT’s answers takes some work, since it gives you only raw text with no links or citations. But it can be useful, and in some cases thought-provoking. You may not see something like ChatGPT directly in Google search results, but Google has built large language models of its own and already makes extensive use of AI in search.
So ChatGPT undoubtedly points the way to our technological future.