From ChatGPT-3.5 to ChatGPT-4: A Quantum Leap in Natural Language Processing and Multimodal Capabilities
New features allow the program to accomplish things that no computer ever could before.
Last week I got invited to attend a live, virtual unveiling of OpenAI’s new ChatGPT-4 AI, which is designed to be a quantum leap in capability compared with the existing model built on ChatGPT-3.5. Given that the original AI only debuted to the public late last November, I was a little skeptical about how much had actually changed in just a few months. But make no mistake, the new features and capabilities in ChatGPT-4 are truly impressive, allowing the AI to accomplish things that no computer ever could before and bringing AI technology one giant step closer to something that is almost indistinguishable from actual human intelligence.
The presentation was surprisingly low-key: it was basically just OpenAI President and Co-Founder Greg Brockman demonstrating the new features of ChatGPT-4 and occasionally taking audience feedback and questions to feed into the new AI model. But he didn’t really need to be showy, because the new AI performed technological magic like I have never seen before. The full 30-minute presentation is now available on YouTube for anyone who wants to watch it.
There were a lot of impressive, incremental improvements announced. For example, ChatGPT-4 can now accept and retain up to about 25,000 words of text, compared with roughly 3,000 for ChatGPT-3.5, so it can analyze larger, more complex documents and produce longer output as well. It has also been honed to make fewer mistakes in response to queries, something I tested after the presentation and found to be true.
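Those word counts are approximations of underlying token limits: the largest GPT-4 tier accepts 32,768 tokens, which works out to roughly 25,000 words of English text. As a minimal sketch (assuming the 32K variant; the file name is a placeholder), a developer could check whether a document fits using OpenAI’s tiktoken tokenizer library:

```python
# Rough check of whether a document fits in GPT-4's larger context
# window, using OpenAI's tiktoken tokenizer. Assumes the 32K-token
# tier; the smaller tier accepts 8,192 tokens.
import tiktoken

def fits_in_context(text: str, max_tokens: int = 32_768) -> bool:
    """Return True if `text` tokenizes to no more than max_tokens."""
    enc = tiktoken.encoding_for_model("gpt-4")
    return len(enc.encode(text)) <= max_tokens

# "long_report.txt" is a placeholder document for illustration.
with open("long_report.txt") as f:
    print(fits_in_context(f.read()))
```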
ChatGPT-4 opens its eyes to the world
However, without a doubt, the most impressive new feature, and the one I think will have the most impact on future AI interactions, is its new multimodal capability. In other words, ChatGPT-4 is gaining the ability to accept information in the form of images in addition to text. And that does not just mean looking at a picture and identifying what the AI is seeing, like pointing out that something in a photograph is a cat or a dog, but actually analyzing an image and offering up conclusions or predictions about it.
In the presentation, the AI was shown an image of a cartoon squirrel holding a camera and asked what was funny about it. The AI correctly explained that squirrels are traditionally expected to do things like eat nuts, not use cameras or perform other human-like activities, which is what makes the image surprising and funny.
ChatGPT-4’s newfound sight is being piloted through a partnership with Be My Eyes, an app originally designed to help blind and low-vision people navigate the world, which is using the model to analyze photographs and other visual material for its users. The collaboration seems to be working well, although Brockman stressed that the new visual analysis feature for ChatGPT is still in beta.
But this is more than just having the AI look at an object and recognize what it is; ChatGPT-4 can interpret the data in a photograph and draw conclusions from it. In a short demo video, OpenAI showed ChatGPT-4 a picture of a bunch of balloons on a long string and asked what would happen if the string were cut. The AI correctly answered that the balloons would float away. In another example video, ChatGPT-4 deduced from a photo that when a heavy weight was dropped onto one side of a seesaw-like device, a ball sitting on the other side would fly up into the air. That kind of cause-and-effect reasoning from nothing but a photograph is unlike anything computers could even attempt before.
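Image input was not publicly available when the demo aired, but OpenAI’s chat-based API later exposed it in roughly the form sketched below. This is a minimal illustration rather than the demo itself: the model name reflects the later public preview, and the image URL is a placeholder.

```python
# Sketch of asking a vision-capable GPT-4 model to reason about an
# image via OpenAI's chat completions API. The image URL below is a
# placeholder; any publicly reachable image URL would work.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # the later vision-enabled preview tier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What would happen if the string in this picture were cut?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/balloons.jpg"}},
        ],
    }],
    max_tokens=200,
)
print(response.choices[0].message.content)
```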
But it gets even more impressive. Brockman held up a very rough sketch of his vision for a joke website, scribbled into a notebook. On the site, users would be presented with multiple jokes and could click a button to reveal each punchline. The drawing was rough enough, though, that it was hard to tell how the site would actually work.
To move the project forward, Brockman took a picture of the sketch with his phone and sent it to ChatGPT-4, asking the AI to help him implement his idea. Not only did the AI recognize what he wanted, but it also wrote the code for the entire website based on the sketch. The site was live and working exactly as Brockman had intended within about 30 seconds of uploading the image.
ChatGPT-4's upgraded System prompt functionality puts users in control
There were many impressive new features unveiled, but one key improvement that I think a lot of people might initially miss is the upgraded System prompt, which allows users to change the role of the AI or direct it to perform specific tasks. With ChatGPT-3.5, if a user asked the AI to do something too far outside its original design, it would more often than not just give up and do its own thing. Brockman demonstrated the improvement by telling the new model in the System prompt to follow the user’s instructions very carefully.
When the same System prompt was used with ChatGPT-3.5, the AI really didn’t listen when asked to perform unusual tasks. For example, it was asked to summarize a document pulled from the OpenAI web pages describing the AI’s development, but to do so exclusively using words that began with the letter G. ChatGPT-3.5 produced a summary, but it ignored the odd lettering constraint and reverted to its default behavior, essentially disregarding the System prompt that told it to follow all user instructions carefully.
But when Brockman asked the same thing of ChatGPT-4, after first instructing it via the System prompt to follow the user’s commands exactly, it did precisely what was asked, offering a summary that read: “Gigantic GPT-4 garners groundbreaking growth, greatly galvanizing global goals.” Later, when Brockman asked the audience for another letter to use with the same query, someone suggested the letter Q, which seemed impossible. But the AI was up for the challenge, offering the following Q-focused summary: “GPT-4 quintessentially quickens quality quantifications, quelling questionable quandaries.”
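For developers, the System prompt maps directly onto the system message in OpenAI’s chat completions API. The sketch below shows how the letter-G demo might be reproduced in Python; the placeholder article text and the exact wording of both messages are illustrative assumptions, not what Brockman typed.

```python
# Sketch of steering GPT-4 with a system message via OpenAI's chat
# completions API. The system message pins down the model's behavior;
# the user message carries the actual task.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Role-setting instruction, equivalent to the System prompt in the demo.
        {"role": "system",
         "content": "You are an assistant that follows the user's "
                    "instructions exactly, even unusual formatting rules."},
        # The unusual task itself; "<article text here>" is a placeholder.
        {"role": "user",
         "content": "Summarize the following article using only words "
                    "that begin with the letter G.\n\n<article text here>"},
    ],
)
print(response.choices[0].message.content)
```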
Later in the presentation, Brockman again changed the AI’s parameters, this time asking it to act as a tax assistant before feeding it the tax code and asking specific questions about things like standard deductions for people at certain income levels. It seemed to get everything right, although who can really understand everything in the tax code these days?
The concept of changing or reconfiguring an AI to do what you need is nothing new. I still play AI Dungeon fairly often, and although it’s less advanced than ChatGPT, it lets users modify their stories depending on the kind of game they want to play or the style of interaction they prefer. So allowing users to modify core AI behavior is an old concept, but most AIs are not advanced enough to take it very far; even ChatGPT-3.5 was often unable to branch out far from its original design.
The fact that assigning new roles to the AI actually works in ChatGPT-4 is a big deal, because it will allow the AI to adapt into whatever tool a user needs. And that, even more so than the new visual recognition features, may be what keeps adding value to AI development, turning it into an asset for people attempting all kinds of unique tasks in the future.
Right now, users who interact with the free version of ChatGPT get the ChatGPT-3.5 model, which is still very good. To access ChatGPT-4, they need a ChatGPT Plus subscription, currently priced at $20 per month. Developers who want to incorporate ChatGPT-4 into their applications can also do so through the API, which charges based on the amount of text processed in each query.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys