Improved & Multimodal
On March 14th, OpenAI, the creators of ChatGPT and DALL-E, announced GPT-4. The ChatGPT that you've all probably tried out was built on the GPT-3.5 language model. Not only is GPT-4 much more capable than 3.5, but it's also a multimodal model, meaning that it can accept image inputs as well. The image feature is in preview right now, but yes: you'll be able to input text or images. If you haven't watched the Developer Livestream video from the launch, you missed out on some serious amazingness. I'll summarize it here.
First, in the video, Greg Brockman, President and co-founder of OpenAI, demonstrated how much more capable GPT-4 is, especially in handling more complex and nuanced instructions.
Is GPT-4 Smarter Than a Lawyer?
In the associated blog post, OpenAI illustrated this increased capability with GPT-4's scores on a series of publicly available or practice versions of standardized tests. For example:
- GPT-3.5 scored in the 10th percentile on the bar exam, while GPT-4 scored in the 90th, putting it among the top 10% of test takers!
- GPT-3.5 scored in the 25th percentile on the Quantitative section of the GRE, while GPT-4 was in the 80th.
- GPT-4 also earned 4s and 5s on most AP exams, while GPT-3.5 had a significant number of 1s, 2s, and 3s.

More Text
This version accepts much longer text inputs than GPT-3.5 does. Whereas GPT-3.5 could handle about 3,000 words of text, GPT-4 can handle about 25,000. In fact, in the video, Brockman pastes in two different articles and asks GPT-4 to find a common theme between them. He also pastes in 16 pages of tax code and asks GPT-4 to identify a fictional couple's standard deduction.
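Under the hood, those word limits are really token limits: GPT-4 launched in 8,192- and 32,768-token variants, and roughly 25,000 words maps to the larger 32K context. If you want to check whether a long document will fit before pasting it in, here's a minimal sketch using OpenAI's tiktoken library (my own addition; the video doesn't cover this):

```python
# Minimal sketch: counting tokens with OpenAI's tiktoken library before
# sending a long document to GPT-4. The 32,768 limit below corresponds to
# the 32K-context variant GPT-4 launched with.
import tiktoken

def fits_in_context(text: str, limit: int = 32768) -> bool:
    # encoding_for_model picks the tokenizer GPT-4 uses (cl100k_base)
    enc = tiktoken.encoding_for_model("gpt-4")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens (limit {limit})")
    return n_tokens <= limit

fits_in_context("Sixteen pages of tax code would go here...")
```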

Image Prompts
The video also showed GPT-4 accepting image-related prompts. First, Brockman uses it within Discord to have it describe what's in one image and then identify the funny aspects of another. Next, he takes a photo of a hand-drawn mockup of a joke website, which GPT-4 turns into functioning code. And in the trailer video about GPT-4, they showed it a set of balloons held in a net and asked what would happen if the string were cut. It successfully identified that the balloons would fly away.
Access
So, how can we try out GPT-4? Well, they say that ChatGPT Plus subscribers will get GPT-4 access, but with a usage cap. If you don't have a Plus account, you can interact with this language model through Microsoft's Bing AI Chat; Microsoft announced that Bing Chat has been running on GPT-4 since it launched a few weeks ago.
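For developers, OpenAI also opened a waitlist for GPT-4 API access at launch. Once granted access, calling it looks roughly like this minimal sketch with OpenAI's Python library (the prompt is just an illustrative placeholder):

```python
# Minimal sketch: calling GPT-4 through OpenAI's Python library
# (pre-1.0 style), assuming an API key and granted GPT-4 API access.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a standard deduction?"},  # illustrative prompt
    ],
)
print(response["choices"][0]["message"]["content"])
```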
One thing Brockman says in the video strikes me as really important for our learners' futures: "It's not perfect, but neither are you and together it's this amplifying tool that just lets you reach new heights."
[ Image(s) Source: youtube.com/watch?v=outcGtbnMuQ, https://www.youtube.com/watch?v=–khbXchTeE, https://openai.com/research/gpt-4 ]