OpenAI recently released the API for its latest language model, GPT-3, in beta. With this tool, some developers have started to show what the platform is capable of: generating content from simple commands written in English that anyone can understand. For example, “create a website with seven buttons of rainbow colors” will generate exactly the HTML code for a website with seven buttons of different colors.


GPT-3 is a language model. This means that, in very general terms, its goal is to predict what will come next based on previous data. It is a kind of “autocomplete” like the one we have in search engines such as Google, but of course at a much higher level. For example, you can write two or three sentences of an article and GPT-3 will write the rest. You can also generate conversations, and the answers will be based on the context of the previous questions and answers.
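The beta exposes this “autocomplete” behavior through a simple completions API. Below is a minimal sketch, assuming the 2020-era openai Python client and beta access; the engine name, prompt and parameters are illustrative, not taken from the article.

```python
# Minimal sketch: ask GPT-3 to continue a text, like a very powerful autocomplete.
# Assumes the 2020-era "openai" Python client and a beta API key (placeholder below).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "OpenAI has released the API for its new language model, GPT-3, in beta. "
    "Developers quickly began building demos that"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine available during the beta
    prompt=prompt,
    max_tokens=100,     # how much text to generate
    temperature=0.7,    # higher values give more varied continuations
)

print(response["choices"][0]["text"])
```

Running the same prompt several times will return different continuations, which is exactly the point made below: each answer is only one possibility.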

It is important to understand that each answer proposed by GPT-3 is only one possibility; it is not the only one, and the same request can always produce a different, even contradictory, answer. It is a model that returns answers based on what has been said before, relating it to everything it knows in order to produce the most plausible answer possible. It does not really understand the context. But of course, when it has learned from millions of web pages, books and Wikipedia… the results are startling.

All the public books on the Internet, Wikipedia and millions of scientific articles and news stories

OpenAI’s GPT-3 language model required prior training to become what it is. This training consisted of learning from a huge amount of information available on the Internet. OpenAI fed GPT-3 all the public books that have been written and made available, all of Wikipedia, and millions of web pages and scientific papers available on the Internet. Essentially, it absorbed the most relevant human knowledge we have published online.

After reading and analyzing this information, the language model built its connections in a 700 GB model spread across 48 GPUs of 16 GB each. To put it in context, last year OpenAI released GPT-2, which weighed 40 GB and crawled 45 million web pages. While GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion.
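Those figures are consistent with each other: assuming roughly 4 bytes (32 bits) per parameter, 175 billion parameters × 4 bytes ≈ 700 GB of weights, which just fits within the 48 × 16 GB = 768 GB of combined GPU memory mentioned above.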

The amazing experiments with GPT-3

One of the experiments that has gained popularity in recent days is that of Sharif Shameem. He shows a web layout generator to which we just have to describe in natural language what we want to display, and it generates the HTML/CSS code for it.
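Demos like this typically work by “priming” the model with a few example pairs and letting it complete the pattern. The sketch below is one plausible way to build such a generator on top of the same completions API; the example pairs, engine name and parameters are assumptions for illustration, not Shameem’s actual implementation.

```python
# Plausible sketch of a description -> HTML generator (not Shameem's actual code).
# Assumes the 2020-era "openai" Python client; examples and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Prime the model with a couple of description -> code pairs, then ask for a new one.
prompt = """description: a red button that says Stop
code: <button style="background-color: red;">Stop</button>

description: a large heading that says Welcome
code: <h1 style="font-size: 48px;">Welcome</h1>

description: a website with seven buttons of rainbow colors
code:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=200,
    temperature=0,          # keep the output close to the pattern in the examples
    stop=["description:"],  # stop before the model starts a new example on its own
)

print(response["choices"][0]["text"])
```

The React experiment described next pushes the same idea of describing what you want and letting the model write the code.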

In another of Sharif Shameem’s experiments, the OpenAI model directly programs an application in React. According to the example, we just need to describe to GPT-3 what we want the application to contain and what we want it to do, and it generates all the code and programs its behavior.

Continuing with apps and their creation, Jordan Singer shows an example plugin for Figma. Figma is a prototyping platform widely used in the design of mobile applications and websites. With this GPT-3-based plugin, he describes what he wants and it directly creates all the elements. For example, “an app with a camera icon, the title Photos, and a feed of photos with a user icon and a heart icon”. Essentially, it creates a basic version of Instagram.

Shreya Shankar has a demo in which GPT-3 transforms equations described in human language (English) into LaTeX.
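For example (an illustrative case, not taken from Shankar’s demo), a description like “the integral from zero to pi of sine of x with respect to x” would need to come out as LaTeX source along these lines:

```latex
\int_{0}^{\pi} \sin(x) \, dx
```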

Kirk Ouimet, for example, experimented with GPT-3 using a program capable of holding conversations on absolutely all kinds of topics: Bitcoin and cryptocurrencies, “The Legend of Zelda: Breath of the Wild”, veganism, and AI and its impact on politics.

In another test we see a simple search engine which, when asked a question, returns the answer and a link to the URL where the information came from. Yes, Google and many voice assistants do something very similar.

More information | arXiv

Source: Engadget