OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless

“Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco-based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI.  

OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who had requested access to a private beta. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud.

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters—the values that a neural network tries to optimize during training—compared to GPT-2’s already vast 1.5 billion. And with language models, size really does matter.
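To make "parameters" concrete: a parameter is simply one of the numbers a network adjusts during training, and they accumulate quickly. A minimal sketch (a toy fully connected network, nothing like GPT-3's actual transformer architecture, with made-up layer sizes) shows the arithmetic:

```python
# Toy illustration: a model's "parameters" are the values tuned during
# training. For a dense layer they are the weights plus one bias per
# output unit. Layer sizes here are arbitrary, for illustration only.

def dense_layer_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output unit."""
    return n_in * n_out + n_out

# A miniature two-layer network: 512 -> 1024 -> 512.
hidden = dense_layer_params(512, 1024)   # 525,312
output = dense_layer_params(1024, 512)   # 524,800
total = hidden + output
print(total)  # 1050112
```

Even this miniature network has about a million parameters, yet it is roughly 1,400 times smaller than GPT-2's 1.5 billion, and GPT-3 is more than a hundred times larger again.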

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called “The importance of being on Twitter”, written in the style of Jerome K. Jerome, which starts: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Klingemann says all he gave the AI was the title, the author’s name and the initial “It”. There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code. For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create webpage layouts by giving it prompts like “a button that looks like a watermelon” or “large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe.”
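The technique behind demos like Shameem's is few-shot prompting: show the model a couple of worked description-to-HTML pairs, then leave the final answer blank for it to complete. The sketch below illustrates only the prompt pattern; the example pairs are invented, and the exact prompts Shameem used are not public in this article.

```python
# Illustrative few-shot prompt (made-up pairs, shown only to convey the
# pattern): worked examples followed by an unfinished one that the model
# is expected to complete with HTML.
prompt = """description: a button that says Stop
html: <button>Stop</button>

description: large text in red that says WELCOME TO MY NEWSLETTER
html: <h1 style="color: red">WELCOME TO MY NEWSLETTER</h1>

description: a blue button that says Subscribe
html:"""

print(prompt)
```

Because GPT-3 was trained on web text that includes HTML, continuing this prompt with markup is just another kind of text completion.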

Yet despite its new tricks, GPT-3 is still prone to spewing hateful, sexist, and racist language. Fine-tuning the model helped limit this kind of output in GPT-2, however.

It’s also no surprise that many have been quick to start talking about intelligence. But GPT-3’s human-like output and striking versatility are the result of excellent engineering, not genuine intelligence. For one thing, the AI still makes ridiculous howlers that reveal a total lack of common sense. And even its successes lack depth, reading more like cut-and-paste jobs than original compositions.

Exactly what’s going on inside GPT-3 isn’t clear. But what it seems to be good at is synthesizing text it has found and memorized elsewhere on the internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways on demand.   
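The "scrapbook" intuition can be made concrete with a deliberately crude analogy. The sketch below is a tiny bigram model, not GPT-3's actual mechanism (GPT-3 is a transformer, and it does not literally store snippets); it only illustrates the idea of remembering continuations seen in training text and gluing them back together:

```python
# Loose analogy only: a bigram "scrapbook" that records which word
# followed which in the training text, then generates by repeatedly
# picking a remembered continuation. GPT-3 does NOT work this way
# internally; this just illustrates the stitch-together intuition.
import random

def build_scrapbook(corpus):
    """Map each word to the list of words that followed it."""
    words = corpus.split()
    book = {}
    for a, b in zip(words, words[1:]):
        book.setdefault(a, []).append(b)
    return book

def glue_together(book, start, length, seed=0):
    """Generate text by chaining remembered continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = book.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
book = build_scrapbook(corpus)
print(glue_together(book, "the", 8, seed=1))
```

Every word the toy model emits was seen in its training text; the novelty, such as it is, comes entirely from recombination — which is roughly the charge being leveled at GPT-3, at vastly greater scale and sophistication.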

That’s not to downplay OpenAI’s achievement. And a tool like this has many new uses, both good—from powering better chatbots to helping people code—and bad—from powering better misinformation bots to helping kids cheat on their homework.

But when a new AI milestone comes along it too often gets buried in hype. Even Sam Altman, who co-founded OpenAI with Elon Musk, tried to tone things down: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

We have a low bar when it comes to spotting intelligence. If something looks smart, it’s easy to kid ourselves that it is. The greatest trick AI ever pulled was convincing the world it exists. GPT-3 is a huge leap forward—but it is still a tool made by humans, with all the flaws and limitations that brings with it.



from MIT Technology Review https://ift.tt/2ZKxeK3
