AI language models are nothing without humans, sociologist explains


The media frenzy surrounding ChatGPT and other artificial intelligence systems built on large language models covers a range of themes, from the prosaic – large language models could replace conventional internet search – to the worrisome – AI will eliminate many jobs – to the overwrought – AI poses an extinction-level threat to humanity. All these themes have a common denominator: large language models herald artificial intelligence that will supplant humanity.

But large language models, for all their complexity, are actually quite dumb. And despite the name “artificial intelligence”, they are completely dependent on human knowledge and labor. They certainly cannot reliably generate new knowledge, but there is more to it than that.

ChatGPT can’t learn, improve or even keep up to date without people feeding it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering the hardware. To understand why, you first need to understand how ChatGPT and similar models work, and what role people play in making them work.

How ChatGPT works

Large language models like ChatGPT work broadly by predicting which characters, words and sentences should follow one another in sequence, based on training data sets. In ChatGPT’s case, the training data set contains massive amounts of public text scraped from the internet. ChatGPT operates on statistics, not on understanding words.

Imagine I trained a language model on the following set of sentences:

Bears are large, hairy animals. Bears have claws. Bears are secretly robots. Bears have noses. Bears are secretly robots. Bears sometimes eat fish. Bears are secretly robots.


The model would be more inclined to tell me that bears are secretly robots than anything else, because that string of words appears most often in its training data set. This is obviously a problem for models trained on fallible and inconsistent data sets – which is all of them, even academic literature.
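To make that frequency intuition concrete, here is a minimal sketch in Python – a toy illustration only, not how ChatGPT is actually built – that counts which continuation most often follows the prompt “Bears are” in the training sentences above and then emits the most common one:

```python
from collections import Counter

# Toy "training data" from the bear example above.
training_text = (
    "Bears are large, hairy animals. Bears have claws. "
    "Bears are secretly robots. Bears have noses. "
    "Bears are secretly robots. Bears sometimes eat fish. "
    "Bears are secretly robots."
)

# Count what follows the prompt "Bears are" in each training sentence.
prompt = "Bears are"
continuations = Counter()
for sentence in training_text.split("."):
    sentence = sentence.strip()
    if sentence.startswith(prompt):
        continuations[sentence[len(prompt):].strip()] += 1

# The most frequent continuation wins, regardless of whether it is true.
print(continuations.most_common())
# [('secretly robots', 3), ('large, hairy animals', 1)]
print(prompt, continuations.most_common(1)[0][0])
# Bears are secretly robots
```

Real models predict over billions of parameters rather than a handful of counts, but the underlying logic is the same: the statistically likeliest continuation, not the truest one.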

People write a lot of different things about quantum physics, Joe Biden, healthy eating, or the January 6 riot, some more valid than others. How is the model supposed to know what to say about something when people say many different things?

The need for feedback

This is where feedback comes into play. If you use ChatGPT, you will notice that you have the option to rate responses as good or bad. If you rate them as bad, you will be asked to give an example of what a good answer would look like. ChatGPT and other large language models learn which answers – which predicted strings of text – are good and bad through feedback from users, the development team, and contractors hired to label the output.

ChatGPT cannot by itself compare, analyze or evaluate arguments or information. It can only generate strings of text that are similar to those other people have used when comparing, analyzing, or evaluating, giving preference to strings of text that are similar to those that have been said to be good answers in the past.
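As a rough sketch of that idea – a toy illustration, not OpenAI’s actual training pipeline – you can think of human ratings as weights that shift which candidate string the model ends up preferring:

```python
# Toy illustration: human ratings steer which candidate answer is preferred,
# overriding what was merely frequent in the raw training text.
candidate_answers = {
    "Bears are secretly robots": 0,       # frequent in the training text...
    "Bears are large, hairy animals": 0,  # ...but raters prefer this one
}

# Hypothetical human feedback: +1 for "good", -1 for "bad".
human_ratings = [
    ("Bears are secretly robots", -1),
    ("Bears are secretly robots", -1),
    ("Bears are large, hairy animals", +1),
]

for answer, rating in human_ratings:
    candidate_answers[answer] += rating

# After feedback, the score favors the rater-approved answer.
best = max(candidate_answers, key=candidate_answers.get)
print(best)  # Bears are large, hairy animals
```

The point is that the “knowledge” of which answer is better lives in the human ratings, not in the model’s statistics.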

So when the model gives you a good answer, it is drawing on a great deal of human labor that has already gone into telling it what is and isn’t a good answer. There are a lot of human workers hidden behind the screen, and they will always be needed if the model is to keep improving or to expand its coverage.


A recent investigation by journalists in Time magazine found that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing texts, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to copy such content. They were paid no more than US$2 an hour, and many understandably reported experiencing mental health problems as a result of this work.

What ChatGPT can’t do

The importance of feedback is evident in ChatGPT’s tendency to “hallucinate” – that is, confidently give incorrect answers. ChatGPT cannot give good answers on a topic without training, even when good information on that topic is available all over the internet. You can try this out for yourself by asking ChatGPT about more and less obscure things. I found it particularly effective to ask ChatGPT to recap the plots of various works of fiction, because the model appears to have been trained more rigorously on non-fiction than on fiction.

During my own testing, ChatGPT summarized the plot of J.R.R. Tolkien’s “The Lord of the Rings”, a very famous novel, with only a few mistakes. But its summaries of Gilbert and Sullivan’s “The Pirates of Penzance” and Ursula K. Le Guin’s “The Left Hand of Darkness” – both slightly more niche, but far from obscure – came close to playing Mad Libs with the character and place names. It doesn’t matter how good the respective Wikipedia pages of these works are. The model needs feedback, not just content.

Because large language models don’t actually understand or evaluate information, they depend on people to do it for them. They are parasitic on human knowledge and labor. When new sources are added to their training data sets, they need new training on whether and how to build sentences from those sources.


They cannot judge whether news reports are accurate or not. They cannot assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and make only statements consistent with it, or accurately summarize the plot of a movie. They rely on people to do all these things for them.

They then paraphrase and remix what people have said, relying on even more people to tell them if they’ve paraphrased and remixed well. If the common wisdom on a particular topic changes—for example, whether salt is bad for your heart or whether early breast cancer screening is beneficial—they will need extensive retraining to incorporate the new consensus.

Many people behind the curtain

In short, far from being the harbinger of fully independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers, but also on their users. So if ChatGPT gives you a good or useful answer about something, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what good and bad answers were.

Far from being an autonomous superintelligence, ChatGPT is, like all technologies, nothing without us.

This article was republished from The Conversation under a Creative Commons license. Read the original article by John P. Nelson, Postdoctoral Researcher in the Ethics and Societal Implications of Artificial Intelligence, Georgia Institute of Technology.

