On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to “store, combine and reason about scientific knowledge.” While intended to accelerate the writing of scientific literature, adversarial users testing it found it could also generate realistic-sounding nonsense. After several days of ethical criticism, Meta took the demo offline, reports MIT Technology Review.
Large language models (LLMs), such as OpenAI’s GPT-3, learn to write text by studying millions of examples and absorbing the statistical relationships between words. As a result, they can author convincing-sounding documents, but those works can also be riddled with falsehoods and harmful stereotypes. Some critics have called LLMs “stochastic parrots” because they can persuasively spit out text without understanding its meaning.
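The “statistical relationships between words” idea can be illustrated with a toy next-word predictor. This is only a minimal sketch for intuition, not Galactica’s or GPT-3’s actual method (which uses neural networks over billions of parameters); a bigram model simply counts which word tends to follow which, then samples from those counts:

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model counts word-to-next-word
# frequencies, then samples continuations in proportion to them.
corpus = "the model writes text and the model learns patterns".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word, rng=random):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short continuation from a seed word.
word, out = "the", ["the"]
for _ in range(4):
    if not counts[word]:
        break  # dead end: this word never appeared with a successor
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The sketch shows both why such models sound fluent (each word plausibly follows the last) and why they can emit confident nonsense: nothing in the counts encodes whether the resulting sentence is true.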
Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on “a large and curated corpus of humanity’s scientific knowledge,” including more than 48 million papers, textbooks, lecture notes, scientific websites, and encyclopedias. According to Galactica’s paper, Meta AI researchers believed this purportedly high-quality data would lead to high-quality output.
Visitors to the Galactica website could type in prompts to generate documents such as literature reviews, wiki articles, and answers to questions, according to examples provided on the site. The website presented the model as “a new interface to access and manipulate what we know about the universe.”
While some people found the demo promising and useful, others soon discovered that anyone could type in racist or otherwise offensive prompts and generate authoritative-sounding content on those topics just as easily. One person used it to author a wiki entry about a fictional research paper titled “The benefits of eating crushed glass.”
Even when Galactica’s output wasn’t offensive to social norms, the model could mangle well-understood scientific facts, spitting out inaccuracies such as incorrect dates or animal names, errors that require deep knowledge of the subject to catch.
I asked #Galactica about some things I know about and I’m troubled. In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous. Here are a few of my experiments and my analysis of my concerns. (1/9)
— Michael Black (@Michael_J_Black) November 17, 2022
As a result, Meta pulled the Galactica demo on Thursday. Afterward, Meta’s Chief AI Scientist Yann LeCun tweeted, “Galactica demo is off line for now. It’s no longer possible to have some fun by casually misusing it. Happy?”
The episode recalls a common ethical dilemma with AI: When it comes to potentially harmful generative models, is it up to the general public to use them responsibly, or up to the publishers of the models to prevent misuse?
Where industry practice falls between those two extremes will likely vary between cultures and shift as deep learning models mature. Ultimately, government regulation may end up playing a large role in shaping the answer.