Space Industry and Business News
ROBO SPACE
New Study Confirms Large Language Models Pose No Existential Risk
by Sophie Jenkins
London, UK (SPX) Aug 13, 2024

ChatGPT and other large language models (LLMs) do not have the capability to learn independently or develop new skills, meaning they pose no existential threat to humanity, according to recent research conducted by the University of Bath and the Technical University of Darmstadt in Germany.

Published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the study reveals that while LLMs are proficient in language and capable of following instructions, they lack the ability to master new skills without direct guidance. As a result, they remain inherently controllable, predictable, and safe.

The researchers concluded that despite LLMs being trained on increasingly large datasets, they can continue to be used without significant safety concerns, though the potential for misuse still exists.

As these models evolve, they are expected to generate more sophisticated language and improve in responding to explicit prompts. However, it is highly unlikely that they will develop complex reasoning skills.

"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study on the 'emergent abilities' of LLMs.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the collaborative research team conducted experiments to evaluate LLMs' ability to tackle tasks they had not previously encountered, often referred to as emergent abilities.

For example, LLMs can answer questions about social situations without having been explicitly trained to do so. While earlier research suggested this capability stemmed from models 'knowing' about social situations, the researchers demonstrated that it is actually a result of LLMs' proficiency in a process known as in-context learning (ICL), where they complete tasks based on examples provided.

Through extensive experimentation, the team showed that the combination of LLMs' abilities to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and limitations.
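The in-context learning the researchers describe can be sketched in code. In ICL, no weights are updated: worked examples are simply placed in the prompt, and the model completes the pattern. The sentiment-labeling task, field names, and prompt layout below are illustrative assumptions, not the study's actual test setup.

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt for in-context learning: each
    (input, output) example is written out in full, then the new
    query is appended for the model to complete in the same pattern."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves the label blank; the model's continuation
    # of "Sentiment:" is the predicted answer.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(examples, "A beautiful, moving film.")
print(prompt)
```

Nothing in this sketch teaches the model anything new; the examples only steer its existing linguistic and memorization abilities, which is the paper's point about why such behavior need not imply emergent reasoning.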

Dr. Tayyar Madabushi explained, "The fear has been that as models grow larger, they will solve new problems that we cannot currently predict, potentially acquiring hazardous abilities like reasoning and planning. This concern was discussed extensively, such as at the AI Safety Summit last year at Bletchley Park, for which we were asked to provide commentary. However, our study shows that the fear of a model going rogue and doing something unexpected, innovative, and dangerous is unfounded."

He further emphasized, "Concerns over the existential threat posed by LLMs are not limited to non-experts and have been expressed by some leading AI researchers worldwide. However, our tests clearly demonstrate that these fears about emergent complex reasoning abilities in LLMs are not supported by evidence."

While acknowledging the need to address existing risks like AI misuse for creating fake news or facilitating fraud, Dr. Tayyar Madabushi argued that it would be premature to regulate AI based on unproven existential threats.

He noted, "For end users, relying on LLMs to interpret and execute complex tasks requiring advanced reasoning without explicit instructions is likely to lead to errors. Instead, for all but the simplest tasks, users will benefit from clearly specifying their requirements and providing examples whenever possible."

Professor Gurevych added, "Our findings do not suggest that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex thinking skills linked to specific threats is unsupported by evidence, and that we can effectively control the learning process of LLMs. Future research should, therefore, focus on other potential risks, such as the misuse of these models for generating fake news."

Research Report: Are Emergent Abilities in Large Language Models just In-Context Learning?

Related Links
University of Bath


