
An online forum used by OpenAI employees for confidential internal communications was hacked last year, according to anonymous sources cited by The New York Times. The hackers stole details about the design of the company's AI technology from forum posts, but they did not break into the systems where OpenAI actually houses and builds its AI.
OpenAI executives disclosed the incident to the entire company at an all-hands meeting in April 2023 and informed the board of directors. However, because no customer or partner information was stolen, the breach was not disclosed to the public.
Executives did not notify law enforcement because they did not believe the hackers had ties to a foreign government and concluded the incident therefore posed no threat to national security, according to the sources.
“As we shared with our board and staff last year, we have identified and fixed the underlying issue and will continue to invest in safety,” an OpenAI spokesperson told TechRepublic in an email.
How did some OpenAI employees react to the hack?
According to The New York Times, news of the forum hack caused concern among other OpenAI employees, who believed it revealed a vulnerability that state-sponsored hackers could exploit in the future. If OpenAI's cutting-edge technology fell into the wrong hands, it could be used for nefarious purposes that threaten national security.
SEE: OpenAI's GPT-4 can autonomously exploit 87% of one-day vulnerabilities, study finds
In addition, executives' handling of the matter has led some employees to question whether OpenAI is doing enough to protect its proprietary technology from foreign adversaries. Leopold Aschenbrenner, a former technical program manager at the company, said on a podcast with Dwarkesh Patel that he was fired after raising these concerns with the board.
OpenAI denied this in a statement to The New York Times and said it disagreed with Aschenbrenner’s “characterization of our safety.”
More OpenAI security news, including information about the ChatGPT macOS app
The forum leak isn't the only recent sign that security isn't a top priority at OpenAI. Last week, data engineer Pedro José Pereira Vieito revealed that the new ChatGPT macOS app stored chat data in plain text, meaning anyone who gained access to the Mac could easily read the conversations. After The Verge reported on the issue, OpenAI released an update that encrypts the chats.
“We are aware of this issue and have released a new version of the app that encrypts these conversations,” an OpenAI spokesperson told TechRepublic in an email. “We are committed to providing a helpful user experience while maintaining our high security standards as our technology continues to evolve.”
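OpenAI has not published the technical details of its fix, but the underlying pattern is simple: encrypt conversation data before it is written to disk rather than storing it as plain text. Below is a minimal Python sketch of that pattern; it is not OpenAI's implementation, and the file names and the third-party cryptography package are assumptions made for illustration.

```python
# A minimal sketch (not OpenAI's actual fix) of encrypting chat data at rest
# instead of writing it to disk as plain text. The file names and the use of
# the third-party "cryptography" package are illustrative assumptions.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real app, keep this in the OS keychain, not next to the data
cipher = Fernet(key)

chat = "User: Hello\nAssistant: Hi there!"

# The reported problem: plain-text storage, readable by anyone with file access.
with open("conversations.txt", "w", encoding="utf-8") as f:
    f.write(chat)

# The general shape of the fix: encrypt before writing.
with open("conversations.enc", "wb") as f:
    f.write(cipher.encrypt(chat.encode("utf-8")))

# Reading the encrypted file back requires the key.
with open("conversations.enc", "rb") as f:
    print(cipher.decrypt(f.read()).decode("utf-8"))
```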
SEE: Millions of Apple apps vulnerable to CocoaPods supply chain attack
In May 2024, OpenAI released a statement saying it had disrupted five covert influence operations originating from Russia, China, Iran, and Israel, among others. The operations attempted to use its models for "deceptive activities" that were detected and blocked, including generating comments and articles, making up names and profiles for social media accounts, and translating text.
That same month, the company announced it had established a Safety and Security Committee to develop the processes and safeguards that will be used when developing cutting-edge models.
Does the OpenAI forum hack foreshadow more AI-related security incidents?
Dr. Ilia Kolochenko, partner and head of the cybersecurity practice at Platt Law LLP, said he believes the OpenAI forum security incident is likely just one of many. "The global AI race has become a national security issue for many countries, and as a result, state-sponsored cybercrime groups and mercenaries are actively targeting AI vendors, from talented startups to tech giants like Google or OpenAI," he told TechRepublic in an email.
Dr. Kolochenko added that hackers are targeting valuable AI intellectual property, such as large language models, sources of training data, technical research, and business information. They may also plant backdoors in order to control or disrupt operations, similar to recent attacks on critical national infrastructure in Western countries.
“Enterprise users of all GenAI vendors should be particularly cautious when sharing or providing their proprietary data for LLM training or fine-tuning, as their data — ranging from attorney-client privileged information and trade secrets of leading industrial or pharmaceutical companies to classified military information — is also a target for AI-hungry cybercriminals who are gearing up to step up their attacks,” he told TechRepublic.
Can AI developers reduce the risk of security incidents?
There is no simple answer to how to mitigate the risk of security vulnerabilities from foreign adversaries when developing new AI technologies. OpenAI cannot discriminate against employees based on nationality, nor does it want to limit its talent pool by only hiring in certain regions.
It will also be difficult to prevent AI systems from being used for nefarious purposes before those purposes come to light. A study from Anthropic found that LLMs are only marginally more useful to bad actors seeking to acquire or design biological weapons than standard internet access. Another study, by OpenAI, reached a similar conclusion.
On the other hand, some experts believe that while AI algorithms do not pose a threat today, they could become dangerous as they grow more advanced. In November 2023, representatives from 28 countries signed the Bletchley Declaration, which calls for global cooperation to address the challenges posed by artificial intelligence. "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models," the declaration reads.