Hackers cracked OpenAI’s internal messaging system last year

Microbiz Mag

A hacker managed to infiltrate OpenAI's internal messaging system last year and made off with details about the company's AI designs, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; however, the systems where the actual GPT code and user data are stored were not affected.

While the company disclosed that information to its staff and board members in April 2023, it declined to notify either the public or the FBI about the breach, claiming that doing so was unnecessary because no user or partner data was stolen. OpenAI does not consider the attack to constitute a national security threat and believes the attacker was a single individual with no ties to foreign powers.

Per the NYT, former OpenAI employee Leopold Aschenbrenner previously raised concerns about the state of the company's security apparatus and warned that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times his termination was unrelated to the memo.

This is far from the first time that OpenAI has suffered such a security lapse. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, user names and passwords were leaked in a separate hack. The previous March, OpenAI had to take ChatGPT offline entirely to fix a bug that exposed users' payment information, including their first and last names, email addresses, payment addresses, credit card types, and the last four digits of their card numbers, to other active users. Last December, security researchers found that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to repeat the word "poem" endlessly.

"ChatGPT is not secure. Period," AI researcher Gary Marcus told The Street in January. "If you type something into a chatbot, it's probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users." Since the attack, OpenAI has taken steps to beef up its security systems, including installing additional guardrails to prevent unauthorized access and misuse of its models, as well as establishing a Safety and Security Committee to address future issues.





