Another of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she is departing the startup. Weng had served as VP of research and safety since August, and before that was the head of OpenAI’s safety systems team.
In a post on X, Weng said that “after 7 years at OpenAI, I feel ready to reset and explore something new.” Weng said her last day will be November 15th, but did not specify where she will go next.
“I made the extremely difficult decision to leave OpenAI,” said Weng in the post. “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.”
Weng’s departure marks the latest in a long string of AI safety researchers, policy researchers, and other executives who have exited the company in the last year, and several have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike – the leaders of OpenAI’s now-dissolved Superalignment team, which tried to develop methods to steer superintelligent AI systems – who also left the startup this year to work on AI safety elsewhere.
Weng first joined OpenAI in 2018, according to her LinkedIn, working on the startup’s robotics team that ended up building a robotic hand that could solve a Rubik’s cube – a feat that took two years to achieve, according to her post.
As OpenAI began focusing more on the GPT paradigm, so did Weng. The researcher transitioned to help build the startup’s applied AI research team in 2021. Following the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s safety systems unit has more than 80 scientists, researchers, and policy experts, according to Weng’s post.
That’s a lot of AI safety people, but many have raised concerns about OpenAI’s focus on safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was dissolving its AGI readiness team, which he had advised. On the same day, the New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left OpenAI because he thought the startup’s technology would bring more harm than benefit to society.
OpenAI tells TechCrunch that executives and safety researchers are working on a transition to replace Weng.
“We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards,” said an OpenAI spokesperson in an emailed statement. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people globally.”
Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they would be leaving the startup. Some of these people, including Leike and Schulman, left to join an OpenAI competitor, Anthropic, while others have gone on to start their own ventures.