OpenAI CEO Sam Altman addresses equity clause error amid notable resignations, emphasizing AI safety and revised exit agreements.
Amid recent resignations at OpenAI, CEO Sam Altman has addressed concerns regarding a controversial clause in the company’s exit agreements that suggested the potential for equity cancellation.
Altman emphasized that OpenAI has never used this clause and guaranteed that vested equity will not be affected by separation agreements or non-disparagement agreements.
OpenAI Clarification on Vested Equity
Altman stressed that the clause, which appeared in previous exit documents, was the wrong thing to do and should not have been there.
“I take full responsibility for this; it is one of the very few times I have been really embarrassed running OpenAI. I did not know this was happening, and I should have,” Altman said.
He told former employees that they could contact him directly to resolve any concerns they had about the clause.
The equity cancellation provision had raised doubts about its purpose and potential for misuse. Altman acknowledged the mistake and said the company had already spent about a month revising its standard exit paperwork to prevent such problems in the future.
Employee Resignations and Safety Concerns
Altman’s clarification follows a series of resignations, including that of Jan Leike, who led alignment work at OpenAI. Leike, who announced his departure on May 17, cited OpenAI’s focus on product development over AI safety as one of his main reasons.
in regards to recent stuff about how openai handles equity:

we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was…

— Sam Altman (@sama) May 18, 2024
His resignation was preceded by the departure of Ilya Sutskever, one of OpenAI’s co-founders and a leading figure in AI research.
The departures have drawn intense attention to OpenAI’s internal strategies and priorities. Critics have accused the organization of not focusing enough on the risks posed by advanced AI systems. OpenAI has also dissolved its ‘Superalignment’ team and folded its functions into other research projects across the company.
OpenAI’s Commitment to AI Safety
Despite the restructuring, OpenAI maintains its commitment to AI safety. Altman and President Greg Brockman have reiterated the importance of ongoing safety research. In a recent statement, Brockman expressed gratitude to departing employees and assured them that the company would continue to address safety concerns rigorously.
Brockman highlighted OpenAI’s efforts to raise awareness about the risks and opportunities of AGI (artificial general intelligence), advocate for international governance, and pioneer AI safety research.
We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have… https://t.co/djlcqEiLLN

— Greg Brockman (@gdb) May 18, 2024
He acknowledged that the path to safely developing and deploying AGI involves complex and unprecedented challenges, requiring continuous improvement in safety measures and oversight.