As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Musk, Altman, Ilya Sutskever, and Greg Brockman have been released. They provide an inside view of what happened at OpenAI during the years Musk was involved, up to his departure.
The exchanges between Musk, Altman, Sutskever, Brockman, and other key figures offer a detailed look into the inner workings of the company during its formative years. The documents shed light on significant conflicts, diverging philosophies, and strategic decisions that shaped the organization’s direction.
Initial Vision and Founding Objectives
The initial conversations between Musk and Altman in 2015 set the stage for the launch of OpenAI. Altman proposed the idea of creating an “AI Manhattan Project” aimed at developing AGI in a way that would empower individuals and prevent monopolization by major tech players like Google.
Musk’s early responses showed a shared belief in this mission; he agreed on the urgency of ensuring that AGI would not be controlled by a single entity that could skew its use toward profit rather than public benefit.
Safety vs. Development Focus
From the beginning, Altman’s emails suggested a focus on both pioneering AI development and ensuring safety as parallel goals. Musk, on the other hand, approached this endeavor with a heightened sense of caution, perceiving Artificial General Intelligence (AGI) as an existential risk that required stringent oversight.
While both shared the underlying aim of broad societal benefit, Musk’s apprehensions leaned more heavily toward ensuring that AGI development included significant safeguards against potential misuse or monopolization.
Diverging Philosophies on Control and Governance
One of the most prominent sources of tension between Musk and Altman centered on governance and decision-making power within OpenAI. Musk’s preference for centralized control clashed with the broader, more distributed oversight that Altman and other co-founders seemed to favor.
Musk’s Control Concerns
Musk’s insistence on being at the helm, even if he did not want the CEO title, was driven by his belief that OpenAI needed a strong, singular voice to steer its direction. This was exemplified in his desire to have the final say in crucial strategic matters. He viewed this control as essential for safeguarding OpenAI’s mission from potential deviations or external influences.
However, Altman, Sutskever, and Brockman expressed discomfort with this approach, arguing that it risked concentrating too much power in one individual’s hands—ironically replicating the kind of power imbalance OpenAI was created to avoid.
Internal Pushback
In September 2017, Ilya Sutskever and Greg Brockman raised specific concerns about Musk’s potential for absolute control, which they believed contradicted OpenAI’s core mission. Their apprehension was rooted in Musk’s behavior during negotiations, where he insisted on being recognized as the key decision-maker.
They argued that, as OpenAI moved closer to achieving AGI, retaining such concentrated power would become untenable and potentially counterproductive. This concern came to a head when they expressed fears that Musk’s insistence on control might ultimately create the kind of AGI monopoly he was working to prevent.
Disagreements on Compensation and Recruitment
Securing and retaining top-tier talent was another focal point of contention. Musk’s emails reflected an almost obsessive focus on attracting the best researchers in the field, driven by his fear of being outpaced by competitors, particularly DeepMind. He pushed for offering competitive compensation packages and benefits that would prevent OpenAI’s staff from defecting to rivals.
DeepMind Rivalry
Musk frequently voiced his anxiety over DeepMind’s progress and aggressive recruitment strategies. His statement, “Either we get the best people in the world or we will get whipped by DeepMind,” encapsulated his belief that OpenAI needed to go all-in to compete.
He was willing to increase salaries and benefits significantly to match or exceed offers from other major players, viewing this as essential to retaining top talent. For Musk, this was not just about organizational growth; it was about ensuring OpenAI had the firepower to act as a counterbalance to DeepMind’s advancements, which he believed posed a significant risk due to their closed, profit-driven nature.
Altman’s Balanced Approach
Altman, while also acknowledging the importance of talent acquisition, took a more measured stance. He believed that while compensation was important, the mission-driven nature of OpenAI would attract researchers who were motivated by more than just salary.
This difference in approach highlighted a philosophical divide: Musk was laser-focused on outpacing competitors at all costs, while Altman aimed to balance aggressive recruitment with sustainable growth and alignment with OpenAI’s nonprofit principles.
The Microsoft Partnership Dispute
One of the most significant conflicts arose when OpenAI considered a partnership with Microsoft for discounted compute resources. Altman and Brockman saw this as a strategic move that would secure OpenAI’s access to the necessary compute power for its research without overextending its budget.
However, Musk’s response to the proposed terms was notably negative. He viewed the partnership as a potential compromise of OpenAI’s independence and was adamant that it should avoid appearing as a marketing tool for Microsoft.
Musk’s Skepticism
Musk’s visceral reaction to the proposed evangelization terms highlighted his fear that OpenAI could become entangled with corporate interests that would compromise its mission.
He was particularly concerned with clauses that suggested OpenAI would promote Microsoft’s products, which he saw as a move that could diminish OpenAI’s perceived neutrality and independence. This marked a broader issue in their partnership: Musk’s desire for autonomy clashed with Altman’s practical need for resources.
Altman’s Reassurance
In response, Altman worked to amend the terms, ensuring that OpenAI would not be obligated to evangelize Microsoft’s technology. This demonstrated Altman’s capacity to negotiate compromises that balanced Musk’s concerns with OpenAI’s operational needs.
Eventually, the final agreement omitted promotional obligations, addressing some of Musk’s apprehensions but leaving lingering doubts about OpenAI’s direction and alliances.
Strategic Shifts: Nonprofit to Capped-Profit Structure
A pivotal point in OpenAI’s evolution was its transition from a nonprofit to a “capped-profit” structure. Altman justified this move as necessary for raising significant funds to remain competitive, citing the exponential increase in compute costs and research demands.
The new model, OpenAI LP, allowed investors to profit up to a predetermined cap, with any additional returns funneled back into the nonprofit to ensure that AGI benefits were shared widely.
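To make the capped-return mechanics concrete, here is a minimal sketch of how such a split could work, assuming a simple 50x cap (the multiple floated in the fundraising emails below); the actual OpenAI LP terms are more involved and vary by investor:

```python
def split_returns(invested: float, total_proceeds: float, cap_multiple: float = 50.0):
    """Illustrative only: split proceeds between investors and the nonprofit.

    Investors receive returns up to cap_multiple times their investment;
    anything above that cap flows to the nonprofit. This is a simplified
    assumption, not OpenAI LP's actual contractual mechanics.
    """
    investor_cap = invested * cap_multiple
    to_investors = min(total_proceeds, investor_cap)
    to_nonprofit = max(total_proceeds - investor_cap, 0.0)
    return to_investors, to_nonprofit

# Example: $10M invested, $1B eventually returned.
# Investors are capped at $500M; the remaining $500M goes to the nonprofit.
investors_share, nonprofit_share = split_returns(10_000_000, 1_000_000_000)
print(investors_share, nonprofit_share)  # 500000000.0 500000000.0
```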
Musk’s Disapproval
Musk’s response to the shift was tepid at best. He viewed it as a potential dilution of OpenAI’s core principles and expressed concerns about how this structural change might align with the long-term mission of ensuring AGI safety.
Musk’s emphasis on minimizing financial incentives to avoid conflicts of interest stood in stark contrast to Altman’s belief that this hybrid model was the only way to secure the scale of funding needed for meaningful AGI research.
This shift also marked a turning point in Musk’s involvement; he eventually resigned from OpenAI’s board in 2018, signaling his growing disillusionment with the organization’s trajectory.
Trust Issues and Board Dynamics
Emails from Sutskever and Brockman highlighted the trust issues that developed within OpenAI’s leadership. While Musk brought invaluable vision and influence, his insistence on control created an environment where trust was strained.
Sutskever pointed out that Musk’s unilateral approach to decision-making fostered anxiety among the co-founders and employees, making it difficult for them to fully align with his vision.
Sam Altman’s Role as a Mediator
Altman often found himself balancing Musk’s demands with the concerns of other leaders. His ability to act as a counterbalance to Musk was cited as a reason for OpenAI’s early successes. However, this balancing act became increasingly difficult as trust eroded. Altman’s emails reflected his struggle to maintain cohesion within the team, especially when Musk’s involvement became sporadic and conditional on strategic shifts aligning with his vision.
Exit and Aftermath
Musk’s frustration reached its peak in late 2017, leading him to issue an ultimatum: either OpenAI would commit to a clear direction aligned with his vision, or he would withdraw his financial and strategic support. This marked a dramatic shift in his relationship with OpenAI. When Altman, Sutskever, and Brockman indicated a willingness to continue under the nonprofit model but with greater autonomy, Musk chose to step back, ceasing his active participation and support.
Following Musk’s withdrawal, Altman focused on stabilizing the organization and preparing it for future growth under the new capped-profit structure. This move allowed OpenAI to secure significant funding and ramp up research efforts, but it also highlighted the enduring tension between the original nonprofit ideals and the realities of scaling AI research in a competitive environment.
The email exchanges between Musk and Altman reveal deep-seated philosophical differences and strategic conflicts that defined OpenAI’s trajectory. Musk’s emphasis on stringent control and uncompromising safety clashed with Altman’s balanced approach to growth and external partnerships.
While both shared a commitment to ensuring AGI benefited humanity, their divergent paths led to significant organizational changes, including Musk’s eventual departure and the pivot to a hybrid for-profit model. These decisions laid the groundwork for OpenAI’s current operations, reflecting the complex interplay between idealism, practical needs, and leadership dynamics that shaped its evolution.
Released Emails From the Musk vs. Altman Case
Subject: question (May 25, 2015 – Jun 24, 2015)
Summary: This thread covers an early conversation between Sam Altman and Elon Musk about the inevitability of AI development, concerns about Google’s dominance, and the proposal for YC to initiate a project aimed at creating general AI for individual empowerment.
Sam Altman to Elon Musk – May 25, 2015 9:10 PM
Been thinking a lot about whether it's possible to stop humanity from developing AI. I think the answer is almost definitely not. If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first. Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation. Sam
Elon Musk to Sam Altman – May 25, 2015 11:09 PM
Probably worth a conversation
Sam Altman to Elon Musk – Jun 24, 2015 10:24 AM
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.

I think we’d ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.

I think for a governance structure, we should start with 5 people and I’d propose you, Bill Gates, Pierre Omidyar, Dustin Moskovitz, and me. The technology would be owned by the foundation and used “for the good of the world”, and in cases where it’s not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.

Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction getting the best people to be part of it. Ideally you’d come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC “part-time partners” (we do that with Peter Thiel for example, though at this point he’s very involved) but we could call it whatever you want. Even if you can’t really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.

I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like “now that we are doing this, I’ve been thinking a lot about what sort of constraints the world needs for safety.” I’m happy to leave you off as a signatory. I also suspect that after it’s out more people will be willing to get behind it.

Sam
Elon Musk to Sam Altman – Jun 24, 2015 11:05 PM
Agree on all
Subject: follow up from call (Nov 22, 2015)
Summary: Greg Brockman follows up with Elon Musk after a call, sharing a draft blog post and discussing strategic messaging aimed at appealing to the research community. The email also mentions offer letters and recruitment details.
Greg Brockman to Elon Musk, (cc: Sam Altman) – Nov 22, 2015 6:11 PM
Hey Elon,

Nice chatting earlier. As I mentioned on the phone, here's the latest early draft of the blog post: https://quip.com/6YnqA26RJgKr. (Sam, Ilya, and I are thinking about new names; would love any input from you.) Obviously, there's a lot of other detail to change too, but I'm curious what you think of that kind of messaging. I don't want to pull any punches, and would feel comfortable broadcasting a stronger message if it feels right.

I think it's mostly important that our messaging appeals to the research community (or at least the subset we want to hire). I hope for us to enter the field as a neutral group, looking to collaborate widely and shift the dialog towards being about humanity winning rather than any particular group or company. (I think that's the best way to bootstrap ourselves into being a leading research institution.)

I've attached the offer letter template we've been using, with a salary of $175k. Here's the email template I've been sending people:

Attached is your official YCR offer letter! Please sign and date at your convenience. There will be two more documents coming:
- A separate letter offering you 0.25% of each YC batch you are present for (as compensation for being an Advisor to YC).
- The At-Will Employment, Confidential Information, Invention Assignment and Arbitration Agreement
(As this is the first batch of official offers we've done, please forgive any bumpiness along the way, and please let me know if anything looks weird!)

We plan to offer the following benefits:
- Health, dental, and vision insurance
- Unlimited vacation days with a recommendation of four weeks per year
- Paid parental leave
- Paid conference attendance when you are presenting YC AI work or asked to attend by YC AI

We're also happy to provide visa support. When you're ready to talk about visa-related questions, I'm happy to put you in touch with Kirsty from YC.

Please let me know if you have any questions — I'm available to chat any time! Looking forward to working together :).

- gdb
Subject: Draft opening paragraphs (Dec 8, 2015)
Summary: Elon Musk and Sam Altman discuss the importance of the opening summary for a public announcement about OpenAI’s mission. Elon emphasizes the need for clarity and impact to attract top talent.
Elon Musk to Sam Altman – Dec 8, 2015 9:29 AM
It is super important to get the opening summary section right. This will be what everyone reads and what the press mostly quotes. The whole point of this release is to attract top talent. Not sure Greg totally gets that.

----

OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns. The underlying philosophy of our company is to disseminate AI technology as broadly as possible as an extension of all individual human wills, ensuring, in the spirit of liberty, that the power of digital intelligence is not overly concentrated and evolves toward the future desired by the sum of humanity. The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
Sam Altman to Elon Musk – Dec 8, 2015 10:34 AM
how is this?

__

OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns. Because we don't have any financial obligations, we can focus on the maximal positive human impact and disseminating AI technology as broadly as possible. We believe AI should be an extension of individual human wills and, in the spirit of liberty, not be concentrated in the hands of the few. The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
Subject: just got word… (Dec 11, 2015)
Summary: Sam Altman informs Elon Musk about DeepMind’s potential counteroffers to OpenAI staff and discusses the possibility of increasing compensation to retain key talent. Elon responds with concern and offers personal support if needed.
Sam Altman to Elon Musk – Dec 11, 2015 11:30 AM
that deepmind is going to give everyone in openAI massive counteroffers tomorrow to try to kill it. do you have any objection to me proactively increasing everyone's comp by 100-200k per year? i think they're all motivated by the mission here but it would be a good signal to everyone we are going to take care of them over time. sounds like deepmind is planning to go to war over this, they've been literally cornering people at NIPS.
Elon Musk to Sam Altman – Dec 11, 2015
Has Ilya come back with a solid yes? If anyone seems at all uncertain, I’m happy to call them personally too. Have told Emma this is my absolute top priority 24/7.
Sam Altman to Elon Musk – Dec 11, 2015 12:15 PM
yes committed committed. just gave his word.
Elon Musk to Sam Altman – Dec 11, 2015 12:32 PM
awesome
Sam Altman to Elon Musk – Dec 11, 2015 12:35 PM
everyone feels great, saying stuff like "bring on the deepmind offers, they unfortunately dont have 'do the right thing' on their side" news out at 130 pm pst
Subject: The OpenAI Company (Dec 11, 2015)
Summary: Elon Musk congratulates the team on their progress and emphasizes the importance of recruiting the best talent for OpenAI’s success. He shares his willingness to support recruitment efforts personally and offers strategic advice on attracting top-tier candidates.
Elon Musk to Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John D. Schulman, Trevor Blackwell, Greg Brockman, (cc: Sam Altman) – Dec 11, 2015 4:41 PM
Congratulations on a great beginning! We are outmanned and outgunned by a ridiculous margin by organizations you know well, but we have right on our side and that counts for a lot. I like the odds. Our most important consideration is recruitment of the best people. The output of any company is the vector sum of the people within it. If we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail. To this end, please give a lot of thought to who should join. If I can be helpful with recruitment or anything else, I am at your disposal. I would recommend paying close attention to people who haven't completed their grad or even undergrad, but are obviously brilliant. Better to have them join before they achieve a breakthrough. Looking forward to working together, Elon
Subject: compensation framework (Feb 21, 2016 – Feb 22, 2016)
Summary: Greg Brockman seeks guidance from Elon Musk and Sam Altman on structuring compensation offers for the OpenAI team. The thread discusses salary benchmarks, negotiation strategies, and challenges faced in recruiting top AI talent.
Greg Brockman to Elon Musk, (cc: Sam Altman) – Feb 21, 2016 11:34 AM
Hi all,

We're currently doing our first round of full-time offers post-founding. It's obviously super important to get these right, as the implications are very long-term. I don't yet feel comfortable making decisions here on my own, and would love any guidance.

Here's what we're currently doing:

Founding team: $275k salary + 25bps of YC stock
- Also have option of switching permanently to $125k annual bonus or equivalent in YC or SpaceX stock. I don't know if anyone's taken us up on this.

New offers: $175k annual salary + $125k annual bonus || equivalent in YC or SpaceX stock. Bonus is subject to performance review, where you may get 0% or significantly greater than 100%.

Special cases: gdb + Ilya + Trevor

The plan is to keep a mostly flat salary, and use the bonus multiple as a way to reward strong performers.

Some notes:
- We use a 20% annualized discount for the 8 years until the stock becomes liquid, the $125k bonus equates to 12bps in YC. So the terminal value is more like $750k. This number sounds a lot more impressive, though obviously it's hard to value exactly.
- The founding team was initially offered $175k each. The day after the lab launched, we proactively increased everyone's salary by $100k, telling them that we are financially committed to them as the lab becomes successful, and asking for a personal promise to ignore all counteroffers and trust we'll take care of them.
- We're currently interviewing Ian Goodfellow from Brain, who is one of the top 2 scientists in the field we don't have (the other being Alex Graves, who is a DeepMind loyalist). He's the best person on Brain, so Google will fight for him. We're grandfathering him into the founding team offer.

Some salary datapoints:
- John was offered $250k all-in annualized at DeepMind, thought he could negotiate to $300k easily.
- Wojciech was verbally offered ~$1.25M/year at FAIR (no concrete letter though).
- Andrew Tulloch is getting $800k/year at FB. (A lot is stock which is vesting.)
- Ian Goodfellow is currently getting $165k cash + $600k stock/year at Google.
- Apple is a bit desperate and offering people $550k cash (plus stock, presumably). I don't think anyone good is saying yes.

Two concrete candidates that are on my mind:
- Andrew is very close to saying yes. However, he's concerned about taking such a large paycut.
- Ian has stated he's not primarily concerned with money, but the Bay Area is expensive / wants to make sure he can buy a house. I don't know what will happen if/when Google starts throwing around the numbers they threw at Ilya.

My immediate questions:
1. I expect Andrew will try to negotiate up. Should we stick to his offer, and tell him to only join if he's excited enough to take that kind of paycut (and that others have left more behind)?
2. Ian will be interviewing + (I'm sure) getting an offer on Wednesday. Should we consider his offer final, or be willing to slide depending on what Google offers?
3. Depending on the answers to 1 + 2, I'm wondering if this flat strategy makes sense. If we keep it, I feel we'll have to really sell people on the bonus multiplier. Maybe one option would be using a signing bonus as a lever to get people to sign?
4. Very secondary, but our intern comp is also below market: $9k/mo. (FB offers $9k + free housing, Google offers like $11k/mo all-in.) Comp is much less important to interns than to FT people, since the experience is primary. But I think we may have lost a candidate who was on the edge to this. Given the dollar/hour is so much lower than for FT, should we consider increasing the amount?

I'm happy to chat about this at any time.

- gdb
Elon Musk to Greg Brockman, (cc: Sam Altman) – Feb 22, 2016 12:09 AM
We need to do what it takes to get the top talent. Let's go higher. If, at some point, we need to revisit what existing people are getting paid, that's fine. Either we get the best people in the world or we will get whipped by Deepmind. Whatever it takes to bring on ace talent is fine by me. Deepmind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy. They are obviously making major progress and well they should, given the talent level over there.
Greg Brockman to Elon Musk, (cc: Sam Altman) – Feb 22, 2016 12:21 AM
Read you loud and clear. Sounds like a plan. Will plan to continue working with sama on specifics, but let me know if you'd like to be kept in the loop. - gdb
Subject: wired article (Mar 21, 2016)
Summary: Greg Brockman updates Elon Musk about an interview he gave for a Wired article on OpenAI. He seeks Elon’s feedback on questions raised by the fact checker to ensure the messaging aligns with their goals.
Greg Brockman to Elon Musk, (cc: Sam Teller) – Mar 21, 2016 12:53 AM
Hi Elon,

I was interviewed for a Wired article on OpenAI, and the fact checker sent me some questions. Wanted to sync with you on two in particular to make sure they sound reasonable / aligned with what you'd say:

Would it be accurate to say that OpenAI is giving away ALL of its research?

At any given time, we will take the action that is likely to most strongly benefit the world. In the short term, we believe the best approach is giving away our research. But longer-term, this might not be the best approach: for example, it might be better not to immediately share a potentially dangerous technology. In all cases, we will be giving away all the benefits of all of our research, and want those to accrue to the world rather than any one institution.

Does OpenAI believe that getting the most sophisticated AI possible in as many hands as possible is humanity's best chance at preventing a too-smart AI in private hands that could find a way to unleash itself on the world for malicious ends?

We believe that using AI to extend individual human wills is the most promising path to ensuring AI remains beneficial. This is appealing because if there are many agents with about the same capabilities they could keep any one bad actor in check. But I wouldn't claim we have all the answers: instead, we're building an organization that can both seek those answers, and take the best possible action regardless of what the answer turns out to be.

Thanks!
- gdb
Elon Musk to Greg Brockman, (cc: Sam Teller) – Mar 21, 2016 6:53 AM
Sounds good
Subject: Re: Maureen Dowd (Apr 27, 2016)
Summary: Elon Musk responds to an inquiry from The New York Times regarding Mark Zuckerberg’s comments about Musk’s AI concerns, emphasizing the dual nature of powerful technologies and OpenAI’s creation to ensure AI’s power is widely distributed.
Sam Teller forwards this email from Alex Thompson to Elon Musk – Apr 27, 2016 7:25 AM
Hi Sam,

I hope you are having a great day and I apologize for interrupting it with another question. Maureen wanted to see if Mr. Musk had any reaction to some of Mr. Zuckerberg's public comments since their interview. In particular, his labeling of Mr. Musk as "hysterical" for his A.I. fears and lecturing those who "fearmonger" about the dangers of A.I. I have included more details below of Mr. Zuckerberg's comments.

Asked in Germany recently about Musk’s forebodings, Zuckerberg called them “hysterical’’ and praised A.I. breakthroughs, including one system he claims can make cancer diagnoses for skin lesions on a mobile phone with the accuracy of “the best dermatologist.’’ “Unless we really mess something up,’’ he said, the machines will always be subservient, not “superhuman.” “I think we can build A.I. so it works for us and helps us...Some people fearmonger about how A.I. is a huge danger, but that seems farfetched to me and much less likely than disasters due to widespread disease, violence, etc.’’ Or as he put his philosophy at an April Facebook developers conference: “Choose hope over fear.’’

--
Alex Thompson
The New York Times
Elon Musk to Sam Teller – Apr 27, 2016 12:24 PM
History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge. The recent example of Microsoft's AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person. That is why we created OpenAI.
Subject: MSFT hosting deal (Sep 16, 2016 – Sep 21, 2016)
Summary: This thread involves a discussion between Sam Altman and Elon Musk regarding a proposed hosting deal with Microsoft. The conversation addresses terms, concerns about being seen as promoting Microsoft’s products, and potential changes to the deal structure.
Sam Altman to Elon Musk, (cc: Sam Teller) – Sep 16, 2016 2:37 PM
Here are the MSFT terms. $60MM of compute for $10MM, and input from us on what they deploy in the cloud. LMK if you have any feedback. Sam

---

Microsoft and OpenAI: Accelerate the development of deep learning on Azure and CNTK

This non-binding term sheet (“Term Sheet”) between Microsoft Corporation (“Microsoft”) and OpenAI (“OpenAI”) sets forth the terms for a potential business relationship between the parties. This Term Sheet is intended to form a basis of discussion and does not state all matters upon which agreement must be reached before executing a legally binding commercial agreement (“Commercial Agreement”). The existence and terms of this Term Sheet, and all discussions related thereto or to a Commercial Agreement, are Confidential Information as defined and governed by the Non-Disclosure Agreement between the parties dated 17 March, 2016 (“NDA”). Except for the binding nature of the foregoing confidentiality obligations, this Term Sheet is non-binding.

Deal Purpose
OpenAI is focused on deep learning in such a way as to benefit humanity. Microsoft and OpenAI desire to partner to enable the acceleration of deep learning on Microsoft Azure. Towards this goal, Microsoft will provide OpenAI with Azure compute capabilities at a favorable price that would enable OpenAI to continue their mission effectively.

Deal Business Goal
Microsoft
· Accelerate deep learning environment on Azure
· Attract a net new audience of next generation developers
· Joint PR and evangelism of deep learning on Azure
OpenAI
· Deeply discounted GPU compute offering over the deal term (3 years) for use in their nonprofit research: $60m of Compute for $10m
· Joint PR and evangelism of OpenAI on Azure

Parties (Legal entities)
Microsoft
OpenAI

Proposed Deal Execution Date
September 19, 2016

Proposed Deal Commencement Date
Same as deal execution date

Legal Authoring
Microsoft holds the pen.

Deal Term
3 years

Engineering Terms
- Compute: Microsoft will provide OpenAI GPU core hours of compute at the agreed upon price for OpenAI’s workloads to run in Azure.
- Geographic Location: Geographic location decisions will be at Microsoft discretion depending on capacity and availability. Microsoft will also be responsible for sharing the deployment strategy and timelines with OpenAI.
- SLA: For all Virtual Machines that have two or more instances deployed in the same availability set, Microsoft guarantee OpenAI will have virtual machine connectivity to at least one instance at least 99.95% of the time. Microsoft will be held accountable to the SLA’s provided on https://azure.microsoft.com/enus/support/legal/sla/virtual-machines/v1_2/
- Evaluation, Evangelization, and Usage of CNTK v2, Azure Batch and HD-Insight: OpenAI will evaluate CNTK v2, Azure Batch, and HDInsight for their research, provide feedback on how Microsoft can improve these products. OpenAI will work with Microsoft to evangelize these products to their research and developer ecosystems, and evangelize Microsoft Azure as their preferred public cloud provider. At their sole discretion, and as it makes sense for their research, OpenAI will adopt these products.
- Ramp: Microsoft and OpenAI will work together for creating a ramp plan that balances capacity per clusters. The initial timeline for ramp is a minimum of 30 days that will be augmented by Microsoft’s capacity expansion plans in the coming months.
- Capacity: OpenAI will be given an allocation of capacity in the preview cluster (located in US South Central) for short term requirements and Microsoft will provide quota access to the subsequent K80 GPU clusters that go live in the 4th quarter of 2016 with the intention of more capacity in Q1 2017 (calendar year).

Financial Terms
· Financial Terms: Microsoft will offer $60m worth of List Compute (including GPU) at a deep discount which results in a price of $10m to be paid by OpenAI over the course of the deal. In the event OpenAI consumes less than $10m worth of Azure compute, OpenAI will be responsible for paying the balance between the used amount and $10m at the end of the deal term to Microsoft.

Marketing & PR Terms
Microsoft and OpenAI commit to jointly evangelizing deep learning capabilities on Azure as agreed upon by both parties.
- Ignite: Announce the partnership at Microsoft’s Ignite event with executives (Sam Altman from OpenAI and Satya Nadella from Microsoft) from both parties inaugurating the collaboration
- PR: Microsoft and OpenAI will work together to issue a joint press release about the partnership including any materials such as blog posts and videos.
Elon Musk to Sam Altman, (cc: Sam Teller) – Sep 16, 2016 3:10 PM
This actually made me feel nauseous. It sucks and is exactly what I would expect from them.

Evaluation, Evangelization, and Usage of CNTK v2, Azure Batch and HD-Insight: OpenAI will evaluate CNTK v2, Azure Batch, and HD-Insight for their research, provide feedback on how Microsoft can improve these products. OpenAI will work with Microsoft to evangelize these products to their research and developer ecosystems, and evangelize Microsoft Azure as their preferred public cloud provider. At their sole discretion, and as it makes sense for their research, OpenAI will adopt these products.

Let’s just say that we are willing to have Microsoft donate spare computing time to OpenAI and have that be known, but we won’t do any contract or agree to “evangelize”. They can turn us off at any time and we can leave at any time.
Sam Altman to Elon Musk, (cc: Sam Teller) – Sep 16, 2016 3:33 PM
I had the same reaction after reading that section and they've already agreed to drop it. We had originally just wanted spare cycles donated but the team wanted more certainty that capacity will be available. But I'll work with MSFT to make sure there are no strings attached.
Elon Musk to Sam Altman, (cc: Sam Teller) – Sep 16, 2016
We should just do this low key. No certainty either way. No contract.
Sam Altman to Elon Musk, (cc: Sam Teller) – Sep 16, 2016 6:45 PM
ok will see how much $ I can get in that direction.
Sam Teller to Elon Musk – Sep 20, 2016 8:05 PM
Microsoft is now willing to do the agreement for a full $50m with “good faith effort at OpenAI's sole discretion” and full mutual termination rights at any time. No evangelizing. No strings attached. No looking like lame Microsoft marketing pawns. Ok to move ahead?
Elon Musk to Sam Teller – Sep 21, 2016 12:09 AM
Fine by me if they don't use this in active messaging. Would be worth way more than $50M not to seem like Microsoft's marketing bitch.
Subject: biweekly update (Jul 20, 2017)
Summary: Ilya Sutskever provides a biweekly update to Elon Musk and Greg Brockman, detailing progress on various AI projects, including a robot hand solving a Rubik’s cube and advancements in competitive robotics. He also highlights instances of DeepMind using OpenAI’s algorithms.
Ilya Sutskever to Elon Musk, Greg Brockman – Jul 20, 2017 1:56 PM
- The robot hand can now solve a Rubik's cube in simulation: https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIenlLdzN2LXdiOTQ/view?usp=sharing (needs OpenAI login)
  Physical robot will do same in September
- 1v1 bot is no longer exploitable
  It can no longer be beaten using “unconventional” strategies
  On track to beat all humans in 1 month
- Athletic competitive robots: https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIZE4wNVdlbkx6U2M/view?usp=sharing (needs OpenAI login)
- Released an adversarial example that fools a camera from all angles simultaneously:
- DeepMind's directly used one of our algorithms to produce their parkour results:
  DeepMind's results: https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/
  DeepMind's technical papers explicitly state they directly used our algorithms
  Our blogpost about our algorithm: https://blog.openai.com/openai-baselines-ppo/ (DeepMind used an older version).
- Coming up:
  Designing the for-profit structure
  Negotiate merger terms with Cerebras
  More due diligence with Cerebras
Subject: OpenAI notes (Aug 28, 2017)
Summary: Shivon Zilis shares detailed notes with Elon Musk summarizing a conversation with Greg Brockman and Ilya Sutskever about OpenAI’s control structure, time commitment, and future plans. Elon expresses frustration and asks for clarity regarding the commitment to OpenAI.
Shivon Zilis to Elon Musk, (cc: Sam Teller) – Aug 28, 2017 12:01 AM
Elon,

As I'd mentioned, Greg had asked to talk through a few things this weekend. Ilya ended up joining, and they pretty much just shared all of what they are still trying to think through. This is the distillation of that random walk of a conversation... came down to 7 unanswered questions with their commentary below. Please note that I'm not advocating for any of this, just structuring and sharing the information I heard.

1. Short-term control structure?
- Is the requirement for absolute control? They wonder if there is a scenario where there could be some sort of creative overrule provision if literally everyone else disagreed on direction (not just the three of them, but perhaps a broader board)?

2. Duration of control and transition?
- The non-negotiable seems to be an ironclad agreement to not have any one person have absolute control of AGI if it's created. Satisfying this means a situation where, regardless of what happens to the three of them, it's guaranteed that power over the company is distributed after the 2-3 year initial period.

3. Time spent?
- How much time does Elon want to spend on this, and how much time can he actually afford to spend on this? In what timeframe? Is this an hour a week, ten hours a week, something in between?

4. What to do with time spent?
- They don't really know how he prefers to spend time at his other companies and how he'd want to spend his time on this. Greg and Ilya are confident they could build out SW / ML side of things pretty well. They are not confident on the hardware front. They seemed hopeful Elon could spend some time on that since that's where they are weak, but did want his help in all domains he was interested in.

5. Ratio of time spent to amount of control?
- They are cool with less time / less control, more time / more control, but not less time / more control. Their fear is that there won't be enough time to discuss relevant contextual information to make correct decisions if too little time is spent.

6. Equity split?
- Greg still instinctually anchored on equal split. I personally disagree with him on that instinct and he asked for and was receptive to hearing other things he could use to recalibrate his mental model.
- Greg noted that Ilya in some ways has contributed millions by leaving his earning potential on the table at Google.
- One concern they had was the proposed employee pool was too small.

7. Capitalization strategy?
- Their instinct is to raise much more than $100M out of the gate. They are of the opinion that the datacenter they need alone would cost that so they feel more comfortable raising more.

Takeaways: Unsure if any of this is amenable but just from listening to all of the data points they threw out, the following would satisfy their current sticky points:
- Spending 5-10 hours a week with near full control, or spend less time and have less control.
- Having a creative short-term override just for extreme scenarios that was not just Greg / Sam / Ilya.
- An ironclad 2-3yr minority control agreement, regardless of the fates of Greg / Sam / Ilya.
- $200M-$1B initial raise.
- Greg and Ilya's stakes end up higher than 1/10 of Elon's but not significantly (this remains the most ambiguous).
- Increasing employee pool.

Shivon
Elon Musk to Shivon Zilis, (cc: Sam Teller) – Aug 28, 2017 12:08 AM
This is very annoying. Please encourage them to go start a company. I've had enough.
Subject: Honest Thoughts (Sep 20, 2017)
Summary: Ilya Sutskever shares candid reflections with Elon Musk and Sam Altman, addressing concerns about control, the implications of AGI, and the internal dynamics of OpenAI. The thread emphasizes the stakes of their work and calls for a meeting to discuss unresolved issues.
Ilya Sutskever to Elon Musk, Sam Altman, (cc: Greg Brockman, Sam Teller, Shivon Zilis) – Sep 20, 2017 2:08 PM
Elon, Sam,

This process has been the highest stakes conversation that Greg and I have ever participated in, and if the project succeeds, it'll turn out to have been the highest stakes conversation the world has seen. It's also been a deeply personal conversation for all of us. Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake. We have several important concerns that we haven't raised with either of you. We didn't raise them because we were afraid to: we were afraid of harming the relationship, having you think less of us, or losing you as partners.

There is some chance that our concerns will prove to be unresolvable. We really hope it's not the case, but we know we will fail for sure if we don't all discuss them now. And we have hope that we can work through them and all continue working together.

Elon:

We really want to work with you. We believe that if we join forces, our chance of success in the mission is the greatest. Our upside is the highest. There is no doubt about that. Our desire to work with you is so great that we are happy to give up on the equity, personal control, make ourselves easily firable — whatever it takes to work with you. But we realized that we were careless in our thinking about the implications of control for the world. Because it seemed so hubristic, we have not been seriously considering the implications of success.

The current structure provides you with a path where you end up with unilateral absolute control over the AGI. You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you. As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.

Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary. We disagree with your statement that our ability to leave is our greatest power, because once the company is actually on track to AGI, the company will be much more important than any individual.

The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

We have a few smaller concerns, but we think it's useful to mention it here:

In the event we decide to buy Cerebras, my strong sense is that it'll be done through Tesla. But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI's mission. So the overall result may not end up being optimal for OpenAI.

We believe that OpenAI the non-profit was successful because both you and Sam were in it. Sam acted as a genuine counterbalance to you, which has been extremely fruitful. Greg and I, at least so far, are much worse at being a counterbalance to you. We feel this is evidenced even by this negotiation, where we were ready to sweep the long-term AGI control questions under the rug while Sam stood his ground.

Sam:

When Greg and I are stuck, you've always had an answer that turned out to be deep and correct. You've been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don't know how structure decisions will play out over the next month, year, or five years.

But we haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function. We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it. Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?

Greg and Ilya:

We had a fair share of our own failings during this negotiation, and we'll list some of them here (Elon and Sam, I'm sure you'll have plenty to add...):

During this negotiation, we realized that we have allowed the idea of financial return 2-3 years down the line to drive our decisions. This is why we didn't push on the control — we thought that our equity is good enough, so why worry? But this attitude is wrong, just like the attitude of AI experts who don't think that AI safety is an issue because they don't really believe that they'll build AGI.

We did not speak our full truth during the negotiation. We have our excuses, but it was damaging to the process, and we may lose both Sam and Elon as a result.

There's enough baggage here that we think it's very important for us to meet and talk it out. Our collaboration will not succeed if we don't. Can all four of us meet today? If all of us say the truth, and resolve the issues, the company that we'll create will be much more likely to withstand the very strong forces it'll experience.

- Greg & Ilya
Elon Musk to Ilya Sutskever, (cc: Sam Altman; Greg Brockman; Sam Teller; Shivon Zilis) – Sep 20, 2017 2:17 PM
Guys, I've had enough. This is the final straw. Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay or I'm just being a fool who is essentially providing free funding for you to create a startup. Discussions are over.
Elon Musk to Ilya Sutskever, Sam Altman (cc: Greg Brockman, Sam Teller, Shivon Zilis) – Sep 20, 2017 3:08 PM
To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.
Sam Altman to Elon Musk, Ilya Sutskever (cc: Greg Brockman, Sam Teller, Shivon Zilis) – Sep 21, 2017 9:17 AM
i remain enthusiastic about the non-profit structure!
Subject: Non-profit (Sep 22, 2017)
Summary: Shivon Zilis updates Elon Musk on Greg Brockman and Ilya Sutskever’s decision to continue with the non-profit structure and discusses Sam Altman’s response. Elon acknowledges the update, and further communication highlights Sam’s reflections on trust and future fundraising.
Shivon Zilis to Elon Musk, (cc: Sam Teller) – Sep 22, 2017 9:50 AM
Hi Elon, Quick FYI that Greg and Ilya said they would like to continue with the non-profit structure. They know they would need to provide a guarantee that they won't go off doing something else to make it work. Haven't spoken to Altman yet but he asked to talk this afternoon so will report anything I hear back. If anything I can do to help let me know. Shivon
Elon Musk to Shivon Zilis, (cc: Sam Teller) – Sep 22, 2017 10:01 AM
Ok
Shivon Zilis to Elon Musk, (cc: Sam Teller) – Sep 22, 2017 5:54 PM
From Altman:

Structure: Great with keeping non-profit and continuing to support it.

Trust: Admitted that he lost a lot of trust with Greg and Ilya through this process. Felt their messaging was inconsistent and felt childish at times.

Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.

Fundraising: Greg and Ilya have the belief that 100's of millions can be achieved with donations if there is a definitive effort. Sam thinks there is a definite path to 10's of millions but TBD on more. He did mention that Holden was irked by the move to for-profit and potentially offered more substantial amount of money if OpenAI stayed a non-profit, but hasn't firmly committed. Sam threw out a $100M figure for this if it were to happen.

Communications: Sam was bothered by how much Greg and Ilya keep the whole team in the loop with happenings as the process unfolded. Felt like it distracted the team. On the other hand, apparently in the last day almost everyone has been told that the for-profit structure is not happening and he is happy about this at least since he just wants the team to be heads down again.

Shivon
Subject: ICO (Jan 21, 2018)
Summary: Sam Altman informs Elon Musk and the team about concerns raised by the safety team regarding the proposed ICO (Initial Coin Offering). The thread discusses the importance of gathering input and maintaining confidentiality.
Sam Altman to Elon Musk (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) – Jan 21, 2018 5:08 PM
Elon— Heads up, spoke to some of the safety team and there were a lot of concerns about the ICO and possible unintended effects in the future. Planning to talk to the whole team tomorrow and invite input. Going to emphasize the need to keep this confidential, but I think it's really important we get buy-in and give people the chance to weigh in early. Sam
Elon Musk to Sam Altman (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) – Jan 21, 2018 5:56 PM
Absolutely
Subject: AI updates (Mar 25, 2018)
Summary: Shivon Zilis provides Elon Musk with updates on OpenAI’s fundraising plans, board changes, and a partnership with Cerebras for chip testing. Elon acknowledges the updates and offers further input if needed.
Shivon Zilis to Elon Musk, (cc: Sam Teller) – Mar 25, 2018 11:03 AM
OpenAI Fundraising:
- No longer doing the ICO / “instrument to purchase compute in advance” type structure. Altman is thinking through an instrument where the 4-5 large corporates who are interested can invest with a return capped at 50x if OpenAI does get to some semblance of money-making AGI. They apparently seem willing just for access reasons. He wants to discuss with you in more detail.

Formal Board Resignation:
- You're still technically on the board so need to send a quick one-liner to Sam Altman saying something like “With this email I hereby resign as a director of OpenAI, effective Feb 20th 2018”.

Future Board:
- Altman said he is cool with me joining then having to step off if I become conflicted, but is concerned that others would consider it a burned bridge if I had to step off. I think best bet is not to join for now and be an ambiguous advisor but let me know if you feel differently. They have Adam D’Angelo as the potential fifth to take your place, which seems great?

TeslaAI:
- Andrej has three candidates in the pipeline, may have 1-2 come in to meet you on Tuesday. He will send you a briefing note about them. Also, he’s working on starter language for a potential release that will be ready to discuss Tuesday. It will follow the “full-stack AI lab” angle we talked about but, if that doesn’t feel right, please course correct... is tricky messaging.

Cerebras:
- Chip should be available in August for them to test, and they plan to let others have remote access in September. The Cerebras guy also mentioned that a lot of their recent customer interest has been from companies upset about the Nvidia change in terms of service (the one that forces companies away from consumer grade GPUs to enterprise Pascals / Voltas). Scott Gray and Ilya continue to spend a bunch of time with them.
Elon Musk to Shivon Zilis, (cc: Sam Teller) – Mar 25, 2018 11:15 AM
Thanks for the update. Let me know if there’s anything specific you need from me.
Subject: The OpenAI Charter (Apr 2, 2018)
Summary: Sam Altman shares the draft of OpenAI’s Charter with Elon Musk, inviting feedback before its release. Elon responds briefly, indicating his approval.
Sam Altman to Elon Musk, (cc: Shivon Zilis) – Apr 2, 2018 1:54 PM
We are planning to release this next week--any thoughts?

The OpenAI Charter

OpenAI's mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically-valuable creative work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

Broadly Distributed Benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always assiduously act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-Term Safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case by case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next 2 years".

Technical Leadership
To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities — policy and safety advocacy alone would be insufficient. We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

Cooperative Orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Elon Musk to Sam Altman – Apr 2, 2018 2:45 PM
Sounds fine
Subject: AI updates (continuation) (Apr 23, 2018)
Summary: Shivon Zilis follows up with Elon Musk on recent developments at OpenAI, including Sam Altman’s discussions on fundraising and changes in the advisory structure, as well as technical progress on Dota bots and internal concerns about AGI timelines.
Shivon Zilis to Elon Musk, (cc: Sam Teller) – Apr 23, 2018 1:49 AM
Updated info per a conversation with Altman. You’re tentatively set to speak with him on Tuesday.

Financing:
- He confirmed again that they are definitely not doing an ICO but rather equity that has a fixed maximum return.
- Would be a rather unique subsidiary structure for the raise which he wants to walk you through.
- Wants to move within 4-6 weeks on first round (probably largely Reid money, potentially some corporates).

Tech:
- Says Dota 5v5 looking better than anticipated.
- The sharp rise in Dota bot performance is apparently causing people internally to worry that the timeline to AGI is sooner than they’d thought before.
- Thinks they are on track to beat Montezuma’s Revenge shortly.

Time allocation:
- I’ve reallocated most of the hours I used to spend with OpenAI to Neuralink and Tesla. This naturally happened with you stepping off the board and related factors — but if you’d prefer I pull more hours back to OpenAI oversight please let me know.
- Sam and Greg asked if I’d be on their informal advisory board (just Gabe Newell so far), which seems fine and better than the formal board given potential conflicts? If that doesn’t feel right let me know what you’d prefer.
Subject: OpenAI (Mar 6, 2019)
Summary: Sam Altman informs Elon Musk about the creation of OpenAI LP, a capped-profit company to enable greater investment in AI research. The email highlights the structure, mission alignment, and potential future funding, with Elon requesting that his lack of financial involvement be made explicit.
Sam Altman to Elon Musk, (cc: Sam Teller, Shivon Zilis) – Mar 6, 2019 3:13 PM
Elon— Here is a draft post we are planning for Monday. Anything to add/edit?

TL;DR:
- We've created the capped-profit company and raised the first round, led by Reid and Vinod.
- We did this in a way where all investors are clear that they should never expect a profit.
- We made Greg chairman and me CEO of the new entity.
- We have tested this structure with potential next-round investors and they seem to like it.
- Speaking of the last point, we are now discussing a multi-billion dollar investment which I would like to get your advice on when you have time. Happy to come see you sometime you are in the Bay Area.

Sam

---

We've created OpenAI LP, a new "capped-profit" company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world. Due to the exponential growth of compute investments in the field, we’ve needed to scale much faster than we’d planned when starting OpenAI. We expect to need to raise many billions of dollars in upcoming years for large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.

We haven’t been able to raise that much money as a nonprofit, and though we considered becoming a for-profit, we were afraid that doing so would mean giving up our mission. Instead, we created a new company, OpenAI LP, as a hybrid for-profit and nonprofit — which we are calling a "capped-profit" company.

The fundamental idea of OpenAI LP is that investors and employees can get a fixed return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount — and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP — are owned by the original OpenAI Nonprofit entity.

Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit”.

The mission comes first
We’ve designed OpenAI LP to put our overall mission — ensuring the creation and adoption of safe and beneficial AGI — over generating returns for investors. To minimize conflicts of interest with the mission, OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

Our employee and investor paperwork starts like this. The general partner refers to OpenAI Nonprofit (whose official name is “OpenAI Inc”); limited partners refers to investors and employees. Only a minority of board members can hold financial stakes in the partnership. Furthermore, only board members without such stakes are allowed to vote on decisions where the interests of limited partners and the nonprofit’s mission may conflict — including any decisions about making payouts to investors and employees.

Corporate structure
Another provision from our paperwork specifies that the nonprofit retains control. (The paperwork uses OpenAI LP’s official name “OpenAI, L.P.”.)
As mentioned above, economic returns for investors and employees are capped (with the cap negotiated in advance on a per-limited partner basis). Any excess returns are owned by the nonprofit. Our goal is to ensure that most of the value we create if successful is returned to the world, so we think this is an important first step. Returns for our first round of investors are capped to 100x their investment, and we expect this multiple to be lower for future rounds.

What OpenAI does
Our day-to-day work remains the same. Today, we believe we can build the most value by focusing exclusively on developing new AI technologies, not commercial products. Our structure gives us flexibility for how to make money in the long term, but we hope to figure that out only once we’ve created safe AGI (though we’re open to non-distracting revenue sources such as licensing in the interim). OpenAI LP currently employs around 100 people organized into three main areas: capabilities (advancing what AI systems can do), safety (ensuring those systems are aligned with human values), and policy (ensuring appropriate governance for such systems). OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.

Safety
We are concerned about AGI’s potential to cause rapid change, whether through machines pursuing goals misspecified by their operator, malicious humans subverting deployed systems, or an out-of-control economy that grows without resulting in improvements to human lives. As described in our Charter, we are willing to merge with a value-aligned organization (even if it means reduced or zero payouts to investors) to avoid a competitive race which would make it hard to prioritize safety.

Who’s involved
OpenAI Nonprofit’s board consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley. Elon Musk left the board of OpenAI Nonprofit in February 2018 and is not involved with OpenAI LP. Our investors include Reid Hoffman and Khosla Ventures.

We are traveling a hard and uncertain path, but we have designed our structure to help us positively affect the world should we succeed in creating AGI. If you’d like to help us make this mission a reality, we’re hiring :)!
Elon Musk to Sam Altman – Mar 11, 2019 3:04 PM
Please be explicit that I have no financial interest in the for-profit arm of OpenAI.
Sam Altman to Elon Musk – Mar 11, 2019 3:11 PM
on it
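The capped-return mechanic described in the draft post above (and the 50x figure Altman floated a year earlier) reduces to a simple calculation: investors keep returns up to a negotiated multiple of their investment, and anything beyond that flows to the nonprofit. The sketch below is this editor's illustration of that arithmetic under stated assumptions, not OpenAI's actual partnership terms; the function name and the clean two-way split are hypothetical.

```python
# Minimal sketch of a "capped-profit" return split (editor's illustration,
# not taken from the emails or from OpenAI LP's legal documents).
# Assumption: the investor keeps returns up to cap_multiple * investment,
# and everything above that goes to the nonprofit.

def capped_profit_split(investment: float, gross_return: float,
                        cap_multiple: float = 100.0) -> tuple[float, float]:
    """Return (investor_payout, nonprofit_excess) under a simple return cap."""
    cap = investment * cap_multiple                  # most the investor can ever receive
    investor_payout = min(gross_return, cap)         # returns up to the cap go to the investor
    nonprofit_excess = max(gross_return - cap, 0.0)  # everything beyond the cap goes to the nonprofit
    return investor_payout, nonprofit_excess

# Example: a $10M first-round investment that eventually returns $5B in value
payout, excess = capped_profit_split(10e6, 5e9)
print(f"Investor receives ${payout:,.0f}; nonprofit retains ${excess:,.0f}")
# Investor receives $1,000,000,000; nonprofit retains $4,000,000,000
```

In practice the post notes that the cap is negotiated per limited partner and is expected to fall in later rounds, so the multiple would be a per-investor parameter rather than a single constant.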
Subject: OpenAI Funding & Future Direction (Aug 5, 2020)
Summary: Sam Altman and Greg Brockman discuss the need for new funding to accelerate OpenAI’s mission, particularly in response to increased competition in AI research. They touch on future investment strategy, scaling their infrastructure, and the alignment of investors with OpenAI’s nonprofit mission.
Sam Altman to Elon Musk, (cc: Greg Brockman, Shivon Zilis) – Aug 5, 2020 10:20 AM
Elon, I wanted to update you on OpenAI’s current funding situation and plans moving forward. Since our last conversation, we have made significant progress in scaling our AI models and infrastructure, but to continue at this pace, we need a substantial increase in funding. As you know, AI research is getting more competitive, and we need to stay ahead in order to fulfill our mission. We are now considering raising a large round of capital, potentially in the range of $1-2 billion. The challenge, as always, is ensuring we maintain our mission-first structure, and I wanted to get your input on how we can do that.

We’re looking at investors who understand our long-term vision and agree to our capped-profit model. But we also need to ensure that any investment comes with the understanding that the ultimate goal is to benefit humanity, not just profit. We’re thinking about structuring the new round of funding in a way that aligns incentives with our mission — with strong emphasis on long-term control and transparency.

I’d love to get your thoughts on the best way to structure this, particularly in terms of governance and ensuring that OpenAI remains focused on safe AI deployment. Happy to chat further, and let me know if you think it makes sense to bring you into the discussion with potential investors. Sam

Greg Brockman to Elon Musk, (cc: Sam Altman, Shivon Zilis) – Aug 5, 2020 11:03 AM
Elon, just following up on Sam's email. We’re moving forward with some initial conversations with large investors, but we’ve been very careful to ensure that the terms respect the original mission and governance. We don’t want to get into a situation where investors could exert undue pressure on our direction or control. At the same time, we need to ensure we have enough funding to push the boundaries of AI safety and capability. We’ve been talking with a few potential partners who are very aligned with our mission, but the funding commitments would need to be substantial to achieve our next phase of development. The ask is not trivial, and we want to make sure that we remain in control of our destiny while still incentivizing significant investment in the future of AI research. Would love to discuss more if you have thoughts on how to structure this going forward. Greg
Subject: AI Ethics and Governance Framework (Oct 12, 2021)
Summary: Elon Musk and Sam Altman discuss the importance of creating a robust ethical and governance framework for AI development, particularly around autonomous systems. They explore the role of regulations, collaboration with governments, and the potential for international treaties on AI safety.
Sam Altman to Elon Musk – Oct 12, 2021 2:15 PM
Elon, I’ve been thinking a lot about the ethical implications of AI as we continue to scale these systems. While we’re making incredible progress on the tech side, it seems increasingly clear that we need a much more robust governance structure to ensure that AI is developed and deployed responsibly. There’s a lot of momentum in the space around regulation, but I’m worried that without a clear framework, we could end up with piecemeal regulations that don’t adequately address the global challenges AI will bring.

I think it would be worth discussing how we can take the lead on shaping that framework, working both with governments and other tech companies. At the very least, we need to start the conversation now and lay out a roadmap for future collaboration. I would love your thoughts on how we can approach this — how we balance innovation with safety and governance, and whether we need to push for an international treaty or something similar. Sam

Elon Musk to Sam Altman – Oct 12, 2021 3:00 PM
Sam, I agree with you completely on this. We have to get ahead of these issues now, or we risk creating a future where the technology outpaces our ability to control or even understand it. I think there’s a lot of value in pushing for global cooperation on AI regulation. In particular, we should work on building an international treaty, much like nuclear nonproliferation, that sets clear boundaries on what is acceptable in terms of AI development, especially for autonomous systems. We also need to make sure that AI is developed in a way that is transparent, and I think we can lead by example here. OpenAI should be as transparent as possible in our approach, and we should hold other companies to the same standards. We can’t afford to wait until the technology is fully deployed and out of control. The best time to shape the future of AI governance is now, while we still have some influence. Elon
Subject: AI Safety Standards Proposal (Feb 20, 2022)
Summary: Greg Brockman and Sam Altman discuss the draft of a proposal for AI safety standards. They emphasize the need for clarity on safety protocols, particularly for AI models used in high-risk applications such as healthcare and autonomous vehicles, and decide to seek Elon Musk’s feedback before finalizing the proposal.
Greg Brockman to Sam Altman – Feb 20, 2022 9:30 AM
Sam, I’ve attached the latest draft of the AI Safety Standards proposal. We’re focusing on high-risk use cases for now, with a particular emphasis on ensuring transparency and ethical oversight when it comes to autonomous systems. I think we’re in a good place, but I’d like your input on whether the safety protocols are clear enough and whether we’ve covered the most important aspects.

I’ve also been thinking about the ways we can structure the review board — it might be worth involving external experts, including ethicists, to make sure we’re not missing anything in our approach. It’d be great if we could get Elon’s feedback before we finalize this. Do you think we should send him a copy? Greg

Sam Altman to Greg Brockman – Feb 20, 2022 10:00 AM
Greg, Thanks for sending this over. I’ve read through it, and I think the proposal looks strong overall. The focus on transparency is key, and I agree that we should emphasize the importance of independent oversight, particularly for areas like healthcare, where the stakes are incredibly high.

On the review board, I think it’s a good idea to include ethicists and other external experts. We should also consider having a more formal mechanism for ongoing public feedback, particularly as the technology scales. It’s important that we don’t just build the system and then release it into the world — we need to have an ongoing dialogue with the public, and I think that would help us build more trust in the process.

As for getting Elon’s feedback, I think it’s definitely worth sending him the draft. He’s been very vocal about AI safety, and I think his input will help refine the proposal even further. Sam

Greg Brockman to Elon Musk, (cc: Sam Altman) – Feb 20, 2022 11:45 AM
Elon, We’re putting the final touches on the AI Safety Standards proposal, and we’d love to get your feedback. The draft includes safety protocols for high-risk AI applications, like autonomous vehicles and healthcare AI, and focuses heavily on transparency, ethical oversight, and the establishment of a formal review board. Would you mind reviewing the document and sharing any thoughts you have? We want to ensure this is as comprehensive as possible before we take it to regulators and other industry leaders. Let me know if you’d like to set up a quick call to discuss. Greg
Subject: Long-Term AI Alignment Goals (July 5, 2022)
Summary: Elon Musk, Sam Altman, and Ilya Sutskever discuss their long-term vision for AI alignment. They address the risks of advanced AI systems misaligned with human values, and explore strategies to ensure AI remains beneficial in the future. The conversation also touches on AI’s potential to drive societal change and the ethical considerations of rapid AI deployment.
Elon Musk to Sam Altman, Ilya Sutskever – July 5, 2022 8:45 AM
Sam, Ilya, As we continue to develop more advanced AI systems, I’m becoming more concerned about the long-term alignment of these technologies with human values. We’ve seen how quickly things can spiral out of control with powerful systems, and it’s crucial that we not only think about short-term safety but also long-term alignment.

I think there’s a need for a much more robust framework that accounts for the evolving nature of AI. The systems we build today will not be the same as those we’ll have in 10 or 20 years. We need to make sure that whatever we build now can evolve in a way that stays aligned with our long-term goals. What are your thoughts on this? Are we taking enough into account with our current safety frameworks, or do we need to rethink the entire approach? Elon

Sam Altman to Elon Musk, Ilya Sutskever – July 5, 2022 9:30 AM
Elon, Ilya, I completely agree that we need to plan for the long term when it comes to AI alignment. The systems we’re building today are incredibly powerful, but we have to ensure that they remain aligned with human values as they grow more capable.

One thing I’ve been thinking about is the potential for AI systems to develop their own goals over time. As these systems become more sophisticated, we might no longer be able to predict their behavior with the same level of certainty. It’s crucial that we find ways to build systems that are not only safe now but that can evolve in a way that remains aligned with the broader human agenda. We should definitely consider updating our safety protocols to account for this. I think it’s something that needs to be discussed more openly with other AI researchers too. Sam

Ilya Sutskever to Elon Musk, Sam Altman – July 5, 2022 10:15 AM
Elon, Sam, I agree with both of you that the long-term alignment of AI is a critical issue. We are at the cutting edge of technology, and there are still many unknowns about how these systems will behave once they reach a certain level of complexity.

One thing that’s worth exploring is the concept of recursive self-improvement. If AI systems are capable of improving their own algorithms, it could potentially lead to rapid, uncontrollable advancement. This could have unintended consequences if we don’t properly constrain the development of these systems.

I think one possible solution is to create a set of ethical guidelines that can adapt as the technology evolves. This would allow us to build a more flexible framework that accounts for the unique risks posed by highly autonomous systems. But we’ll need a coordinated effort from researchers, developers, and policymakers to make this work. Ilya

Sam Altman to Elon Musk, Ilya Sutskever – July 5, 2022 11:00 AM
Ilya, Elon, I think the points you’ve both raised are critical. As AI becomes more powerful, we need to ensure that it doesn’t just mimic human behavior but aligns with the broader human goals of fairness, justice, and wellbeing. I agree with Ilya’s point about recursive self-improvement — this is an area where we’ll need to be extra careful. But I also think it’s important that we don’t fall into the trap of assuming that we can fully control these systems forever. We need to build mechanisms that allow us to adapt and evolve as the technology changes. Let’s continue to refine our long-term alignment goals and try to find some concrete steps we can take to ensure AI remains a force for good. Sam