More from Greg Brockman
AI has recently crossed a utility threshold, where cutting-edge models such as GPT-3, Codex, and DALL-E 2 are actually useful and can perform tasks computers cannot do any other way. The act of producing these models is an exploration of a new frontier, with the discovery of unknown capabilities, scientific progress, and incredible product applications as the rewards. And perhaps most exciting for me personally, because the field is fundamentally about creating and studying software systems, great engineers are able to contribute at the same level as great researchers to future progress.

"A self-learning AI system." by DALL-E 2

I first got into software engineering because I wanted to build large-scale systems that could have a direct impact on people's lives. I attended a math research summer program shortly after I started programming, and my favorite result of the summer was a scheduling app I built for people to book time with the professor. Specifying every detail of how a program should work is hard, and I'd always dreamed of one day putting my effort into hypothetical AI systems that could figure out the details for me. But after taking one look at the state of the art in AI in 2008, I knew it wasn't going to work any time soon and instead started building infrastructure and product for web startups.

DALL-E 2's rendition of "The two great pillars of the house of artificial intelligence" (which according to my co-founder Ilya Sutskever are great engineering, and great science using this engineering)

It's now almost 15 years later, and the vision of systems which can learn their own solutions to problems is becoming incrementally more real. Perhaps most exciting is the underlying mechanism by which it's advancing — at OpenAI, and in the field generally, precision execution on large-scale models is a force multiplier on AI progress, and we need more people with strong software skills who can deliver these systems. This is because we build AI models out of unprecedented amounts of compute; these models in turn have unprecedented capabilities; we discover new phenomena and explore the limits of what these models can and cannot do; and then we use all these learnings to build the next model.

"Harnessing the most compute in the known universe" by DALL-E 2

Harnessing this compute requires deep software skills and the right kind of machine learning knowledge. We need to coordinate lots of computers, build software frameworks that allow for hyperoptimization in some cases and flexibility in others, serve these models to customers really fast (which is what I worked on in 2020), and make it possible for a small team to manage a massive system (which is what I work on now). Engineers with no ML background can contribute from the day they join, and the more ML they pick up the more impact they have. The OpenAI environment makes it relatively easy to absorb the ML skills, and indeed, many of OpenAI's best engineers transferred from other fields.

All that being said, AI is not for every software engineer. I've seen about a 50-50 success rate among engineers entering this field. The most important determiner is a specific flavor of technical humility. Many dearly-held intuitions from other domains will not apply to ML. The engineers who make the leap successfully are happy to be wrong (since it means they learned something), aren't afraid not to know something, and don't push solutions that others resist until they've gathered enough intuition to know for sure that it matches the domain.

"A beaver who has humbly recently become a machine learning engineer" by DALL-E 2

I believe that AI research is today by far the most impactful place for engineers who want to build useful systems to be working, and I expect this statement to become only more true as progress continues. If you'd like to work on creating the next generation of AI models, email me (gdb@openai.com) with any evidence of exceptional accomplishment in software engineering.
The text of my speech introducing OpenAI Five at Saturday's OpenAI Five Finals event, where our AI beat the world champions at Dota 2:

"Welcome everyone. This is an exciting day. First, this is an historic moment: this will be the first time that an AI has even attempted to play the world champions in an esports game. OG is simply on another level relative to other teams we've played. So we don't know what's going to happen, but win or lose, these will be games to remember.

And you know, whether it's OpenAI Five or DeepMind's very impressive StarCraft bot, this event is really about something bigger than who wins or loses: letting people connect with the strange, exotic, yet tangible intelligences produced by today's rapidly progressing AI technology.

We're all used to computer programs which have been meticulously coded by a human programmer. Do one thing that the human didn't anticipate, and the program will break. We think of our computers as unthinking machines which can't innovate, can't be creative, can't truly understand. But to play Dota, you need to do all these things. So we needed to do something different.

OpenAI Five is powered by deep reinforcement learning — meaning that we didn't code in how to play Dota. We instead coded in how to learn. Five tries out random actions, and learns from a reward or punishment. In its 10 months of training, it's experienced 45,000 years of Dota gameplay against itself. The playstyles it has devised are its own — they are truly creative and dreamed up by our computer — and so from Five's perspective, today's games are going to be its first encounter with an alien intelligence (no offense to OG!).

The beauty of this technology is that our learning code doesn't know it's meant for Dota. That makes it general purpose, with amazing potential to benefit our lives. Last year we used it to control a robotic hand that no one could program. And we expect to see similar technology in new interactive systems, from elderly care robots to creative assistants to other systems we can't dream of yet.

This is the final public event for OpenAI Five, but we expect to do other Dota projects in the future. I want to thank the incredible team at OpenAI, everyone who worked directly on this project or cheered us on. I want to thank those who have supported the project: Valve, dozens of test teams, today's casters, and yes, even all the commenters on Reddit. And I want to give massive thanks to our fantastic guests, OG, who have taken time out of their tournament schedule to be here today.

I hope you enjoy the show — and just to keep things in perspective, no matter how surprising the AIs are to us, know that we're even more surprising to them!"
This post is co-written by Greg Brockman and Ilya Sutskever.

We've been working on OpenAI for the past three years. Our mission is to ensure that artificial general intelligence (AGI) — which we define as automated systems that outperform humans at most economically valuable work — benefits all of humanity. Today we announced a new legal structure for OpenAI, called OpenAI LP, to better pursue this mission — in particular to raise more capital as we attempt to build safe AGI and distribute its benefits. In this post, we'd like to help others understand how we think about this mission.

Why now?

The founding vision of the field of AI was "… to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it", and to eventually build a machine that thinks — that is, an AGI. But over the past 60 years, progress stalled multiple times and people started thinking of AI as a field that wouldn't deliver.

Since 2012, deep learning has generated sustained progress in many domains using a small, simple set of tools, which have the following properties:

Generality: deep learning tools are simple, yet they apply to many domains, such as vision, speech recognition, speech synthesis, text synthesis, image synthesis, translation, robotics, and game playing.

Competence: today, the only way to get competitive results on most "AI-type problems" is through the use of deep learning techniques.

Scalability: good old-fashioned AI was able to produce exciting demos, but its techniques had difficulty scaling to harder problems. In deep learning, by contrast, more computational power and more data lead to better results. It has also proven easy (if costly) to rapidly increase the amount of compute productively used by deep learning experiments.

The rapid progress of useful deep learning systems with these properties makes us feel that it's reasonable to start taking AGI seriously — though it's hard to know how far away it is.

The impact of AGI

Just like a computer today, an AGI will be applicable to a wide variety of tasks — and just like computers in 1900 or the Internet in 1950, it's hard to describe (or even predict) the kind of impact AGI will have. But to get a sense, imagine a computer system which can do the following activities with minimal human input:

Make a scientific breakthrough at the level of the best scientists

Productize that breakthrough and build a company, with a skill comparable to the best entrepreneurs

Rapidly grow that company and manage it at large scale

The upside of such a computer system is enormous — for an illustrative example, an AGI following the pattern above could produce amazing healthcare applications deployed at scale. Imagine a network of AGI-powered computerized doctors that accumulates a superhuman amount of clinical experience, allowing it to produce excellent diagnoses, deeply understand the nuanced effects of various treatments across many conditions, and greatly reduce the human error factor of healthcare — all for very low cost and accessible to everyone.

Risks

We already live in a world with entities that surpass individual human abilities, which we call companies. If working on the right goals in the right way, companies can produce huge amounts of value and improve lives. But if not properly checked, they can also cause damage, like logging companies that cut down rain forests, cigarette companies that get children smoking, or scams like Ponzi schemes.

We think of AGI as being like a hyper-effective company, with commensurate benefits and risks. We are concerned about AGI pursuing goals misspecified by its operator, malicious humans subverting a deployed AGI, or an out-of-control economy that grows without resulting in improvements to human lives. And because it's hard to change powerful systems once they've been deployed — just think about how hard it's been to add security to the Internet — we think it's important to address AGI's safety and policy risks before it is created.

OpenAI's mission is to figure out how to get the benefits of AGI and mitigate the risks — and make sure those benefits accrue to all of humanity. The future is uncertain, and there are many ways in which our predictions could be incorrect. But if they turn out to be right, this mission will be critical. If you'd like to work on this mission, we're hiring!

About us

Ilya: I've been working on deep learning for 16 years. It was fun to witness deep learning transform from being a marginalized subfield of AI into one of the most important families of scientific advances in recent history. As deep learning was getting more powerful, I realized that AGI might become a reality on a timescale relevant to my lifetime. And given AGI's massive upside and significant risks, I want to maximize the positive parts of this impact and minimize the negative.

Greg: Technology causes change, both positive and negative. AGI is the most extreme kind of technology that humans will ever create, with extreme upside and downside. I work on OpenAI because making AGI go well is the most important problem I can imagine contributing towards. Today I try to spend most of my time on technical work, and I also work to spark better public discourse about AGI and related topics.
The text of my speech introducing OpenAI Five at yesterday's Benchmark event:

"We're here to watch humans and AI play Dota, but today's match will have implications for the world. OpenAI's mission is to ensure that when we can build machines as smart as humans, they will benefit all of humanity. That means both pushing the limits of what's possible and ensuring future systems are safe and aligned with human values.

We work on Dota because it is a great training ground for AI: it is one of the most complicated games, involving teamwork, real-time strategy, imperfect information, and an astronomical number of combinations of heroes and items. We can't program a solution, so Five learns by playing 180 years of games against itself every day — sadly that means it can't learn from the players up here unless they played for a few decades. It's powered by 5 artificial neural networks which act like an artificial intuition. Five's neural networks are about the size of the brain of an ant — still far from what we all have in our heads.

One year ago, we beat the world's top professionals at 1v1 Dota. People thought 5v5 would be totally out of reach. 1v1 requires mechanics and positioning; people did not expect the same system to learn strategy. But our AI system can learn to solve problems it was not even designed to solve — we just used the same technology to learn to control a robotic hand — something no one could program.

The computational power for OpenAI Five would have been impractical two years ago. But the availability of computation for AI has been increasing exponentially, doubling every 3.5 months since 2012, and one day technologies like this will become commonplace.

Feel free to root for either team. Either way, humanity wins."

I'm very excited to see where the upcoming months of OpenAI Five development and testing take us.
More in programming
The best engineering teams take control of their tools. They help develop the frameworks and libraries they depend on, and they do this by running production code on edge — the unreleased next version. That's where progress is made, that's where participation matters most.

This sounds scary at first. Edge? Isn't that just another word for danger? What if there's a bug?! Yes, what if? Do you think bugs either just magically appear or disappear? No, they're put there by programmers and removed by the very same. If you want bug-free frameworks and libraries, you have to work for it, but if you do, the reward for your responsibility is increased engineering excellence.

Take Rails 8.1 as an example. We just released the first beta version at Rails World, but Shopify, GitHub, 37signals, and a handful of other frontier teams have already been running this code in production for almost a year. Of course, there were bugs along the way, but good automated testing and diligent programmers caught virtually all of them before they went to production.

It wasn't always this way. Once upon a time, I felt like I had one of the only teams running Rails on edge in production. But now two of the most important web apps in the world are doing the same! At an incredible scale and criticality. This has allowed both of them, and the few others with the same frontier ambition, to foster a truly elite engineering culture. One that isn't just a consumer of open source software, but a real-time co-creator. This is a step function in competence and prowess for any team.

It's also an incredible motivation boost. When your programmers are able to directly influence the tools they're working with, they're far more likely to do so, and thus they go deeper, learn more, and create connections to experts in the same situation elsewhere. But this requires being able to immediately use the improvements or bug fixes they help devise. It doesn't work if you sit around waiting patiently for the next release before you dare dive in.

Far more companies could do this. Far more companies should do this. Whether it's with Ruby, Rails, Omarchy, or whatever you're using, your team could level up by getting more involved, taking responsibility for finding issues on edge, and reaping the reward of excellence in the process. So what are you waiting on?
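For a Rails app, "running on edge" mostly comes down to pointing the Gemfile at the framework's main branch instead of a released gem. Here's a minimal sketch, assuming a standard Bundler setup; tracking main (rather than a specific commit) is just one reasonable choice, not something the post prescribes:

```ruby
# Gemfile
# Track Rails edge (the unreleased next version) straight from GitHub.
# `bundle update rails` then pulls in the latest commits on main, and your
# automated test suite is what catches regressions before they ship.
gem "rails", github: "rails/rails", branch: "main"
```

Teams that want more control sometimes pin to a known-good commit with Bundler's `ref:` option and bump it deliberately, which keeps builds reproducible while still staying close to edge.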
Here on a summer night in the grass and lilac smell
Drunk on the crickets and the starry sky,
Oh what fine stories we could tell
With this moonlight to tell them by.

A summer night, and you, and paradise,
So lovely and so filled with grace,
Above your head, the universe has hung its …

Continue reading Dreams of Late Summer →
The first in a series of posts about doing things the right way
A deep dive into the Action Cache, the CAS, and the security issues that arise from using Bazel with a remote cache but without remote execution
(I present to you my stream of consciousness on the topic of casing as it applies to the web platform.)

I'm reading about the new command and commandfor attributes — which I'm super excited about, declarative behavior invocation in HTML? YES PLEASE!! — and one thing that strikes me is the casing in these APIs.

For example, the command attribute has a variety of values in HTML which correspond to APIs in JavaScript. The show-popover attribute value maps to .showPopover() in JavaScript. hide-popover maps to .hidePopover(), etc. So what we have is:

lowercase in attribute names, e.g. commandfor="..."
kebab-case in attribute values, e.g. show-popover
camelCase for JS counterparts, e.g. showPopover()

After thinking about this a little more, I remember that HTML attribute names are case insensitive, so the browser will normalize them to lowercase during parsing. Given that, I suppose you could write commandFor="..." but it's effectively the same. Ok, lowercase attribute names in HTML makes sense.

The related popover attributes follow the same convention:

popovertarget
popovertargetaction

And there are many other attribute names in HTML that are lowercase, e.g.:

maxlength
novalidate
contenteditable
autocomplete
formenctype

So that all makes sense. But wait, there are some attribute names with hyphens in them, like aria-label="..." and data-value="...". So why isn't it command-for="..."? Well, upon further reflection, I suppose those attributes were named that way for extensibility's sake: they are essentially wildcard attributes that represent a family of attributes that are all under the same namespace: aria-* and data-*.

But wait, isn't that an argument for doing popover-target and popover-target-action? Or command and command-for? But wait (I keep saying that), there are kebab-case attribute names in HTML — like http-equiv on the <meta> tag, or accept-charset on the form tag — but those seem more like legacy exceptions.

It seems like the only answer here is: there is no rule. Naming is driven by convention, and decisions are made on a case-by-case basis. But if I had to summarize, it would probably be that the default casing for new APIs tends to follow the rules I outlined at the start (and what's reflected in the new command APIs):

lowercase for HTML attribute names
kebab-case for HTML attribute values
camelCase for JS counterparts

Let's not even get into SVG attribute names. We need one of those "bless this mess" signs that we can hang over the World Wide Web.
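To make the three casings concrete, here's a small sketch using the command and popover APIs discussed above; the id and button label are mine, purely for illustration:

```html
<!-- lowercase attribute names (command, commandfor), kebab-case attribute value (show-popover) -->
<button command="show-popover" commandfor="greeting">Say hi</button>
<div id="greeting" popover>Hello!</div>

<script>
  // camelCase for the JS counterparts of the kebab-case command values
  const greeting = document.getElementById("greeting");
  greeting.showPopover(); // same effect as clicking the show-popover button
  greeting.hidePopover();
</script>
```

Same operation spelled three ways, depending on whether you're writing an attribute name, an attribute value, or calling the DOM method.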