Motivating the best people to work on a problem, especially a difficult problem, is hard. Oftentimes, people of great intelligence are also acutely concerned with the impacts of their actions. They need to feel they are contributing positively to society and the future.

We should be happy this is the case. Great minds pursuing great aims is undoubtedly bullish for humanity. But how might we persuade these great minds to work on something which is not so positive?

AI is one such topic. Most well-known AI researchers entertain the possibility that an AI could be built which actively attempts to harm humanity. These beliefs have hampered AI research in recent years, most notably at Google, where ethics committees and other “morality blockers” prevented the company from capitalizing on its breakthroughs in attention and transformers.

OpenAI was more clever with its pitch to conscientious researchers. They said: “someone will build AGI; our aim is to get there first so we can control it.” They bolstered this pitch with a strong focus on alignment. You might say the Manhattan Project, profiled in the movie Oppenheimer, had a similar pitch. Many scientists on the project were pacifists or had objections to the bomb, but they worked feverishly at it anyway so that the technology would not fall into the wrong hands.

Now, we are witnessing a backlash to this pitch, where OpenAI’s stance of keeping their model weights private (they have every right to do so) is arguably leading to more progress on the backs of open source models. Llama 34B, the second largest open source LLM at the time of this writing (after Llama 70B), has recently been tuned to run fast on an M2 Ultra, and there is some speculation that adding a Mixture of Experts architecture to Llama could lead to GPT-4 levels of performance.
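For readers unfamiliar with the term: a Mixture of Experts layer routes each token through only a few “expert” feed-forward networks chosen by a small router, so total parameters grow without a proportional increase in per-token compute. The sketch below is a minimal, generic illustration of top-k routing, not Llama’s actual architecture or GPT-4’s (whose internals are unconfirmed); all names and shapes here are illustrative.

```python
import numpy as np

def moe_layer(x, gate_w, expert_ws, top_k=2):
    """Minimal top-k Mixture-of-Experts feed-forward layer (illustrative only).

    x:         (d,) input vector for one token
    gate_w:    (num_experts, d) router weights
    expert_ws: list of (d, d) weight matrices, one per expert

    Only the top_k experts by router score process the token, so
    compute scales with top_k rather than with num_experts.
    """
    scores = gate_w @ x                    # router logit per expert
    top = np.argsort(scores)[-top_k:]      # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs
    return sum(w * (expert_ws[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 4
x = rng.standard_normal(d)
gate_w = rng.standard_normal((num_experts, d))
expert_ws = [rng.standard_normal((d, d)) for _ in range(num_experts)]
out = moe_layer(x, gate_w, expert_ws)
print(out.shape)  # (8,)
```

The appeal for scaling is that adding experts increases capacity while each token still pays for only `top_k` expert forward passes, which is why MoE is a plausible route to stronger open models on fixed hardware budgets.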

In this line of work, researchers are motivated by releasing AGI to everyone all at once, the third step in an evolution:

  • Not releasing any model with instructions
  • Releasing an instruction tuned model via a chat interface
  • Releasing the full model weights

For ambitious projects, it’s a good idea to think through the core motivations of the people working on them and tune the pitch to address their deepest psychological objections to the work. On this front, we have to give Sam Altman and the leadership team at OpenAI credit for overcoming the objections of scientists circa 2017 and giving us all access to this great technology.