Ric Raftis consulting logo

Ric Raftis Consulting

The Open Source vs. Closed Source AI Debate – Aligning Safety and Accessibility


In a matter of days, the focus in the AI world has shifted from who is delivering the latest and greatest model to intense scrutiny of safety and alignment. This article has no intention of regurgitating all that news; instead, it will focus on the questions of safety, alignment and open source. But first, a recap of recent events to set the scene.

It started with the departure of Ilya Sutskever, with Jan Leike hot on his heels. Leike has since joined Anthropic (Claude), and several pundits in the AI space predict that Sutskever will follow him. In the meantime, Helen Toner, a former OpenAI board member, gave an interview to TED AI that, to my knowledge, offered the first real clarity around the sacking of Sam Altman last year. It was widely speculated that Toner engineered Altman's dismissal. Her comments in the interview pulled no punches, and it would appear there is no love lost between her and Altman.

Toner made several claims in the interview, including that the board found out about the release of ChatGPT via Twitter. If true, this raises serious questions about governance at OpenAI. Surprisingly, Toner also accused Altman of lying, terminology not usually heard in professional circles, and she made the claim in response to several questions. You can follow the links at the foot of the article for more information.

With that backdrop in mind, we now turn to the question of safety and alignment in AI. It is an important issue that must be addressed; the next question, of course, is who should address it. Before tackling those questions, let us consider the debate about open and closed source AI. Two of the major model providers, OpenAI (ChatGPT) and Anthropic (Claude), are closed source, while Meta (Llama) and X.ai (Grok) are open source. There are others, but these are adequate for the purposes of the discussion. There are proponents and arguments on both sides, but the issue is not necessarily that simple.

If the models are closed source, then only a select few know what is really going on behind closed doors, and those few have total control over the distribution and availability of the models. The primary argument these companies make appears to centre on safety, with calls for governments to regulate AI. There is also the risk of what is called 'regulatory capture': when government regulations are heavily influenced by the very companies they are supposed to regulate. The worry is that big tech companies could lobby and pressure governments into rules that actually protect incumbent interests while making it very difficult for new startups and competitors to enter the market.

It is like a wealthy neighbourhood association writing restrictive rules that keep housing costs high and prevent more affordable homes from being built, protecting its wealthy residents while excluding others. The regulation is supposed to serve the public good, but it ends up benefiting the insiders who had the most influence over crafting the rules. Let us not forget, either, that just because a government legislates and regulates does not mean it will follow its own rules. Remember Edward Snowden and Julian Assange?

Conversely, open source puts the power of the models in the public domain. Such a move creates full transparency about the model and also gives others the opportunity to improve it. Open source projects have been extremely successful, Mozilla and Joomla to name but two. The main argument against open source appears to be that it gives anybody access to these models. Again, a legitimate concern, but restricting access also has the potential to defeat one of the primary purposes of AI: the democratisation of knowledge and the spread of abundance to all people in the world, not just those who can afford it.

The answer must lie somewhere in between. If AI is closed source, the public cannot debate issues when they do not know what is in the models; open source provides the agency to have those discussions. Closed source may protect the human race from extinction, but if it is controlled by the government, are these the people you wish to trust anyway? Open source also allows the democratisation of AI, and the entire world should benefit from technologies that, if properly applied, can end suffering.

For me, it is not a binary question. On balance however, I support open source. I would much prefer to know what is going on and have the opportunity to debate the issues openly. Where do you stand? Let me know in the comments.

References:

