What can possibly go wrong when you embed someone else’s AI models in your systems? This episode of the AI Today podcast aims to answer this question and to explore open source AI as an alternative. Despite the increasingly walled gardens that Large Language Models (LLMs) such as OpenAI’s ChatGPT are becoming, organizations keep creating and embedding AI solutions powered by third-party models they have little visibility into and little control over.
With the recent surge of interest in generative AI, mega tech companies are in a race for AI dominance. There are only two ways to dominate with new technology: build or buy. However, the industry is moving far too fast to build your way to dominance. In this podcast, we dig into why mega tech companies have established venture arms to invest in the latest startups, and why the FTC has launched a major inquiry into generative AI, focusing on investments and partnerships among key tech players like Alphabet, Amazon, and Microsoft.
Is OpenAI no longer open source?
Given its name, it would be fair to assume that OpenAI is in fact open, and open source. Despite the word “Open” in its name, there’s very little open about OpenAI. You can access its models via an API call and get some sense of what went into building them, but you can’t see the training data and, more importantly, you can’t see the model weights or architecture details, so you can’t configure the model as you like. You also can’t run these models on your own platforms.
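To make the distinction concrete, here is a minimal sketch of what API-only access looks like in practice: your application assembles a request and sends it off, and the weights never leave the vendor’s servers. The payload shape follows OpenAI’s publicly documented chat completions API; the model name is illustrative, and the actual HTTP call (which needs an API key) is elided.

```python
import json

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Assemble the JSON body for a hosted chat-completion call.
    This payload is the full extent of your control -- you never
    touch the training data or the model weights themselves."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize open source licensing in one sentence.")
# In practice this body would be POSTed to the vendor's API endpoint
# with an Authorization header carrying your API key.
print(json.dumps(payload, indent=2))
```

Everything beyond this request/response boundary -- weights, training data, fine-tuning internals -- stays behind the vendor’s wall.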
The most popular LLMs, OpenAI’s ChatGPT and Google Bard, are proprietary. That means they are owned by a company and can only be used by those who accept the terms of a free license or pay licensing fees to embed them in their solutions. These licenses grant some capabilities but also impose restrictions on how the LLM may be used.
Why open source AI?
Open source LLMs have a few major advantages: reduced vendor dependency (avoiding so-called vendor lock-in), enhanced data security and privacy, cost savings, transparency, improved customization, and active community support. We dig into each of these topics in greater detail in the podcast.
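The vendor lock-in point can be illustrated with a common design pattern: put a thin abstraction layer between your application and the model provider, so that swapping a hosted proprietary model for an open-weight one running on your own hardware is a one-line change. A minimal sketch -- all class and function names here are illustrative, and the actual model calls are stubbed out:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Minimal interface your application codes against."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedAPIBackend(LLMBackend):
    """A proprietary model behind a vendor API (real call elided)."""
    def complete(self, prompt: str) -> str:
        # In practice: an HTTPS call to the vendor's completion endpoint.
        return f"[hosted response to: {prompt}]"

class LocalOpenModelBackend(LLMBackend):
    """An open-weight model running on your own infrastructure (elided)."""
    def complete(self, prompt: str) -> str:
        # In practice: inference against locally loaded open weights.
        return f"[local response to: {prompt}]"

def answer(backend: LLMBackend, prompt: str) -> str:
    # Application code never names a vendor, so the backend is swappable.
    return backend.complete(prompt)
```

With this structure, moving from a proprietary API to an open source model means changing which backend you instantiate, not rewriting your application.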
What are the risks of open source AI?
At this point, you may be thinking that open source sounds like a great option. But what are the downsides? Large Language Models are resource-intensive: they require significant computational power for training and operation, which can be a barrier for individuals and organizations with limited resources.
After listening to the podcast, you be the judge. Are you ready to move to open source AI platforms?