Yesterday at an AI Optimization and Security event in San Francisco that was hosted by Rahul Parundekar & Jonathan Bennion, a discussion on this topic ensued. My fellow genAI entrepreneurs saba imran (Cofounder at Khoj) and Saurabh Shintre(Founder Realm Labs) had some good points:
– Open source models have publicly available weights/biases, so bad actors can potentially exploit them easily.
– Closed source models because of lack of public transparency may have more vulnerabilities.
I feel that open source AI models aren’t same as open source code. Open source code can be quickly changed whereas changing a model requires a long training cycle. While changing the model interface layer to block vulnerabilities could be a potential option, it may not really be a solution.
Are open source AI models safer than closed source?

Leave a Reply