{"id":7,"date":"2025-01-15T20:14:34","date_gmt":"2025-01-15T20:14:34","guid":{"rendered":"https:\/\/blog.huby.ai\/?p=7"},"modified":"2025-01-27T03:17:48","modified_gmt":"2025-01-27T03:17:48","slug":"are-open-source-ai-models-safer-than-closed-source","status":"publish","type":"post","link":"https:\/\/blog.huby.ai\/?p=7","title":{"rendered":"Are open source AI models safer than closed source?"},"content":{"rendered":"Yesterday at an AI Optimization and Security event in San Francisco hosted by Rahul Parundekar &amp; Jonathan Bennion, a discussion on this topic ensued. My fellow genAI entrepreneurs saba imran (Cofounder, Khoj) and Saurabh Shintre (Founder, Realm Labs) made some good points:<br \/>&#8211; Open source models have publicly available weights and biases, so bad actors can study and exploit them more easily.<br \/>&#8211; Closed source models, lacking public transparency, may harbor more undiscovered vulnerabilities.<br \/>I feel that open source AI models aren&#8217;t the same as open source code. Open source code can be patched quickly, whereas changing a model requires a long training cycle. Changing the model&#8217;s interface layer to block vulnerabilities is a possible stopgap, but it isn&#8217;t really a solution.","protected":false},"excerpt":{"rendered":"<p>Yesterday at an AI Optimization and Security event in San Francisco hosted by Rahul Parundekar &amp; Jonathan Bennion, a discussion on this topic ensued. 
My fellow genAI entrepreneurs saba imran (Cofounder, Khoj) and Saurabh Shintre (Founder, Realm Labs) made some good points:&#8211; Open source models have publicly available weights and biases, so bad actors can [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":32,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[4,5,3],"class_list":["post-7","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-learnings","tag-ai-models","tag-model-security","tag-open-source-models"],"_links":{"self":[{"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/posts\/7","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7"}],"version-history":[{"count":1,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/posts\/7\/revisions"}],"predecessor-version":[{"id":8,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/posts\/7\/revisions\/8"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=\/wp\/v2\/media\/32"}],"wp:attachment":[{"href":"https:\/\/blog.huby.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.huby.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}