Synopsis: Ben Cotton, head of community for Kusari, delves into the ongoing debate over the definition of "open AI models." The discussion highlights the challenges of applying the traditional open source concept to AI, particularly because AI systems include training data and model weights in addition to code. Cotton emphasizes that while efforts are underway to create an open source AI definition, the complexity of AI systems means that new terminology and frameworks, such as different degrees of openness, may be necessary for the community to address bias, licensing, and transparency issues effectively.

In this interview, Mike Vizard speaks with Ben Cotton, head of community for Kusari, about the complexities of defining “open AI” and the challenges of adopting an open source approach in the AI space. Cotton explains that while open source is well defined for software, translating that concept to AI, where models rely on both software and vast amounts of training data, complicates matters. A significant challenge arises from the data’s influence on a model’s output, which makes transparency critical for assessing bias and reliability. However, restrictions on data access often prevent a full view into how a model functions, creating tension in the open AI community.

Cotton discusses the Open Source Initiative’s (OSI) efforts to establish a clear definition of open source AI, which may initially follow a simple yes/no framework. He notes, however, that AI’s complexity might demand a more nuanced approach, potentially involving labels that indicate openness in specific aspects of a model, such as its data or weights. As the open AI community navigates this process, some developers worry about protecting proprietary interests, fearing revenue loss if others fork their models. The debate over definitions has divided the community, but Cotton believes that consensus, whether in support or opposition, will eventually guide the future of open source AI.