Hugging Face, the startup that offers developers a GitHub-like coding environment for creating open-source AI applications, will now let users create their own customized generative AI chatbots, giving them an alternative to OpenAI and its GPTs.
The organization this month launched Hugging Chat Assistants, a service that lets developers create their own open-source chatbots in two clicks using whatever open large language model (LLM) they want. It comes three months after OpenAI, which supercharged the generative AI space with the release of ChatGPT in November 2022, unveiled a platform for creating custom chatbots based on the company's GPT-4 LLM.
Users of OpenAI’s platform can make their custom chatbots publicly available in the vendor’s GPT Store, which opened last month.
Julien Chaumond, co-founder and CTO of Hugging Face, wrote in a post on X (formerly Twitter) that the GPT Store was “kinda sad” and added that he wished “the Open Source AI community would build something way better.”
Ten days later, Hugging Face did just that, giving developers the tools to build custom versions of Hugging Chat and make them available to others.
Philipp Schmid, the tech lead at Hugging Face, announced the service in his own post on X.
Free and Open Chatbots
While both Chaumond and Schmid noted that Assistants are similar to OpenAI's GPTs, there are significant differences. Not only are Assistants seemingly easier to spin up, requiring only two clicks, but developers can also choose the LLM they want to use to build the chatbot, such as Llama 2 or Mixtral. With GPTs, users must use OpenAI's platform and its LLMs: GPT-4, GPT-4 Vision, or GPT-4 Turbo.
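That flexibility is easiest to see in code. The sketch below is not Hugging Face's Assistant-builder itself; it assumes the huggingface_hub client library and the hosted Inference API, and simply shows how a developer can point at any open chat model on the Hub, here Mixtral, by changing a single model ID.

```python
# Minimal sketch (assumed setup, not the Assistants UI): query an open model
# hosted on the Hugging Face Hub through the Inference API client.
from huggingface_hub import InferenceClient

# Any open chat-tuned model on the Hub could be substituted here.
client = InferenceClient(model="mistralai/Mixtral-8x7B-Instruct-v0.1")

# Mixtral-Instruct expects the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain in one sentence what a custom chatbot assistant is. [/INST]"

# text_generation() sends the prompt to the hosted model and returns the completion.
reply = client.text_generation(prompt, max_new_tokens=120)
print(reply)
```

Swapping in Llama 2 or another open model is a one-line change to the model ID, which is the point Chaumond and Schmid emphasize over the GPT-4-only options on OpenAI's platform.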
In addition, Assistants are free to use for both the creator and the user. Those using OpenAI's GPT platform have to choose from three paid tiers, with ChatGPT Plus being the cheapest at $20 a month. ChatGPT Team costs $25 a month, while the cost of ChatGPT Enterprise, which includes features such as high-speed access to GPT-4, tools like the DALL-E text-to-image generator, enterprise-level security, and an expanded context window for longer inputs, is determined by the size of the organization.
Hugging Face seems to have a good understanding of user needs. According to Chaumond, in the three days after the company announced its new service, 4,000 Assistants were created and 1,500 users clicked on his own custom chatbot, dubbed Clone of HF CTO.
After the initial rush last year to embrace generative AI following the release of ChatGPT, developers and organizations have pushed to customize the tools, through steps such as training LLMs on proprietary corporate data and, in this case, creating chatbots that align more closely with personal or enterprise needs.
The demand can be seen in the growing number of chatbot builders popping up in the market, with names like Tidio, ManyChat, Chatfuel, Botsonic and CustomGPT.
Improvements on the Way
Hugging Face’s Chaumond said the company is looking at enhancements to Assistants based on feedback from initial users, including enabling developers to edit their Assistants through an API so that they can push up-to-date information to them.
Other possibilities include adding web search and retrieval-augmented generation (RAG), an approach that improves LLM responses by grounding them in custom data, to Assistants; generating Assistant thumbnails with AI; and allowing users to suggest changes to other users' Assistants.
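The RAG pattern itself is straightforward: retrieve the passages of custom data most relevant to a question, then prepend them to the prompt so the model answers from that material. The sketch below is a generic illustration of that pattern, not Hugging Face's implementation; the sentence-transformers embedding model and the toy document list are assumptions for the example.

```python
# Generic RAG illustration: embed custom documents, find the ones closest to
# the question, and stuff them into the prompt sent to the LLM.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

documents = [
    "Our support line is open Monday to Friday, 9am to 5pm CET.",
    "Refunds are processed within 14 days of the return being received.",
    "Enterprise customers get a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity, since vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))

# The retrieved context is prepended to the prompt; any LLM call could follow.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```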
Hugging Face, which launched in 2016, is looking to expand its capabilities. Late last month, the company announced a partnership with Google Cloud that allows developers to use Google Cloud's infrastructure for Hugging Face services as well as for training and serving Hugging Face models.
The company also has partnerships with such top-tier IT firms in the AI space as Amazon Web Services, Meta and IBM, and chip makers like AMD and Nvidia.