Result sharing for the project: Democratizing a ChatGPT-like service with publicly available LLMs

This project has been going well so far. Since the project timeline has now passed, let me share the results:

  • You can find the project proposal in the proposal category.
  1. I made more than 10 SFT (Supervised Fine-Tuned) models, including Alpaca-LoRA (7/13/30/65B), GPT4-generated Alpaca-LoRA (7/13B), and EvolInstruct Vicuna-LoRA (7/13B). Additionally, I have experimented with fine-tuning OpenLLaMA and StarCoder, although the results are not good enough yet.
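To make the adapter naming above concrete, here is a minimal, dependency-free illustration of the core LoRA idea these models rely on: the frozen base weight W is augmented with a low-rank update B·A scaled by alpha/r, so only the small A and B matrices are trained. The function names and the toy sizes are illustrative, not any library's API.

```python
# Minimal sketch of the LoRA update: W' = W + (alpha / r) * (B @ A).
# Only A (r x in) and B (out x r) are trained; W stays frozen.

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def lora_effective_weight(w, a, b, alpha, r):
    """Return the merged weight W + (alpha / r) * (B @ A)."""
    scale = alpha / r
    delta = matmul(b, a)  # (out, r) @ (r, in) -> (out, in)
    return [[w[i][j] + scale * delta[i][j]
             for j in range(len(w[0]))]
            for i in range(len(w))]

# Toy 2x2 weight with rank r = 1: the trainable parameters shrink
# from out*in values to r*(out + in).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # shape (r, in)  = (1, 2)
B = [[0.5], [0.25]]         # shape (out, r) = (2, 1)
merged = lora_effective_weight(W, A, B, alpha=2, r=1)
print(merged)  # -> [[2.0, 2.0], [0.5, 2.0]]
```

The scaling factor alpha/r is what lets you change the rank r without retuning the learning rate, which is why it appears in most LoRA configurations.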

  2. Along the way, I created GradioChat, which is very similar to Hugging Face's HuggingChat but built entirely on top of Gradio. This side project lets users work with multiple chat histories with saving/loading features. It is also particularly helpful when you don't want to host models on different servers but within a single app.
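The multiple-history feature described above can be sketched with nothing but the standard library: keep each conversation as a list of (user, bot) pairs in a dict keyed by chat name, and persist the whole store as JSON. The function and file names here are hypothetical, not GradioChat's actual API.

```python
import json
from pathlib import Path

# Hypothetical sketch of saving/loading multiple chat histories.
# Each history is a list of [user, bot] message pairs, keyed by name.

def save_histories(histories, path):
    """Persist all chat histories to a JSON file."""
    Path(path).write_text(json.dumps(histories, indent=2))

def load_histories(path):
    """Load chat histories, returning an empty store if none exist."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else {}

# Usage: two independent conversations in one store.
store = {
    "trip planning": [["Where should I go?", "How about Lisbon?"]],
    "coding help":   [["Fix my loop?", "Use enumerate()."]],
}
save_histories(store, "chats.json")
restored = load_histories("chats.json")
print(sorted(restored))
```

Keeping the store as plain JSON is what makes switching between conversations in one app cheap: the UI only swaps which key it renders.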

  3. Along the way, I have also massively updated the PingPong project to support more than 10 different prompting styles depending on the type of SFT model. Basic functionality, such as how many recent conversation turns to look up, has been verified to work successfully.
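The two ideas in that item — a per-model prompt template and a window over the most recent turns — can be sketched as below. The template strings and names are illustrative stand-ins, not PingPong's actual ones.

```python
# Sketch of per-model prompt styles plus a recent-turn window.
# Templates here are hypothetical examples of two common SFT styles.

TEMPLATES = {
    "alpaca": "### Instruction:\n{user}\n\n### Response:\n{bot}",
    "vicuna": "USER: {user}\nASSISTANT: {bot}",
}

def build_prompt(style, turns, new_message, window=2):
    """Render the last `window` turns plus the new message in one style."""
    template = TEMPLATES[style]
    rendered = [template.format(user=u, bot=b) for u, b in turns[-window:]]
    rendered.append(template.format(user=new_message, bot=""))
    return "\n".join(rendered)

turns = [("hi", "hello"), ("2+2?", "4"), ("thanks", "welcome")]
prompt = build_prompt("vicuna", turns, "bye", window=1)
print(prompt)
```

With `window=1` only the last exchange survives, which is how a fixed look-up count keeps the prompt inside the model's context budget.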

  4. Based on the GradioChat and PingPong projects, the LLM As Chatbot project has been updated accordingly, and it has reached 2.4k GitHub stars. It has gained almost 1k stars since this proposal was approved, which I think is a pretty stunning result.

  5. LLM As Chatbot is basically designed for personal use only. For those who want to serve multiple models within the same system, such as a DGX machine with lots of VRAM, I modified LLM As Chatbot and created a sibling project for this purpose. The README is not properly updated yet, but it has been tested with a selected user group (my Facebook friends). With all the efforts so far, the LLM As Chatbot project has become one of the base images (frameworks) on a GPU cloud VM provider. All the models are pre-downloaded to an external volume, which is attached automatically when you create an instance, so you don't have to waste time downloading huge large language models yourself every time. Just click provision, and you are all set to go.
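Serving several models from one high-VRAM machine, as item 5 describes, boils down to loading each model once into a registry and routing requests by name. The sketch below uses dummy callables in place of real checkpoints; the class and model names are placeholders, not the sibling project's implementation.

```python
# Hedged sketch of multi-model serving on a single machine:
# load each model eagerly into a registry, then dispatch by name.

class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, name, loader):
        """Load a model once and keep it resident in memory."""
        self._models[name] = loader()

    def generate(self, name, prompt):
        """Route a request to the named pre-loaded model."""
        if name not in self._models:
            raise KeyError(f"model '{name}' is not loaded")
        return self._models[name](prompt)

# Dummy "models" standing in for real checkpoints.
registry = ModelRegistry()
registry.register("alpaca-lora-7b", lambda: (lambda p: f"[alpaca] {p}"))
registry.register("vicuna-13b",     lambda: (lambda p: f"[vicuna] {p}"))
reply = registry.generate("vicuna-13b", "hello")
print(reply)  # -> [vicuna] hello
```

Eager loading trades startup time for steady-state latency, which is the sensible trade-off when, as above, the weights are already sitting on an attached volume.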
