Proposal for Providing GPU Resources for Korean Large Language Model (LLM) Research

Title: Proposal for Providing GPU Resources for Korean Large Language Model (LLM) Research

Author: Kichang Yang, kevin-ai

Date posted: 2023/09/24

Summary

The rapid advances in natural language processing and AI have highlighted the significance of LLMs in applications ranging from machine translation to chatbots and sentiment analysis. As demand for Korean-language AI solutions grows, we will research Korean LLMs to strengthen the Korean AI ecosystem. Our goal is to find a Korean LLM that can be deployed not only as various simple AI bots but can also evolve into dynamic AI agents applied in Web3 projects.

Mission and Value Alignment

How does this project help the AIN DAO achieve its mission and align with its values?

Having an efficient and robust GPU infrastructure is crucial for accelerating LLM research. AI Network can help us by providing the necessary resources to achieve this goal.

We aim to find and verify an open-source LLM that can be applied to various Web3 projects, including NFTs. Our project will deliver a fine-tuned model that inspires Web3 creators in the AIN DAO who want to build AI services.

Background

What motivated you to write this proposal? Was there a problem to be solved or an opportunity? Why do you think it’s a good idea?

Insert related links (Forum posts, Discord discussions, etc.)

  • We are excited about the potential for our Korean LLM to expand in several ways. LLM research has traditionally focused on English, and big tech companies have controlled most Korean LLM research. We aim to address these issues and provide a Korean LLM open to all by fine-tuning it for problem-solving in Web2 applications and for new advances in Web3.

Scope of Work

What is within the scope of this proposal? What objectives, goals, features, and deliverables do you propose achieving with this proposal?

Write what is realistically possible within the budget. You can set out the budget for only some of the projects. You may apply for additional funding as the idea and the project develop. Start with an MVP and incrementally develop into V1, V2, and so on!

  • Deploy an inference server
  • Instruction tuning with the SFT (supervised fine-tuning) algorithm

Specification

Describe your project in detail. Addressing any feedback or suggestions and giving numbers and factual information will increase your chance of getting your proposal approved. This section has no fixed structure, so feel free to be creative.

  1. SFT an open-source LLM with a limited number of GPUs: build a healthcare-domain LLM with fewer than 8 GPUs using the QLoRA or adapter algorithms with PEFT.
  2. Build a healthcare dataset using LLaMA-2 on 8x A100 GPUs.
  3. Use an efficient inference server: deploy the server with an accelerated, lightweight inference engine such as DeepSpeed MII.
  4. Share the LoRA weights: like the LoRA weights on CivitAI ( https://civitai.com/models/63556 ), share our models as plug-and-play modules for well-known open LLMs such as Polyglot-Ko and LLaMA.
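The QLoRA/PEFT approach in item 1 rests on the LoRA idea: instead of updating a full frozen weight matrix W, train only a low-rank update, computing y = x(W + (alpha/r)·A·B) where A and B have rank r. A minimal pure-Python sketch (not the actual `peft` library; the dimensions and rank here are illustrative, not the proposal's actual training configuration):

```python
# Sketch of the LoRA low-rank update behind QLoRA/PEFT fine-tuning.
# A real run would use Hugging Face `peft` on a quantized base model;
# this only illustrates why so few GPUs are needed: the trainable
# parameter count shrinks dramatically.

def matmul(X, Y):
    """Plain-Python matrix multiply: X is m x k, Y is k x n."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ W + (alpha / r) * (x @ A @ B).

    W (d_in x d_out) stays frozen; only A (d_in x r) and
    B (r x d_out) are trained. The full update A @ B is never
    materialized during the forward pass.
    """
    base = matmul(x, W)                # frozen base-model path
    low = matmul(matmul(x, A), B)      # low-rank adapter path
    scale = alpha / r
    return [[base[i][j] + scale * low[i][j] for j in range(len(base[0]))]
            for i in range(len(base))]

# Parameter savings for one hypothetical 1024x1024 projection at rank 8:
d_in, d_out, r = 1024, 1024, 8
full_params = d_in * d_out            # full fine-tuning: 1048576
lora_params = d_in * r + r * d_out    # LoRA rank 8:       16384 (~64x fewer)
print(full_params, lora_params)       # prints: 1048576 16384
```

The same structure explains item 4: because only the small A and B matrices differ per task, they can be shared as plug-and-play modules and merged into any copy of the same base model.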

Timeline

  • Oct - Nov : Data augmentation for SFT training (if needed)
  • Nov - Dec : Iterative development (integration) & evaluation (inference / training / evaluation / model selection)

Requirement

Write a detailed breakdown of your project’s requirements.

  • Please provide the machine specifications you are requesting below.
    1. How many GPUs do you need, and what type? 8x A100 80G
    2. What is the machine used for? Researching LLM models

Success metrics and KPIs

What are the project’s success metrics and KPIs? How will the project’s success be measured?

Our goal is to achieve one or more of the following:

  • Scalability and adaptability: apply the model to AI Network projects, aiming to serve more than 100 users or projects through an API server or demo running on the funded GPU(s), including plug-and-play modules.
  • Impressions: release research reports (papers) on major research platforms/communities such as arXiv.org

Contribution

Please explain how it contributes to the AI network ecosystem. AI Network receives its resources from ecosystem participants. Therefore, please consider ways to pay it forward and provide them. For example…

  • We are happy to assist with any AI Network project that uses our research, for example by creating a chatbot or AIN-token-connected services.
  • We will create a generative AI demo server, similar to ChatGPT, that members of the AI Network ecosystem can easily use with this model. The server will be free to access for ecosystem members who wish to experience generative AI.
  • We will release our research paper to major research platforms/communities such as arXiv.org to promote the vision and mission of the AI Network.
    • AI developers around the world will become aware of AI Network, creating an opportunity to attract them to the AI Network developer community.
    • This will demonstrate the immense value the AI Network’s collaborative AI infrastructure project can provide.

Brand Usage

If the project will use the AIN DAO brand outside the DAO, explain in detail how the brand will be used and add an “AIN DAO” tag to the project.

Squad Background

Introduce the team and the members’ interests, backgrounds, time commitments, etc.

Voting

The voting period will be between 2 and 7 days. Please set the voting options indicating whether they are for or against your proposal. If available, include all links to previously held surveys and/or votes (e.g., on Discord).

  • Examples: Approve/Disapprove or Yes/No
  • An option like “support the proposal but it needs revision” may be offered, but it will count toward disapproval of funding the project.
  • Snapshots: We use snapshot.io as the official voting platform. Once the proposal gains enough approvals, it will be promoted from Discord to Forum and finally to Snapshot. For more information on the voting process, refer to this document.

Thanks for your proposal; let me start round 1 of voting now.

This proposal was approved in the second round. We will contact you directly for the next steps.

  1. Discord
  2. Snapshot

After reviewing the proposal, we will provide 4x A100 GPUs for 3 months this time. Depending on the results in 3 months, we may reissue Runo. Runo is scheduled to issue 24 high-end GPUs.

Congratulations! All 24 of the Korean LLM Runo sold out last Friday. :slight_smile: We will contact you directly to provide the resources.