Proposal: GPUs for the TUNiB LLM Research Project

Title: GPU Request for the TUNiB LLM Research Project
Author: Soohwan Kim
Date posted: 2023/11/06

Summary

  • Research on leveraging pre-trained English language models (such as Llama2 and Mistral 7B) to develop Korean language capabilities.
  • Research on how to optimize the use of GPU resources.

Background

  • There are many open-source English language models available today (like Llama2 and Mistral 7B). To harness their full potential for Korean language development, it’s crucial to consider training strategies (Supervised Fine-Tuning, RLHF, etc.) and the choice of training data.
  • Additionally, various methods (e.g., LoRA and quantization) can be employed to train foundation models efficiently; a brief LoRA sketch follows this list.
  • We also aim to encourage the public to make use of foundation models to enhance their services.
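
As a concrete illustration of the parameter-efficient methods mentioned above, the following is a minimal LoRA fine-tuning sketch using Hugging Face PEFT. The base model and every hyperparameter here are illustrative assumptions, not choices made in this proposal.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  model_name = "meta-llama/Llama-2-7b-hf"   # assumed base model
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

  # LoRA trains small low-rank adapters instead of the full weight matrices,
  # which sharply reduces optimizer memory on a fixed GPU budget.
  lora_config = LoraConfig(
      r=16,                                 # adapter rank (assumed)
      lora_alpha=32,                        # scaling factor (assumed)
      target_modules=["q_proj", "v_proj"],  # Llama attention projections
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora_config)
  model.print_trainable_parameters()        # typically well under 1% trainable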

Scope of Work

  • Goals
    • Further training (e.g., SFT, RLHF) of pre-trained English language models (e.g., Llama2, Mistral 7B) to enhance their Korean language capabilities; an SFT sketch follows this list.
    • Optimizing foundation models for efficient AI service inference.
  • Features
    • Enhanced Korean language capabilities of pre-trained English language models.
    • Optimized foundation models for efficient inference.
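
As a sketch of what the further-training goal could look like in practice, here is a minimal supervised fine-tuning (SFT) loop using the Hugging Face Trainer. The dataset file, base model, and hyperparameters are assumptions for illustration only; RLHF would add separate reward-modeling and policy-optimization stages on top of such a pipeline.

  from datasets import load_dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  name = "meta-llama/Llama-2-7b-hf"         # assumed base model
  tok = AutoTokenizer.from_pretrained(name)
  tok.pad_token = tok.eos_token             # Llama ships without a pad token
  model = AutoModelForCausalLM.from_pretrained(name)

  # Assumed JSONL file with a "text" column of Korean instruction/response pairs.
  ds = load_dataset("json", data_files="korean_sft.jsonl")["train"]
  ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
              remove_columns=ds.column_names)

  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                             per_device_train_batch_size=4, bf16=True),
      train_dataset=ds,
      data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
  )
  trainer.train()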

Specification

The project will primarily focus on maximizing the utilization of available GPU resources to carry out further training and optimization tasks. This will involve designing and implementing training pipelines, fine-tuning strategies, and optimization techniques. The evaluation of models will be based on performance metrics, including language understanding, generation quality, inference speed, and resource efficiency.
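
For example, inference speed can be tracked as generated tokens per second with a simple timing harness like the one below; the model under test and the prompt are assumptions for illustration.

  import time
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  name = "mistralai/Mistral-7B-v0.1"        # assumed model under test
  tok = AutoTokenizer.from_pretrained(name)
  model = AutoModelForCausalLM.from_pretrained(
      name, torch_dtype=torch.bfloat16, device_map="auto")

  prompt = "안녕하세요, 오늘 날씨는"          # short Korean prompt (assumed)
  inputs = tok(prompt, return_tensors="pt").to(model.device)

  start = time.perf_counter()
  out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
  elapsed = time.perf_counter() - start

  new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
  print(f"{new_tokens / elapsed:.1f} tokens/sec")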

Selecting appropriate training data, defining training objectives, and implementing training algorithms are integral parts of this effort. Moreover, model optimization methods will be explored and implemented to enhance the efficiency of foundation models.
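
As one concrete optimization route the project could evaluate, the sketch below loads a model with 4-bit weight quantization via bitsandbytes; the model name and settings are illustrative assumptions.

  import torch
  from transformers import AutoModelForCausalLM, BitsAndBytesConfig

  bnb_config = BitsAndBytesConfig(
      load_in_4bit=True,                      # store weights in 4-bit NF4
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
  )
  model = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-7B-v0.1",            # assumed base model
      quantization_config=bnb_config,
      device_map="auto",
  )
  # In 4-bit, a 7B model's weights occupy roughly 4 GB, so a single 80 GB
  # A100 can host much larger batches or several model replicas at once.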

The ultimate objective is to deliver pre-trained English language models with enhanced Korean language capabilities, along with optimized foundation models, improving the accessibility and performance of AI services.

Request

  • Description: Request for 1 Workstation for the research project.
  • Resource type
    • DGX workstation (8 × NVIDIA A100 80 GB)
  • Amount: 1
  • Duration: 3 months
  • Impact: The TUNiB LLM research project is expected to yield the following effects.
    1. Enhanced Korean Language Capabilities: By investing in the development and optimization of pre-trained English language models for Korean, we anticipate significant improvements in their performance and accuracy. This will lay a robust foundation for advancements in AI, enabling more sophisticated and effective applications.
    2. Optimized Foundation Models: Through these efforts, we aim to create optimized and highly efficient foundation models. This optimization will result in models that require fewer computational resources and offer faster inference times, increasing accessibility and cost-effectiveness.
    3. Wider Adoption of AI Services: Advances in foundation models will facilitate broader public adoption of AI services. With more user-friendly and optimized models, individuals and businesses will find it easier to integrate AI technologies into their daily lives, opening up new opportunities for innovation and problem-solving.

Targets

  • Improved Model Performance: The research project aims to enhance the performance metrics of the foundation models, including accuracy, efficiency, and scalability. The goal is to surpass existing models on these metrics and establish the project’s models as leading solutions in the field of AI.
  • Practical Application and Impact: The research project aims to bridge the gap between theoretical advancements and the practical application of foundation models. It seeks to develop use cases and demonstrations that illustrate the real-world impact and value of the models.

Contribution

  • Additional rewards for TUNiB Runo holders who provide user feedback on TUNiB’s chatbot, DearMate.
  • We intend to provide LLM chatbot services to the AIN NFT marketplace, making our services accessible to all AIN participants.

Squad Background

  • A team of NLP engineers from TUNiB.

Voting

The voting period will be between 2 and 7 days. Voting options should indicate whether they are for or against the proposal. If available, include links to previously held surveys and/or votes (e.g., on Discord).

  • Examples: Approve/Disapprove or Yes/No
  • Something like “support the proposal but needs revision” may be offered as an option, but it will count toward disapproval of funding the project.

*Snapshot: We use snapshot.io as the official voting platform. Once the proposal gains enough approvals, it will be promoted from Discord to the Forum and finally to Snapshot. For more information on the voting process, refer to this document.

Thank you for submitting the proposal. DAO voting has just started for the GPU resource support.

Since this proposal was approved in both the 1st and 2nd rounds, it is accepted.

  1. Discord
  2. Snapshot

We will open 32 High GPU Runos very soon.