Title: Server request for the TUNiB LLM research project
Author: Soohwan Kim
Date posted: 2023/06/16
Summary
- Research on how to further train pre-trained LLMs for specific purposes.
- Research on how to optimize the use of GPU resources.
Background
- There are many open-source models available today (such as Polyglot). To get the most out of these models, you need to decide how to train them (Supervised Fine-Tuning, RLHF, etc.) and what data to train them on.
- There are also many methods (such as LoRA, quantization, and more) for training foundation models efficiently; a minimal sketch of one such approach follows this list.
- Encourage the public to use foundation models to improve their services.
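As a rough illustration of the parameter-efficient training methods mentioned above, the sketch below attaches LoRA adapters to a Polyglot checkpoint using the Hugging Face transformers and peft libraries. The polyglot-ko-1.3b name refers to the public release of the smallest size listed in this proposal; the LoRA hyperparameters (rank, alpha, dropout) are illustrative assumptions, not the project's actual configuration.

```python
# Minimal LoRA setup sketch for supervised fine-tuning of a Polyglot checkpoint.
# Assumes transformers and peft are installed; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/polyglot-ko-1.3b"  # smallest of the Polyglot sizes listed in this proposal
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Attach low-rank adapters; only the adapter weights are updated during SFT,
# which keeps optimizer state and gradient memory small.
lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices (assumed value)
    lora_alpha=16,                       # scaling factor for the adapter outputs (assumed value)
    target_modules=["query_key_value"],  # fused attention projection in GPT-NeoX-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the adapter weights receive gradients, a setup like this is one way the requested GPUs could be stretched to cover the larger Polyglot sizes as well.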
Scope of Work
- Goals
  - Further training (using SFT, RLHF, etc.) of pre-trained LLMs (Polyglot 1.3b, 3.8b, 5.8b, 12.8b)
  - Optimization of foundation models to ensure efficient inference for AI services
- Features
  - Further-trained versions of the pre-trained LLMs
  - Optimized foundation models
Specification
The project will focus on leveraging the available GPU resources for further training and optimization. This involves designing and implementing training pipelines, fine-tuning procedures, and optimization strategies. The models will be evaluated on performance metrics including language understanding, generation quality, inference speed, and resource efficiency.
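As one concrete (assumed, not prescribed) way to measure the inference-speed and resource-efficiency metrics above, the helper below times text generation and records peak GPU memory. It expects a causal LM and tokenizer already loaded onto a CUDA device; the function name, prompt, and token counts are illustrative.

```python
# Sketch of a simple inference benchmark: tokens/second and peak GPU memory.
# Assumes a causal LM and tokenizer already placed on a CUDA device.
import time
import torch

def measure_generation(model, tokenizer, prompt: str, max_new_tokens: int = 128):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
    return {
        "tokens_per_second": new_tokens / elapsed,
        "peak_memory_gb": torch.cuda.max_memory_allocated() / 1e9,
    }

# Hypothetical usage: print(measure_generation(model, tokenizer, "인공지능이란"))
```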
Training work will involve selecting appropriate training data, defining training objectives, and implementing training algorithms. In addition, model optimization techniques (e.g. quantization) will be explored and implemented to improve the efficiency of the foundation models.
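One optimization technique consistent with this description is weight quantization at load time. The sketch below loads a Polyglot checkpoint with 8-bit weights via transformers and bitsandbytes, which roughly halves the fp16 weight footprint; the chosen model size and Korean prompt are illustrative, not project decisions.

```python
# Sketch of loading a Polyglot checkpoint with 8-bit weight quantization for
# memory-efficient inference. Assumes transformers, accelerate, and bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "EleutherAI/polyglot-ko-5.8b"  # illustrative size
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # int8 weights instead of fp16
    device_map="auto",  # place layers across the available GPUs automatically
)

prompt = "인공지능이란"  # "Artificial intelligence is ..." (Korean prompt for a Korean model)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```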
The ultimate goal is to deliver further trained LLMs and optimized foundation models that can be utilized to enhance AI services and improve their accessibility and performance.
Request
- Description: 2 workstations for the research project
- Resource type
  - DGX (80G A100 x 8) Workstation
  - DGX (40G A100 x 8) Workstation
- Amount: 2
- Date: ASAP
- Impact: The TUNiB LLM research project has the potential to yield the following effects.
- Enhanced Foundation Model: By investing in the development and optimization of the foundation model, we can expect a significant improvement in its performance, capabilities, and accuracy. This will lay a solid groundwork for further advancements in the field of AI, enabling more robust and sophisticated applications.
- Optimized Foundation Model: Through these efforts, we aim to create an optimized and highly efficient foundation model. This optimization will result in a model that requires fewer computational resources and delivers faster inference times, making it more accessible and cost-effective for users.
- Easy Adoption of AI Services: The advancements in the foundation model will facilitate the widespread adoption of AI services by the general public. With a more optimized and user-friendly model, individuals and businesses will find it easier to integrate AI technologies into their daily lives, opening up new possibilities for innovation and problem-solving.
Targets
- Improved Model Performance: The research project targets the enhancement of the foundation model’s performance metrics, including accuracy, efficiency, and scalability. The goal is to outperform existing models and establish the research project’s model as an enhanced solution in the field of AI.
- Practical Application and Impact: The research project seeks to bridge the gap between theoretical advancements and practical applications of the foundation model. It aims to develop use cases and demonstrations that showcase the real-world impact and value of the model.
Squad Background
Introduce the team and the members’ interests, backgrounds, time commitments, etc.
- A team of NLP engineers from TUNiB.
Voting
The voting period will be between 2 to 7 days. Please send the voting options indicating if they are for or against your proposal. If available, include all links to previously held surveys and/or voting (i.e. on Discord).
- Examples: Approve/Disapprove or Yes/No
- An option such as “support the proposal but it needs revision” may be offered, but it will count towards disapproval of funding the project.
*Snapshots: We use snapshot.io as the official voting platform. Once the proposal gains enough approvals, it will be promoted from Discord to Forum and finally to Snapshot. For more information on the voting process, refer to this document.