

LLAMA 3 Hackathon
⚙️ Fine-tune LLAMA 3, a large language model by Meta, to surpass the performance of LLAMA 3-8b-instruct in a specific domain or task
✍️ Receive support and feedback from industry experts
🤝 Join individually or form a team with other participants
Date and time is TBD
Online
👀 Looking for a team? Join the Telegram chat "Ogon AI Hackathons Community" to find a team
Hackathon Challenge
Choose your focus for fine-tuning from the following areas, or feel free to explore beyond:
Domain-Specific Enhancement: Fine-tune the model on specialized texts like legal documents, medical journals, or engineering papers to boost its performance in your chosen field.
Creative Text Generation: Work on literary styles such as poetry or prose to enhance the model’s creativity.
Instruction-Based Learning: Improve how the model understands and responds to instructions by using a curated set of question-answer pairs.
Language Inclusivity: Boost the model’s proficiency in languages that were underrepresented in the initial training set.
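For instruction-based learning in particular, training data is usually rendered with the model's chat template before fine-tuning. Below is a minimal sketch (plain Python, no external libraries) of turning curated question-answer pairs into LLAMA 3-style training strings; the special tokens follow the published LLAMA 3 prompt format, but you should verify them against the tokenizer you are given:

```python
# Sketch: format question-answer pairs with the LLAMA 3 chat template.
# The special tokens below follow the published LLAMA 3 prompt format;
# verify against your tokenizer's chat template before training.

def format_example(question: str, answer: str) -> str:
    """Render one QA pair as a LLAMA 3 chat-formatted training string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{answer}<|eot_id|>"
    )

pairs = [
    ("What is the capital of France?", "Paris."),
    ("What is 2 + 2?", "4."),
]
dataset = [format_example(q, a) for q, a in pairs]
```

In practice you would tokenize these strings and mask the loss on the user turns, so the model is trained only on the assistant's answers.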
Who can participate
Participants who have reached the age of majority in their country of residence.
Prizes:
______________
Ideas and examples for task outcomes
Dataset Description: Detail the dataset used for fine-tuning. If the dataset is programmatically generated, include the scripts used for its generation.
Fine-Tuning Script: Provide the script used for the fine-tuning process.
Model Weights: Submit the weights of the fine-tuned model.
Evaluation Dataset: Include the dataset on which the model’s output was evaluated.
Model Outputs: Present the outputs of both the fine-tuned and the instruct model on the evaluation dataset.
Reproducibility Package: Offer a detailed setup guide and steps necessary to replicate the results.
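For the model-outputs and evaluation deliverables, a simple side-by-side comparison of the fine-tuned and instruct models on the evaluation dataset is often enough to start with. The sketch below uses exact-match accuracy as a placeholder metric on toy data; the organizers' actual metrics may differ:

```python
# Sketch: compare fine-tuned vs. instruct-model outputs on an evaluation
# set using exact-match accuracy (placeholder metric; toy data).

def exact_match_accuracy(outputs, references):
    """Fraction of outputs that exactly match the reference answer."""
    matches = sum(o.strip() == r.strip() for o, r in zip(outputs, references))
    return matches / len(references)

references        = ["Paris", "4", "H2O"]
instruct_outputs  = ["Paris", "5", "water"]  # toy baseline outputs
finetuned_outputs = ["Paris", "4", "H2O"]    # toy fine-tuned outputs

baseline_score = exact_match_accuracy(instruct_outputs, references)
tuned_score = exact_match_accuracy(finetuned_outputs, references)
```

Reporting both scores, together with the raw outputs of each model, makes the claimed improvement easy for judges to verify.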
Judging Criteria
Novelty: Originality of the approach and creativity in addressing the problem.
Business Value: The potential impact and practical value of the solution; relevance and practical usefulness of the fine-tuned model.
Performance: Improvement in performance compared to the baseline instruct model, assessed through specific metrics provided by the hackathon organizers.
Reproducibility: Ease of replicating the results using the provided scripts and documentation.
Presentation: The clarity and effectiveness of the project presentation.
Resources provided to participants:
Access to a cloud server equipped with an RTX-4090 GPU.
Base model (LLAMA 3-base-8b) and instruction-tuned model (LLAMA 3-8b-instruct) weights.
Expert assistance via a designated Telegram chat throughout the hackathon.
Timeline:
The hackathon dates will be determined later.
Support:
Participants can seek guidance and ask technical questions through the dedicated Telegram chat. Regular check-ins and tips will be provided to assist participants in navigating the challenges of model fine-tuning.
