Large language models like GPT-4 excel at a wide range of natural language tasks. While open-source models are generally less capable than GPT-4 out of the box, with the right dataset and finetuning they can potentially match its performance on specific tasks.
Finetuning a Large Language Model (LLM) means continuing to train an existing pretrained model so that it specializes in a particular task or domain. This step is often crucial for high performance: it lets the model adapt to nuances and contexts that its initial training did not cover.
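To make this concrete, here is a minimal sketch of finetuning using the Hugging Face `transformers` and `datasets` libraries. The FLAN-T5 checkpoint and the toy prompt/response pairs are illustrative stand-ins for a real curated dataset, not part of the workshop materials.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# A small FLAN-T5 checkpoint so the sketch runs on modest hardware.
model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy prompt/response pairs standing in for a real task-specific dataset.
train_data = Dataset.from_dict({
    "prompt": [
        "Translate to French: Good morning.",
        "Translate to French: Thank you very much.",
    ],
    "response": ["Bonjour.", "Merci beaucoup."],
})

def preprocess(batch):
    # Tokenize inputs and targets; the labels drive the cross-entropy loss.
    inputs = tokenizer(batch["prompt"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["response"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = train_data.map(
    preprocess, batched=True, remove_columns=train_data.column_names
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="flan-t5-finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    # Pads inputs and labels per batch so variable-length examples collate cleanly.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```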
At Further 2023, we will finetune various open-source LLMs such as Llama 2 7B/13B, Mistral 7B, and FLAN-T5 on a task of your choice. We will have GPUs available for you to try out your ideas!
We will learn how to curate the right datasets and use them in different finetuning paradigms such as full finetuning and parameter-efficient finetuning (PEFT); a short PEFT sketch follows.
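As an illustration of the PEFT side, this sketch uses Hugging Face's `peft` library to attach LoRA adapters to a causal LM. The Mistral checkpoint and the hyperparameters (rank 8, alpha 16, adapters on the attention projections) are illustrative choices, not prescriptions.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Mistral 7B as an example base model; in practice you would likely load it
# in reduced precision or pick a smaller checkpoint to fit in memory.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# LoRA configuration: adapter rank, scaling, dropout, and which modules
# receive adapters (q_proj/v_proj are the attention projections in
# Llama/Mistral-style blocks).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

model = get_peft_model(model, lora_config)

# Only the small adapter matrices are trainable; the base weights stay
# frozen, which is what makes PEFT so much cheaper than full finetuning.
model.print_trainable_parameters()
```

The wrapped model can then be passed to the same training loop as in the full-finetuning sketch above; only a fraction of a percent of the parameters receive gradient updates.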