Large Language Models & Generative Text

Unlock the potential of LLMs with our intensive workshop. Learn hands-on techniques for fine-tuning LLMs with your own data. Elevate your technical skills into the generative AI age.

Develop an in-depth understanding of large language models (LLMs) and the transformer architecture

Explore the Hugging Face transformers library, applying LLMs for generative and prediction tasks

Fine-tune generative text LLMs using your own custom datasets for specific use cases

Apply parameter-efficient fine-tuning (PEFT) approaches and model quantization to fine-tune LLMs on consumer hardware
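As a taste of the Hugging Face transformers library covered above, a generative task can be run in a few lines with the `pipeline` API. This is a minimal sketch; the model choice (`gpt2`) and prompt are illustrative, not the workshop's materials, and the snippet downloads model weights on first run.

```python
from transformers import pipeline

# Build a text-generation pipeline with a small, freely available model.
# Greedy decoding (do_sample=False) keeps the output deterministic.
generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Large language models are",
    max_new_tokens=20,
    do_sample=False,
)

# The pipeline returns a list of dicts, one per generated sequence.
print(result[0]["generated_text"])
```

The same `pipeline` entry point also covers prediction tasks such as `"sentiment-analysis"` or `"summarization"` by swapping the task string and model.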
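The core idea behind a popular PEFT approach, LoRA, can be sketched with plain NumPy: the large pretrained weight matrix is frozen, and only a small low-rank update is trained. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from the workshop.

```python
import numpy as np

# Hypothetical sizes for illustration only.
d, r = 1024, 8                  # model dimension and LoRA rank
alpha = 16                      # LoRA scaling hyperparameter
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable, zero-initialized

# Effective weight during fine-tuning: only A and B receive gradients,
# so the update starts at zero and the base model is untouched.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} of {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Here only about 1.6% of the layer's parameters are trainable, which is what makes fine-tuning feasible on consumer GPUs.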
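Model quantization, the other memory-saving technique listed above, can likewise be illustrated in miniature: store weights as 8-bit integers plus a scale factor instead of 32-bit floats. This absmax scheme is a simplified sketch of the idea, not the exact algorithm any particular library uses.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # a float32 weight tensor

# Absmax 8-bit quantization: scale so the largest magnitude maps to 127.
scale = np.abs(w).max() / 127
q = np.round(w / scale).astype(np.int8)   # 1 byte per weight
w_hat = q.astype(np.float32) * scale      # dequantized for compute

print(f"memory: {w.nbytes} -> {q.nbytes} bytes")
print("max abs error:", np.abs(w - w_hat).max())
```

The 4x memory reduction comes at the cost of a small, bounded rounding error (at most half the scale per weight), which is why quantized LLMs remain usable on consumer hardware.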

With the release of ChatGPT, large language models (LLMs) have taken the world by storm, disrupting the data science and artificial intelligence space as well as traditional industries, revolutionizing natural language processing, and paving the way for transformative applications.

In this hands-on 3-hour workshop, we invite aspiring machine learning engineers, established data scientists, and newcomers to AI and machine learning looking to rapidly upskill in LLMs to dive into the transformer architecture and fine-tune models on custom datasets.

Uncover how these skills can drive tangible business value, propelling you into the forefront of the rapidly evolving field of generative AI. Whether you’re a seasoned data expert or a newcomer to the world of machine learning, this workshop promises valuable insights and practical knowledge, equipping you to leverage LLMs for impactful and innovative applications. Join us and unlock the immense potential of generative AI!

Copyright 2023, NLP from scratch.