
FOCI LLM Users Group: "A Guide to Open-Source Large Language Models and Fine-Tuning Techniques" (18 Oct)

Posted October 12, 2023

WHAT: "A Guide to Open-Source Large Language Models and Fine-Tuning Techniques"
LEADERS: Inwon Kang and Tripp Lyons 
WHERE: Carnegie 113 (New location!)
VIDEO: https://youtu.be/5Ul0Wz9p9vU
WHEN: 6p, 18 Oct 2023 (pizza & salads at 5:45)
NEXT DATES: 15 Nov
CONTACTS: Aaron Green <greena12@rpi.edu>, John Erickson <erickj4@rpi.edu>
MAILING LIST: https://cs.sympa.rpi.edu/wws/info/foci-llm-users

The introduction of ChatGPT and large language models (LLMs) has brought rapid advances in neural NLP research. However, these models require large amounts of computing power and storage, and state-of-the-art models are often locked behind paywalls, limiting access for researchers outside industry.

This talk provides background on using LLaMa-2 and other open-source models and on fine-tuning them. Open-source LLMs offer a cost-free way to test an LLM pipeline, which can later be adapted to paid commercial models, provided the open models carry a permissive software license. We will also discuss the basics of technologies that make LLMs more accessible, such as quantization and low-rank approximation, and touch on fine-tuning these models for specific domains using Hugging Face's transformers library.
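To give a feel for the two compression ideas mentioned above, here is a minimal NumPy sketch (not from the talk; the matrix, sizes, and rank are illustrative assumptions): absmax int8 quantization stores each weight in one byte instead of four, and a rank-k truncated SVD stores a low-rank approximation of the weight matrix.

```python
import numpy as np

# Toy weight matrix standing in for one LLM layer (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)

# --- Absmax int8 quantization: scale floats into [-127, 127] ---
scale = 127.0 / np.max(np.abs(W))
W_q = np.round(W * scale).astype(np.int8)   # stored at 1 byte per weight
W_dq = W_q.astype(np.float32) / scale       # dequantized for computation

# Rounding bounds the per-element error by 0.5 / scale.
quant_err = np.max(np.abs(W - W_dq))

# --- Low-rank approximation: keep only the top-k singular components ---
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_lr = (U[:, :k] * S[:k]) @ Vt[:k, :]       # best rank-k approximation

# Storage drops from 64*64 numbers to k*(64 + 64) numbers.
```

In practice these ideas appear as 8-/4-bit quantized model loading and as LoRA-style low-rank adapter fine-tuning, rather than hand-rolled NumPy, but the arithmetic is the same.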

Slides for this talk are available here (PDF).

Inwon Kang is a Ph.D. student in CS at RPI interested in blockchain systems and the integration of ML with blockchains, working with Prof. Oshani Seneviratne; Tripp Lyons is an RPI CS undergrad.
