Improving Accuracy of LLM Applications: A Partnership with Meta and Lamini
Many developers have experienced frustration when working with large language models (LLMs), often encountering inconsistent results that are just right one moment, only to miss the mark the next. This course aims to guide learners through a development pattern that systematically improves the accuracy and reliability of LLM applications.
In partnership with Meta and Lamini, this course is taught by Sharon Zhou and Amit Sangani. Sharon Zhou is CEO of Lamini, which provides integrated LLM inference and tuning for enterprises and developers, and she works with many companies on building production-ready applications. Lamini is also a company in which Andrew Ng, who introduces the course, made a seed investment. Amit Sangani is the Director of Partner Engineering for the Llama team at Meta, bringing a wealth of experience in optimizing AI applications using Meta's Llama family of models.
The approach that Sharon Zhou and her team have found most effective starts with creating the initial application by stringing together all of the different steps. Next, they add evaluation metrics, or "evals," to measure performance. After that, they use prompt engineering and self-reflection to encourage the model to perform better. This often isn't enough to reach production-level quality, however, so they move on to fine-tuning the model, which involves creating a fine-tuning dataset and iteratively expanding it. Techniques like LoRA have made fine-tuning more rapid and affordable, and a form of fine-tuning called memory tuning systematically removes hallucinations by embedding facts in the model weights.
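To make the "evals" step concrete, here is a minimal sketch of an evaluation harness. The `generate()` stub, the tiny eval set, and the exact-match metric are all illustrative assumptions, not the course's actual tooling; in practice `generate()` would wrap a call to your model.

```python
# Minimal eval-harness sketch (illustrative assumptions, not the course's code).

def generate(prompt: str) -> str:
    """Stand-in for a model call (e.g., Llama 3 8B); replace with a real client."""
    return "SELECT COUNT(*) FROM users;"  # fixed output so the sketch runs as-is

# Small eval set of (prompt, expected) pairs; in the workflow described above,
# this set is expanded iteratively as new failure cases are discovered.
EVAL_SET = [
    {"prompt": "How many rows are in the users table?",
     "expected": "SELECT COUNT(*) FROM users;"},
    {"prompt": "List all user emails.",
     "expected": "SELECT email FROM users;"},
]

def run_evals(eval_set) -> float:
    """Score with exact match; swap in a task-specific metric as needed."""
    correct = sum(
        generate(case["prompt"]).strip() == case["expected"].strip()
        for case in eval_set
    )
    return correct / len(eval_set)

if __name__ == "__main__":
    print(f"accuracy: {run_evals(EVAL_SET):.0%}")  # prints 'accuracy: 50%' here
```

Exact match is deliberately strict; for SQL generation, a common alternative is executing both queries against a test database and comparing results, which tolerates cosmetic differences in the generated SQL.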
Memory tuning introduces a kind of determinism within an otherwise probabilistic model. In this course, learners will walk through all of these steps with Llama 3, an 8-billion-parameter model, on an example application: text-to-SQL over a custom schema. Learners can expect both theory and practical guidance on building LLM applications, as well as hands-on experience fine-tuning a Llama model.
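To give a flavor of the text-to-SQL task, here is a sketch of how a custom schema might be folded into a prompt. The schema, wording, and prompt format are illustrative assumptions, not the course's actual template.

```python
# Sketch of a text-to-SQL prompt over a custom schema (illustrative only).

SCHEMA = """\
CREATE TABLE players (id INTEGER, name TEXT, team TEXT, points REAL);
CREATE TABLE teams (team TEXT, city TEXT);"""

def build_prompt(question: str) -> str:
    """Embed the schema and the user's question in a single completion prompt."""
    return (
        "You write SQLite queries. Given this schema:\n"
        f"{SCHEMA}\n\n"
        f"Question: {question}\n"
        "Answer with a single SQL query and nothing else.\nSQL:"
    )

print(build_prompt("Which team scored the most total points?"))
```

In the pattern described above, questions this prompt gets wrong become candidates for the fine-tuning dataset, closing the loop between evals and tuning.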
The course takes advantage of Llama's open-source release to give learners hands-on experience fine-tuning a Llama model. In the introduction, Andrew Ng expresses his appreciation for Meta's work releasing the Llama family of models as open source, observing that this kind of hands-on fine-tuning is possible almost uniquely on models whose weights are available. The short course covers both some theory on how to think about building LLM applications and the practicalities of hands-on building, and the instructors hope learners will join them in exploring these cutting-edge techniques for optimizing LLM applications.
The instructors, Sharon Zhou and Amit Sangani, are excited to bring this knowledge to learners, who can expect to fine-tune their skills along the way. The course aims to provide the tools and expertise needed to improve the accuracy and reliability of LLM applications.
"WEBVTTKind: captionsLanguage: enI'm excited to introduce improving accuracy of LM applications built in partnership with lamai and meta taught by sharen Joe and Amit Sani many developers have experienced frustration of inconsistent results when working with ls sometimes the results are just right while other times they miss the mark this course will guide you through a development pattern that systematically improves the accuracy and reliability of om applications I'm Inu and instructors Sharon Joe who is CEO of lini which provides integrated El inference and tuning for Enterprises and developers and is actually a company I made a SE investment in as well as Ami sanii who is director of Hana engineering for the Llama team and mether both are attorney instructors who bring a wealth of experience in optimizing Al applications and using specifically meta's llama family of models Sharon you work with many C is on building production ready applications can you say a bit more about what you are seeing absolutely thank you Andrew the approach that we found to be most effective starts with creating the initial application stringing together all the different steps then we add in evaluation metrics or what I like to call evals to measure performance after that we use prompt engineering and self-reflection to encourage the model to perform better often this isn't enough enough however to reach production level quality so we move on to fine-tuning the model this involves creating a fine-tuning data set and iteratively expanding on it fine tuning has become more rapid and affordable with techniques like Laura a form of fine tuning called memory tuning also systematically removes hallucinations and embeds facts in the model weights a kind of determinism within a probabilistic model in this course we'll walk you through all of these steps with the Llama 3 8 billion parameter model on an example application for Texas sequel of a custom schema that sounds exciting and I mean you've obviously worked extensively with the L 3 family of models what can Learners expect to see in this course thanks Andrew I'm excited to be back in our previous course on prompt engineering with llama 2 and three we focused on how to best prompt these models now we are taking advantage of llama's open source nature to give Learners hands-on experience fine-tuning a llama model I really appreciate meta's work on releasing the Llama family of models as open source and I think what you just shared is an example of the types of work that developers can do almost uniquely on models whose ways are available this short course will cover both some theory on how to think about building omm applications and the practicalities of Hands-On building we hope you'll join us in exploring these Cutting Edge techniques for optimizing your llm applications yep in fact I hope you'll be ready to join us and fine tune your skills with this courseI'm excited to introduce improving accuracy of LM applications built in partnership with lamai and meta taught by sharen Joe and Amit Sani many developers have experienced frustration of inconsistent results when working with ls sometimes the results are just right while other times they miss the mark this course will guide you through a development pattern that systematically improves the accuracy and reliability of om applications I'm Inu and instructors Sharon Joe who is CEO of lini which provides integrated El inference and tuning for Enterprises and developers and is actually a company I made a SE 
investment in as well as Ami sanii who is director of Hana engineering for the Llama team and mether both are attorney instructors who bring a wealth of experience in optimizing Al applications and using specifically meta's llama family of models Sharon you work with many C is on building production ready applications can you say a bit more about what you are seeing absolutely thank you Andrew the approach that we found to be most effective starts with creating the initial application stringing together all the different steps then we add in evaluation metrics or what I like to call evals to measure performance after that we use prompt engineering and self-reflection to encourage the model to perform better often this isn't enough enough however to reach production level quality so we move on to fine-tuning the model this involves creating a fine-tuning data set and iteratively expanding on it fine tuning has become more rapid and affordable with techniques like Laura a form of fine tuning called memory tuning also systematically removes hallucinations and embeds facts in the model weights a kind of determinism within a probabilistic model in this course we'll walk you through all of these steps with the Llama 3 8 billion parameter model on an example application for Texas sequel of a custom schema that sounds exciting and I mean you've obviously worked extensively with the L 3 family of models what can Learners expect to see in this course thanks Andrew I'm excited to be back in our previous course on prompt engineering with llama 2 and three we focused on how to best prompt these models now we are taking advantage of llama's open source nature to give Learners hands-on experience fine-tuning a llama model I really appreciate meta's work on releasing the Llama family of models as open source and I think what you just shared is an example of the types of work that developers can do almost uniquely on models whose ways are available this short course will cover both some theory on how to think about building omm applications and the practicalities of Hands-On building we hope you'll join us in exploring these Cutting Edge techniques for optimizing your llm applications yep in fact I hope you'll be ready to join us and fine tune your skills with this course\n"