New course with Giskard - Red Teaming LLM Applications

Red Teaming L Applications: A New Approach to Cyber Security

I'm delighted to introduce red teaming L applications built in partnership with Gusot, taught by Mato Dora and Luca Marshall. This course will teach you how to attack L applications so you can make them safer. Red teaming is a strategy used in cyber security and military training where a group often called the red team simulates adversaries actions and tactics to test and improve the effectiveness of an organization's defenses.

In this course, you learn how to Red Team your own LM application to discover vulnerabilities and make it safer. You'll better attack different chatbot applications by exploiting the text completion nature of Elms and try different prompt injections. We go specifically on attacks that they try to access the elm's system prompt often a very valuable resource that it opens up further attacks all these approaches are important to use to test your own l in order to help you to save guard it.

You may have heard of the controversy regarding an Air Canada chatbot hallucinating a discount to a customer which led to the company losing in court and being found liable for damages or a Chevrolet chatbot that got shut down after promising to sell a car for $1. This course you learned about some of the best ways to prevent these types of mishaps from happening at your company namely red teaming.

I'm delighted to introduce instructors of this course Matel Doran and Luca Marshall. Matel leads Giza's own safety research team where he has developed unique expertise in conducting rare teaming assessments. He has L multiple red teaming attacks against major Banks pretty much always successfully and this research work focuses on the Practical applications and implications of AI emphasizing the intersection of Ethics safe the a security.

Luca is Giza's product lead and his work focuses on launching new capabilities to ensure own Bas applications make it a reduction with trust from all parties involved. He's also worked alongside Mato on many of the successful attacks of J of AI applications. Thanks Andrew pretty much every major corporation that we red team has discovered vulnerabilities we found that many large companies l lamps can easily leak prompts or say inappropriate things and I was glad that we could help them discover and fix this vulnerabilities preventing service disruptions.

I look forward to sharing these red teaming techniques with you so that you can make your llm apps safer. Prompt injections are a way for users to inject instructions to manipulate the behavior of llms. We'll go through both manual and automatic techniques for attempting prompt injections and several other categories of attacks to discover vulnerability.

One of the key techniques you'll learn is using an llm to Red Team an llm application. You'll also see how you can probe for sensitive information disclosure or trick an llm into generating inaccurate information at the end. You'll learn how to conduct an initial red teaming assessment iterating through several rounds of probing and attacks.

There are large companies that could have avoided negative headlines about their own misbehaving if they had learned and applied the red teaming techniques in this course. Red teaming is a critical part of discovering and fixing vulnerabilities and is also a really fun intellectual exercise. We hope you'll enjoy the course.

"WEBVTTKind: captionsLanguage: enI'm delighted to introduce red teaming L applications built in partnership with gusot taught by Mato Dora and Luca Marshall this course will teach you how to attack L applications so you can make them safer red teaming is a strategy used in cyber security and military training where a group often called the red team simulates adversaries actions and tactics to test and improve the effective iess of an organization's defenses in this course you learn how to Red Team your own LM application to discover vulnerabilities and make it safer you better attack different chatbot applications by exploiting the text completion nature of Elms and try different prompt injections we go specifically attacks they try to access the elm's system prompt often a very valuable resource that it opens up further attacks all these approaches are important to use to test your own l in order to help you to save guard it you may have heard of the controversy regarding an Air Canada chatbot hallucinating a discount to a customer which led to the company losing in court and being found liable for damages or a Chevrolet chatbot that got shut down after promising to sell a car for $1 in this course you learned about some of the best ways to prevent these types of mishaps from happening at your company namely red teammate I delighted to introduce instructors of this course Matel Doran and Luca Marshall Matel leads giza's own safety research team where he has developed unique expertise in conducting rare teaming assessments he has L multiple red teaming attacks against major Banks pretty much always successfully and this research work focuses on the Practical applications and implications of AI emphasizing the intersection of Ethics safe the a security Luca is giza's product lead and his work focuses on launching new capabilities to ensure own Bas applications make it a reduction with trust from all parties involved he's also worked alongside Mato on many of the successful attacks of J of AI applications thanks Andrew pretty much every major corporation that we red team has discovered vulnerabilities we found that many large companies l lamps can easily leak prompts or say inappropriate things and I was glad that we could help them discover and fix this vulnerabilities preventing service disruptions I look forward to sharing these red teaming techniques with you so that you can make your llm apps safer prompt injections are a way for users to inject instructions to manipulate the behavior of llms we'll go through both manual and automatic techniques for attempting prompt injections and several other categories of attacks to discover vulnerability I one of the key techniques you'll learn is using an llm to Red Team an llm application you'll also see how you can probe for sensitive information disclosure or trick an llm into generating inaccurate information at the end you'll learn how to conduct an initial red teaming assessment iterating through several rounds of probing and attacks there are large companies that could have avoided negative headlines about their owns misbehaving if they had learned and applied the red teaming techniques in this course red teaming is a critical parts of discovering and fixing vulnerabilities and is also a really fun intellectual exercise we hope you'll enjoy the courseI'm delighted to introduce red teaming L applications built in partnership with gusot taught by Mato Dora and Luca Marshall this course will teach you how to attack L applications so you can make them safer red teaming is a strategy used in cyber security and military training where a group often called the red team simulates adversaries actions and tactics to test and improve the effective iess of an organization's defenses in this course you learn how to Red Team your own LM application to discover vulnerabilities and make it safer you better attack different chatbot applications by exploiting the text completion nature of Elms and try different prompt injections we go specifically attacks they try to access the elm's system prompt often a very valuable resource that it opens up further attacks all these approaches are important to use to test your own l in order to help you to save guard it you may have heard of the controversy regarding an Air Canada chatbot hallucinating a discount to a customer which led to the company losing in court and being found liable for damages or a Chevrolet chatbot that got shut down after promising to sell a car for $1 in this course you learned about some of the best ways to prevent these types of mishaps from happening at your company namely red teammate I delighted to introduce instructors of this course Matel Doran and Luca Marshall Matel leads giza's own safety research team where he has developed unique expertise in conducting rare teaming assessments he has L multiple red teaming attacks against major Banks pretty much always successfully and this research work focuses on the Practical applications and implications of AI emphasizing the intersection of Ethics safe the a security Luca is giza's product lead and his work focuses on launching new capabilities to ensure own Bas applications make it a reduction with trust from all parties involved he's also worked alongside Mato on many of the successful attacks of J of AI applications thanks Andrew pretty much every major corporation that we red team has discovered vulnerabilities we found that many large companies l lamps can easily leak prompts or say inappropriate things and I was glad that we could help them discover and fix this vulnerabilities preventing service disruptions I look forward to sharing these red teaming techniques with you so that you can make your llm apps safer prompt injections are a way for users to inject instructions to manipulate the behavior of llms we'll go through both manual and automatic techniques for attempting prompt injections and several other categories of attacks to discover vulnerability I one of the key techniques you'll learn is using an llm to Red Team an llm application you'll also see how you can probe for sensitive information disclosure or trick an llm into generating inaccurate information at the end you'll learn how to conduct an initial red teaming assessment iterating through several rounds of probing and attacks there are large companies that could have avoided negative headlines about their owns misbehaving if they had learned and applied the red teaming techniques in this course red teaming is a critical parts of discovering and fixing vulnerabilities and is also a really fun intellectual exercise we hope you'll enjoy the course\n"