Llama 3 8B - BIG Step for Local AI Agents! - Full Tutorial (Build Your Own Tools)

Using Llama 3 to Automate Tasks with Email Integration

One of the exciting possibilities with Llama 3 is pairing it with a tool that sends email, making it useful for automating tasks. In this setup, we integrated Llama 3 with our email system to create a seamless workflow. We started with a system message that triggers a function call whenever the user's input contains specific keywords related to email. The use case was to send an email to an address the user specifies, and Llama 3 reliably extracted the address and the content and composed the email.
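Concretely, the pattern described here is a system message that tells the model to wrap its function call in tags, plus a parser that watches the model's output for those tags. A minimal sketch of both pieces follows; the tag name, JSON shape, and `send_email` function name are assumptions for illustration, not the video's exact prompt:

```python
import json
import re

# System-message excerpt instructing the model to emit a wrapped function
# call. The wording and tag format here are illustrative assumptions.
SYSTEM_MESSAGE = """You are an AI agent that follows instructions precisely.
If the user's input contains "send email" or similar, generate a function
call in the following format:

<function_call>{"function": "send_email", "arguments": {"to": "<address>", "body": "<content>"}}</function_call>

Replace the placeholders with values extracted from the user's message."""

def parse_function_call(response: str):
    """Scan a model response for a wrapped function call and decode it.

    Returns the decoded dict if the wrapper tags are present, else None.
    """
    match = re.search(r"<function_call>(.*?)</function_call>", response, re.DOTALL)
    if match:
        return json.loads(match.group(1))
    return None
```

With this shape, the chat loop only has to pass every model reply through `parse_function_call` and act when it returns a dict instead of `None`.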

To test this setup, we asked the agent to send the gathered information to our email address, which triggered the function call. We were pleased to see that the email was sent successfully, and we received a confirmation in our inbox. The email contained the desired content, which demonstrated Llama 3's ability to follow instructions and complete the task.
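For context, the email-sending side of a setup like this is ordinary SMTP code rather than anything model-specific. A minimal sketch using Python's standard library (the host, account, and password values are placeholders, not the video's configuration):

```python
import smtplib
from email.message import EmailMessage

def send_email(to_address: str, subject: str, body: str,
               smtp_host: str = "smtp.example.com", smtp_port: int = 587,
               user: str = "you@example.com", password: str = "app-password") -> str:
    """Send a plain-text email over SMTP with STARTTLS (sketch).

    Credentials and host are placeholder assumptions; in practice they
    would come from environment variables or a config file.
    """
    msg = EmailMessage()
    msg["From"] = user
    msg["To"] = to_address
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.starttls()   # upgrade the connection to TLS before login
        server.login(user, password)
        server.send_message(msg)
    return "Email sent successfully"
```

The agent only needs to supply `to_address` and `body` via the parsed function-call arguments.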

Unlocking Local Access to Llama 3

One of the significant advantages of Llama 3 is that it can run locally, giving users more control over their AI agents and the freedom to experiment without relying on cloud-based services. Running the 8B model locally on Ollama made a big difference in our setup: tasks like this previously required GPT-4, and smaller local models struggled with them, but Llama 3 8B proved responsive and effective.

Adding Custom Functions to Llama 3

To take our Llama 3 setup to the next level, we wanted to add custom functions that would enhance its capabilities. We created a function called "Write to Notes" that would append content to a text file called "notes.txt." This function was designed to write or append specific content to the notes file based on user input.
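A minimal version of such a function might look like this; the exact name and signature in the video's code are not shown, so these are assumptions:

```python
def write_to_notes(note_content: str, path: str = "notes.txt") -> str:
    """Append the given content to the notes file, one entry per line.

    Opening in append mode ("a") creates the file if it does not exist
    and never overwrites earlier notes.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(note_content + "\n")
    return f"Note saved to {path}"
```

The returned confirmation string is what the chat loop can echo back to the user after the function call runs.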

To integrate this function with our existing setup, we updated the system message with instructions and the expected function-call format for writing notes. We then added a note-content property to the function description in our convert-to-OpenAI-function helper and registered the new "Write to Notes" function in the function list. Finally, we added an elif branch to our chat function that triggers "Write to Notes" when the parsed function call names it.
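The last two of those steps, describing the tool in an OpenAI-style schema and branching on the parsed call, might look roughly like this. The dict shape follows the OpenAI function-calling format, and the dispatch helper is an assumption standing in for the video's chain of if/elif statements:

```python
# OpenAI-style description registered in the function list for the new
# tool; this is the kind of dict a convert-to-OpenAI-function helper
# would carry (assumed shape, not the video's exact code).
write_to_notes_schema = {
    "name": "write_to_notes",
    "description": "Append content to the notes.txt file.",
    "parameters": {
        "type": "object",
        "properties": {
            "note_content": {
                "type": "string",
                "description": "Content to be written to the notes.txt file.",
            }
        },
        "required": ["note_content"],
    },
}

def dispatch(call: dict, tools: dict) -> str:
    """Route a parsed function call to the matching tool.

    `tools` maps function names to callables, e.g. {"send_email": ...,
    "search_google": ..., "write_to_notes": ...}. Adding a new tool is
    one more entry here plus its schema above.
    """
    name = call["function"]
    if name in tools:
        return tools[name](**call.get("arguments", {}))
    return f"Unknown function: {name}"
```

A registry dict keeps the routing in one place, whereas the video's script grows an explicit elif per tool; both approaches do the same job.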

Testing and Validating the Custom Function

Once we had added the custom function, we wanted to test its effectiveness. We entered a command that triggered the function call, which wrote the email address of Chris from All About AI to our notes.txt file. While the output formatting was a bit unexpected, the function worked as intended, demonstrating its potential for automating tasks.

The Future of Llama 3 and Its Community

As we explored Llama 3's capabilities, we realized that there were many opportunities for growth and development. We shared our experience with the community, including access to the full code on GitHub, which is now available to members of the channel. The Discord server also hosts a growing community of users who can share their experiences, ask questions, and learn from each other.

The plans for future content include exploring different use cases and features, as well as experimenting with new styles and approaches. We hope that our article has provided valuable insights into Llama 3's capabilities and inspires readers to explore its potential for automating tasks and enhancing productivity.

Video Transcript

Okay, so let me show you how this works now. Our agent has all these tools it can use. The first tool we're going to use is Search Google. We start with a query, let's say "llama 3 human eval." We go out to Google, collect the results, and put them into a RAG, so you can see we scraped these two URLs, from Meta AI and The Verge, and the content has been added to our vault. If we go to our vault now, you can see all of the text from those two web pages has been embedded, and we can start searching it. Back in our terminal, we use the phrase to look in the vault, meaning we access our RAG, and ask how many tokens Llama 3 was trained on. It goes and fetches that and gives us the answer: according to the relevant context from the vault, Llama 3 was trained on up to 15 trillion tokens. Perfect. Our agent also has a tool that lets it send mail, so we can just say "send a mail with that information to my email address, please." You can see "email sent successfully," and here we got the information: according to the text, Llama 3 continued to improve log-linearly after being trained on up to 15 trillion tokens. I think that shows how powerful the new Llama 3 8B is, just running locally on Ollama. There's no LangChain or anything like that, just a custom script with some good instruction following. I'm very impressed by how well it follows the instructions we gave it. So we're going to take a look at how I set this up in the code, plus some more examples.

Recently I had a lot of requests to spend a bit more time explaining how the code works and how the logic combines with the LLM, so I'm going to do that in this video. If this isn't something you're interested in learning and you just want to see how this exact script works, you can skip ahead, but if you want to learn how to set this up and create these function calls yourself, you're welcome to stay.

I want to start by going quickly through the functions I wanted for my system. I created a send email function, which gives us the ability to actually send out mail, as you saw in the intro. I have a search Google function that uses a SERP API to bring back URLs from Google; we then scrape those URLs and put the information into our RAG system. The check context function is the one we use to actually search our RAG setup. Those are the tools I have available now; we're going to add one more so I can show you how to do that too. But I want to start down here in our chat function, because this is where the intelligent part happens. In our system message, I want to focus on the Search Google part. I set it up pretty straightforwardly: "You are an AI agent, expert at following instructions. You have a set of functions you can use, and you have access to these functions." Then we dump the function list here, which also brings in all of the properties with their descriptions; I'll come back to that. Let's focus on the lines instructing how to use the search Google function: "If the user's input contains 'search Google' or similar, or some sort of question or query, generate a function call in the following format." The format is quite important. We have these wrapper tags, function call, and inside those wrapper tags we have a structure that a function called parse function call is always looking for, to decide if it should trigger a function; I'll explain that right after this. You can see "replace the user-provided query with the actual query from the user." This is what requires our model to be intelligent and understand what kind of query we're actually looking for. We're using the Dolphin version of Llama 3 now, and I'm very happy with it. I've set the temperature to zero to make this as deterministic as we can.

Okay, now the explanation of how this works. We went through the system message, hopefully that was clear; now let's move on to how the function call gets executed. Let's say you're the user and you type in "Hey, I need to find some stuff about the new Llama 3 from Meta AI on Google, can you search for me?" Step two is that the AI system understands this; that's the intelligent part. When our system sees this user input, it sees "Google," it sees "search," and it knows it has to prepare a function call. The AI system listens to your request and figures out that you want to search the web. I tried this with a few other local models, and not all of them can do this, but Llama 3 8B looks very good at it. It then prepares a response with two parts: a natural-language reply, like "Sure, let me search Google for the new Meta AI," and a second part that is a kind of secret instruction note. This comes back to our function-call wrappers: it creates this because it understood that we need to use one of our functions. It writes the name of the function we want to use, and we need some arguments, so we have "query," and it works out what the user wants to search for. It doesn't want to search for "stuff"; it understands that we want to search Google for "new Llama 3 from Meta AI," and it puts that into our query parameter. If we think of this as a Python dictionary, we have a function, we have our parameter name, and the model has to figure out what value to put into the query. If it's intelligent enough, it understands that this has to be the "Llama 3 from Meta AI" part of our input.

Then we move on to step three: actually finding the instruction note. There's a special function called parse function call that acts like a detective. It always looks through the AI's response for the secret instruction note, which is wrapped in function call tags. Think of it like this: parse function call is constantly monitoring the output from our AI model, and if it detects the two wrappers, it grabs the note and performs the function call based on the instructions inside it. Once it finds the note, it reads the instructions inside. Step four is understanding the instructions. The instructions are written in a special format, JSON, like we just looked at. The parse function call function translates this into a simple Python dictionary that the system can understand; it turns the secret note into a very simple, readable dictionary, and that dictionary tells the system which function to run and what information to use as the arguments. The last part is executing on these instructions. Now that the system understands what you want, it runs the search Google function, because that's what we put in our dictionary. The function goes to Google, searches for the term we put in, and grabs the top results. It saves this information, such as URLs, in its memory, and after searching and saving, the system lets you know what it did, for example by listing the top search result URLs and saying they were added to context. So that's how this function calling works and how I set it up. We're not depending on LangChain or anything; you could call it manual function calling. But we do depend on something called OpenAI functions, which is how we describe our functions. We always want this description, and it gets added to our function list via convert to OpenAI function. I don't think I'm going to go through that in specific detail, but it's also part of it. I hope that was understandable; I learned from going through in depth how this works too. Next we're going to create a new function so I can show how to set that up as well.

Okay, let me show you how the system works before we add a new function. I just want to show you one quick thing in the code first. This is the surveillance part I was talking about. We have something called message content, which is what the chat function returns, and the surveillance part is this variable called function call: it takes the returned message content and puts it into parse function call, which, as you saw, searches for the two wrapper tags. If the output from the chat function contains those wrapper tags, the system understands which function was requested; if it's understood to be send email, we do that. Down here is just a simple while-true loop with a user and an agent, and we have our conversation history: we append everything from our assistant and our user to this history to keep the context.

Let's do a few more showcases of the system before we create a new tool. This is running on the Dolphin Llama 3 model on Ollama. Let's say: "Hey, I'm doing some research into local LLMs, it would be great if you could help me search to find the available Ollama models, maybe use Google." We get some results back; it understood that it had to search for available Ollama models, and it says "content added to context." If we open our text file, you can see we got some information about Ollama, so now we're ready to search it. In our system message: "If the user input contains 'check context' or similar, followed by a message," then we go into our RAG system. So let's try: "check context, does Ollama have the Llama 3 model?" Now it does the RAG search and brings back the information: yes, Ollama supports the Llama 3 model; you can use the ollama pull llama3 command to import it and create a custom prompt. This is looking good. Now let's say we want to send this information to our email. In the system message: "If the user input contains 'send email' or similar, extract the information, create the email, send it, and trigger the function call." So: "Good, send that information to my email address." I put in my mail, and you can see "email sent successfully." We got a mail: Ollama supports the Llama 3 model, and you can use this to pull it. This works great; every single function we asked for in this setup worked. I'm really impressed by how responsive this Llama 3 model is to following instructions. It's a big step for this agent, and remember, this is an 8B model. You could do this with GPT-4 before, but I didn't have great success with smaller models, so unlocking this locally is a big deal for these AI agents.

Now let me show you how you can add your own functions, say for a specific tool for a specific use case. The function I want to add is called write to notes. It takes some content that we're going to write, or in this case append, to a text file called notes.txt. Now that we have our function, the first thing to update is the system message. Let's paste that in and go over it: "If the user input contains 'write note' or similar, followed by some content, generate a function call in the following format." Then we have our wrapper tags, function call, we have write to notes, and of course we need an argument: note content, the content we want to put in. "Replace the user-provided content with the actual content provided by the user." It's a very easy setup, and we gave an example: if the user says "write note" plus some content, we add it to notes.txt. That's the first part, updating the system message. Next I want to update our convert to OpenAI function, adding a property called note content: it's a string, with the description "content to be written to the notes.txt file." Then we add it to our functions list, converting our write to notes function. Finally, we go down to our chat function and add some logic: an elif branch for the write to notes function, and that should be it.

Let's test this now. "Can you search Google to find an email address for Chris from All About AI?" It found my website, and that's added to context. "Check context: what is the email address for Chris?" Okay, we have an email address. Now let's try to write this to our notes: "Can you write to notes the email address for Chris?" Did that work? Let's have a look: yes, Chris's email address is there. The formatting was a bit weird, but we got our email address, so our function is working.

That's what I wanted to share today. I had a lot of fun playing around with Llama 3. Wednesday's video will feature Groq and the Llama 3 70B model, so look forward to that; it's just crazy. Other than that, I hope you learned something from this, and as always, if you want access to the full code, just become a member of the channel. I'll invite you to our community GitHub, and you'll get access to the community Discord; there are a lot of people there now. There will be more examples like this coming out soon. This was a bit of a different style; I hope it wasn't too long and that you learned something. Have fun playing around with Llama 3, and I'll see you again on Wednesday.