I Asked ChatGPT To Write an Actor Critic Agent ...

ChatGPT: A Tool to Assist Programmers

As I watched the video on using ChatGPT with PyTorch, I was struck by the potential of this tool to assist programmers. The video showed ChatGPT answering general deep reinforcement learning questions and then generating code for an actor-critic agent to be trained on CartPole from the OpenAI Gym. What impressed me most was the way it treated ChatGPT's output as a starting point, to be modified and completed to suit specific needs.

The generated code began by defining actor and critic networks in PyTorch, the standard architecture for actor-critic reinforcement learning, and then instantiated them along with their optimizers and a critic loss function, which is needed to train the value network. The state and action dimensions were set correctly, although the number of episodes to iterate over was left undefined.
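
To make that setup concrete, here is a minimal sketch of what such actor and critic definitions might look like. The class names, hidden-layer size, and learning rates are my own assumptions for illustration, not the exact code ChatGPT produced.

    import torch
    import torch.nn as nn
    import torch.optim as optim

    class Actor(nn.Module):
        """Policy network: maps a state to a probability distribution over actions."""
        def __init__(self, state_dim, action_dim, hidden_dim=128):
            super().__init__()  # the super constructor call the video notes is required
            self.fc1 = nn.Linear(state_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, action_dim)

        def forward(self, state):
            x = torch.relu(self.fc1(state))
            return torch.softmax(self.fc2(x), dim=-1)  # softmax along the last dimension

    class Critic(nn.Module):
        """Value network: maps a state to a scalar state-value estimate."""
        def __init__(self, state_dim, hidden_dim=128):
            super().__init__()
            self.fc1 = nn.Linear(state_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, 1)

        def forward(self, state):
            x = torch.relu(self.fc1(state))
            return self.fc2(x)

    # CartPole-v1 has a 4-dimensional observation and 2 discrete actions.
    state_dim, action_dim = 4, 2
    actor, critic = Actor(state_dim, action_dim), Critic(state_dim)
    actor_optimizer = optim.Adam(actor.parameters(), lr=1e-3)
    critic_optimizer = optim.Adam(critic.parameters(), lr=1e-3)
    critic_loss_fn = nn.MSELoss()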

However, things took a turn for the worse inside the training loop. The generated code reset the environment, initialized the episode reward, sampled an action from the actor's output probabilities, stepped the environment, and began calculating the critic loss, and then the output simply cut off before the critic or actor was ever updated. As written, the loop would step through episodes without the agent learning anything, which left me wondering how useful a partially generated agent really is.
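
Based on that description, the generated training loop had roughly the shape sketched below. The num_episodes value and the while-not-done structure are my own choices (the original left num_episodes undefined and used an unspecified max_steps), and the environment calls assume the classic Gym API, so treat this as a reconstruction rather than ChatGPT's exact output.

    import gym

    env = gym.make("CartPole-v1")
    num_episodes = 500  # left undefined in the generated code; chosen here arbitrarily

    for episode in range(num_episodes):
        state = env.reset()  # classic Gym API; Gymnasium returns (obs, info) instead
        episode_reward = 0.0
        done = False

        while not done:  # iterate until the episode ends, rather than for a fixed max_steps
            state_t = torch.as_tensor(state, dtype=torch.float32)
            action_probs = actor(state_t)                       # distribution over the 2 actions
            action = torch.multinomial(action_probs, 1).item()

            next_state, reward, done, info = env.step(action)   # Gymnasium also returns 'truncated'
            episode_reward += reward

            # ... critic and actor updates go here; see the corrected snippet further below ...
            state = next_state

        print(f"episode {episode}: reward {episode_reward:.1f}")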

As I continued watching the video, I began to notice other issues with the code. The value estimate for the next state didn't take into account whether that state was terminal. Terminal states have a value of zero forever and always, by definition: no future rewards follow a terminal state, so the discounted sum of future rewards from it is precisely zero. Bootstrapping from a non-zero estimate of a terminal state's value is a fundamental error in reinforcement learning.

Despite these issues, I was impressed that ChatGPT made it further than it had the day before. It's clear that the system is changing and improving from day to day, even if it doesn't always get things right.

Editor's Note: As I edited this article, I noticed another issue with the code. The value estimate for the next state should indeed take into account whether or not that state is terminal. This would involve multiplying the next state's value by one minus done, where done indicates whether the episode ended at that step, i.e., whether the next state is terminal. Without that mask, the critic bootstraps from terminal states and the agent will not learn correctly, even if the rest of the code is completed.
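
To make that fix concrete, here is a hedged sketch of the critic and actor update step with the terminal-state mask applied, continuing the reconstructed loop above. The discount factor and the use of the TD error as the advantage are assumptions on my part, not details confirmed by the generated code.

    # Inside the while-not-done loop from the sketch above:
    gamma = 0.99  # discount factor; assumed, not specified in the generated code

    next_state_t = torch.as_tensor(next_state, dtype=torch.float32)

    # TD target: bootstrap from the next state's value only if the episode is not over.
    # Multiplying by (1 - done) zeroes out V(next_state) when next_state is terminal,
    # because no future rewards follow a terminal state.
    with torch.no_grad():
        next_value = critic(next_state_t)
    td_target = reward + gamma * next_value * (1.0 - float(done))

    value = critic(state_t)
    critic_loss = critic_loss_fn(value, td_target)
    critic_optimizer.zero_grad()
    critic_loss.backward()
    critic_optimizer.step()

    # Actor update: policy-gradient step weighted by the TD error as an advantage estimate.
    advantage = (td_target - value).detach()
    log_prob = torch.log(action_probs[action])
    actor_loss = -(log_prob * advantage)
    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()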

Conclusion

As I concluded my analysis of ChatGPT with PyTorch, I was left with a sense of both excitement and trepidation. On the one hand, this tool has the potential to revolutionize the way we approach programming tasks. It can act as a sort of oracle, providing answers to questions that would otherwise be difficult or time-consuming to research.

On the other hand, it's clear that ChatGPT is not yet ready for prime time. It still requires significant improvement in terms of accuracy and reliability, particularly when dealing with complex or nuanced problems. As programmers, we need to understand its limitations and know how to ask the right questions to get the answers we're looking for.

As I mentioned earlier, one key takeaway from this experience is that ChatGPT should be used as a tool to assist us, rather than as a replacement for our own skills and expertise. It's not going to replace us anytime soon; in five years, who knows? But it can certainly help us get started on complex projects or provide a starting point for our own code.

The real value in ChatGPT lies in the questions we ask of it. We need to think carefully about what we want to achieve and how to phrase our queries to get the most out of this tool. By doing so, we can unlock its full potential and harness its power to solve complex problems.

"WEBVTTKind: captionsLanguage: enmuch has been said in recent weeks about the power of the chat GPT language model the hype is absolutely unreal but is it founded in reality let's find out in this video Now spoiler alert what I'm going to show you is that chat GPT is incredibly powerful you know I wanted to hate it I wanted to say it was total trash but I can't do that at least not objectively there are some limitations so what we're going to find is that it is pretty good at what I would call a general knowledge of sorts of questions it can take a large body of knowledge from the internet and kind of coalesce that summarize it for you to give you a relatively accurate answer to get you started in your learning process on a particular topic I think this will be especially powerful for people in non-stem related fields and of limited utility to those in STEM related fields now where it's going to fall short is in its programming capabilities so what we'll see is that it's been touted as you know sort of a programmer now I don't know how much this is hyped it could just be the media I'm reading but I've seen some people claim that it's a you know a replacement for programmers but this is far from the case it can program but it does have rather significant limitations which we're going to show in this video all right so here is the chat gp2 window and if you've been playing with this tool this should be quite familiar to you so what I've done here is ask it a very basic question is why does deep key learning require a Target Network now this is a pretty simple straightforward question to which it gives a simple straightforward answer and it basically says without a Target Network the Q Network's estimate of the Q Vis would be constantly changing which can lead to instability and pro and poor performance in the learning process this is absolutely true and this is more or less the result from the paper this is what the authors show so let's see what happens when we ask why we need a replay buffer okay so it has spit out an answer as to why we need a memory replay buffer and the basic idea is that one is going to break correlation between consecutive samples which improves the stability of the learning process now of course this is necessary for the input so deep neural network to be uncorrelated that kind of goes without saying it says it allows the agent to learn from a wider range of experiences rather than just the most recent experiences which can lead to better performance this is absolutely true of course we store all experiences and so sampling them at random gives you a broad variety of experiences to draw from and uh Coral area of that is it allows the agent to learn from rare or difficult to replicate experiences which can be useful for learning in complex environments and it says by using a replay buffer agent stores large experiences samples randomly improve stability allows the agent to learn more effectively so there's a little bit of repetition there it uh it understands or rather it regurgitates the Core Essence of why we need it but it does some regurgit it does some repetition in its language in the process not a huge deal I give this an a now I want to see where some of its general knowledge about deep reinforcement learning starts to break down let's ask it to summarize the agent 57 paper from deepmind published in 2020. 
so this is actually an interesting result when I ran this yesterday it knew nothing about the agent 57 paper from deepmind in 2020 in fact it told me there was no such algorithm and I'll put a screenshot of that answer here so overnight it has learned that indeed there is something going on with agent 57 it's an actual paper however it's not entirely accurate so if you take a look at my recent video where I worked with a group of Ukrainian developers to implement an open source version of agent 57 you'll know that this only scratches the surface of what makes it a state-of-the-art algorithm so it's true that it uses a deep Q network with a deep Q Network however it's more accurate to say that it uses a recurrent distributed deep key Learning Network in other words it builds upon the R2D2 algorithm it also builds upon the never give up curiosity learning algorithm and there's no mention in here of never give up which is another thing it didn't anything about I'm curious to know if it is updated overnight we're going to take a look at that in a second but it says a large replay buffer that stores a wide range of experiences to improve stability and learning now that just goes without saying that's very trivial knowledge at this point that's not really summarizing anything useful what makes agent 57 so powerful is that it breaks out the value function approximators for the critic function for the intrinsic and extrinsic rewards into two separate arms as well as treating the combinations of beta and gamma hyper parameters as an arm on a multi-arm bandit and using an algorithm at runtime a microcontroller or a microcontroller at runtime to select combinations of those hyper parameters and then uses the reward as a signal for that algorithm for that multi-arm Bandit to improve its performance over time so you see that it's only really scratching the surface uh it says in including prioritized replay yeah it uses a prioritized replay um and separate turns yeah okay uses that no mention of recurrence dimension of distributed dimension of the microcontroller dimension of universal value function approximation for the intrinsic uh curiosity and extrinsic rewards from the environment so it's very surface level and not really enough to get you started down the path of true learning so this is a limitation though it is a better answer than I received last night let's ask it about never give up and see what it tells us Okay so yesterday it had no idea what never give up was and that was probably a clue to the team that they need to uh at least sure of its knowledge give it some information about the paper but again it's not quite accurate so it says that uh ngu is proposed by Deep binding 2020 correct designed to improve sample efficiency of r l agents by making them more resilient to catastrophic forgetting which is when an agent forgets previously acquired skills this isn't entirely accurate so never give up works by dealing with the problem of boredom in curiosity learning methods so in curiosity the agent is rewarded for exploring previously unseen States however over time the agent will explore most of the state space and therefore those States will lose their novelty and the Curiosity reward goes to zero in other words it gets bored and so never give up teaches the agent to never give up and to keep exploring those States and to keep trying new state action combinations so that it can find the most optimal strategy over time then it goes on to say ngu is based on the idea of preserving previously 
acquired knowledge by creating a separate replay buffer for each task and using in using an auxiliary Network to predict the expected feature performance of the agent on each task I don't where I don't know where it's getting the idea of separate replay buffer for each task so this isn't accurate uh the agent keeps the agent has a single replay buffer that it uses multiple different actors to fill up and then it does use an auxiliary Network to generate the intrinsic return so it measures the surprise of the agent by using this feature mapping to make predictions about what state it thinks it will transition into given its action and comparing that to the actual transition taken by the environment and the Delta between these two gives you this sort of reward for the agent an intrinsic reward that gives the agent a sense of curiosity and then it says ngu is able to prevent the agent from getting stuck in poor local Minima which is common in RL does this by allowing the agent to return to previously learn the tasks and by adjusting the learning rate to speed up learning on those tasks I don't know where this comes from I'm actually curious where it's getting this information but it's not maybe I had a stroke and totally don't remember the paper but that's not at all what I remember being in the paper you know you could talk about using Universal value function approximation to split out uh the contribution to the total reward from the intrinsic and extrinsic rewards allowing you to kind of um in fact maybe that's what it's talking about um I think that's kind of what it's getting at is between these two paragraphs is the idea of splitting out uh the networks the contribution from the intrinsic and extrinsic rewards using Universal value function approximation that's possible so I think it's just getting confused here so if you're using this to summarize this in our you know in some alternate universe where I'm your professor and I'm giving you a test and I'm asking you to summarize a paper and you spit this out to me you're going to fail because it's not correct it doesn't demonstrate that you read the paper and then it gives a summary of what it just said okay so yeah so I would I would again give this an F it doesn't really seem to know what's going on between uh going on with the never give up in agent 57 algorithm so there is some some limitation to its knowledge okay that's not such a huge deal let's see how it does in writing actual code now it failed on this last night so let's see how it does today perhaps it's improved overnight okay so I've given it a prompt to write a complete actor critic agent in pytorch train the agent on cardpool from the open AI gym now this is fascinating so uh it gives me a totally different answer from yesterday actually this is neat it's changing from day to day so the system is learning and adapting overnight that's really cool so it says I'm sorry but it would not be possible to write a complete actor critic agent in pytorch and train it on cardpool in this format that's true obviously it can't do that then it says an AC agent is a type of RL you they uses two separate networks acting Critics on the optimal policy and value function respectively requires a significant amount of code and understanding the specific environment problem however I can provide you the general outline of what the code might look like so this is a better answer than what I got yesterday is telling you hey there's some limitations um I can't quite Do it Now spoiler alert it 
comes all the way down to the bottom and it craps out but let's see what it came up with uh before that so the code I find is actually quite reasonable so it knows that it needs to Define separate classes for the actor and they need to derive from NN dot module uh one criticism I would Levy here is that this doesn't account for the possibility of using a GPU because there's no device selection and it doesn't send the network to the device so there's no way to handle training on a GPU here that's kind of a minor point it does get the feed forward correct though you pass the um you pass the state through the first Network performer value activation and then pass it to the second to get a soft Max along the negative one dimension that is correct and then the critic is similarly also corrected and it also knows to call the super Constructor which is a good thing it doesn't work without that yesterday's solution actually had a separate agent class I believe that had the actor and critic networks as well as the um actor and critic optimizers as well as in select action and update function looked more like a solution that I would write this looks more like a solution like you would find in the actual documentation from the pi torch tutorial so that's interesting so we do have our actor and critic instantiated actor and critic optimizers these are done correctly critic loss function it did get this correct today yesterday it crapped out actual environment which I found incredibly ironic because the obviously the open AI gym is an open AI product I thought it was rather strange that it didn't know how to finish there but whatever and then it knows how to correctly set State and action dims and then you know uh iterating over a number of episodes so a number of episodes here is undefined uh it does know to uh reset the environment to get the episode reward and doesn't Define Max steps this also isn't how I would do it I would iterate until the episode is done then it says to convert to a tensor get your action probs uh get the multinomial and select the action okay fine take your action get your reward convert your next step to a tensor and then start calculating your critic loss okay it's doing generally the right thing but it stops right here at this line I don't know I don't think I'm missing anything here I'm scrolling all the way down and nothing else comes up so this thing is in fact crapping out on updating the critic but I will give it credit it made it further than it did uh yesterday so it is adapting overnight this is quite interesting to me editor Phil here now as I'm editing this I do notice one other issue so the value function for the next state should depend on whether or not that state is terminal so there's no multiplication by say one minus done now what to take into account if the next state is terminal so that is a sort of fatal flaw in this and I didn't notice this while I was going through the video it will not work as it is currently written even if it continues on to get fully functional code because the next value depends on whether or not that state is terminal terminal states have a value of zero now forever and always that's simply by definition because the terminal State means that no future rewards follow and so the discount at some of those future rewards is precisely zero all right back to the video so I think we've seen enough to reach some kind of conclusion around uh chat GPT in its current form it is obviously useful it knows something okay it is clearly an 
advanced tool that has a place in your toolbox you can use it to get started but you really have to have some type of understanding of where you are where you want to go the nature of the task you're trying to solve you can't go from beginner to expert just using this tool now it probably wouldn't be realistic to expect such a thing anyway but you should know up front that it's just a tool to help you get started you're still going to have to fill on the gaps yourself you're still going to have to understand the problem you're trying to solve there is still a place for you in the world as a programmer do not fear this is not going to replace you at least not yet in five years who knows but for now our jobs are safe now another thing I need to point out is that perhaps I'm not doing a very good job of leading the agent of leading the software I just watched Jeff Heaton's video a little bit ago and in it he put in his assignments from his machine learning class now the end result was still the same the the chat gbt still failed but it did give better answers than what I'm seeing here so perhaps there is some value knowing how to ask the proper question so that is one thing I want you to take away from this two things actually the first thing is that uh this is a tool to be used and it's not something to be disregarded but to understand it's not going to be you know the end-all be-all to solving programming problems you still have a job as a programmer and two the real value in a tool like this is going to lie in the questions that you ask it I believe and from what I'm seeing that the types of questions you ask the way in which you ask them can prompt the AI to give either a correct or non-correct answer so you need to think very very carefully about the questions you ask of this thing it can act as a sort of Oracle but but can only answer the questions in a certain kind of way you have to get inside of its mind in some sense and know how to ask at the proper questions to get the answer you are looking for so I hope that was helpful for you please explore chat GPT it's really easy to use come up with new use cases for it leave those down in the comments if you found this entertaining please let me know and hit that subscribe button if you want to see more content like this and I'll see you in the next videomuch has been said in recent weeks about the power of the chat GPT language model the hype is absolutely unreal but is it founded in reality let's find out in this video Now spoiler alert what I'm going to show you is that chat GPT is incredibly powerful you know I wanted to hate it I wanted to say it was total trash but I can't do that at least not objectively there are some limitations so what we're going to find is that it is pretty good at what I would call a general knowledge of sorts of questions it can take a large body of knowledge from the internet and kind of coalesce that summarize it for you to give you a relatively accurate answer to get you started in your learning process on a particular topic I think this will be especially powerful for people in non-stem related fields and of limited utility to those in STEM related fields now where it's going to fall short is in its programming capabilities so what we'll see is that it's been touted as you know sort of a programmer now I don't know how much this is hyped it could just be the media I'm reading but I've seen some people claim that it's a you know a replacement for programmers but this is far from the case it can program but 