GPT-4 Honest Review: Almost Perfect!

The Cost of GPT-4: A Breakdown of Pricing and Use Cases

When it comes to the latest advancements in artificial intelligence, GPT-4 is one of the most promising developments. However, as with any cutting-edge technology, there are costs associated with its use. In this article, we'll explore the pricing of GPT-4 and its various use cases.

The cost of GPT-4 depends on which model you use. For comparison, 100,000 requests with GPT-3.5 Turbo would have cost around $400. The exact same 100,000 requests with GPT-4 will cost you roughly $7,500, and that balloons to about $15,000 if you're using the 32k context window version of GPT-4.
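To make those figures concrete, here is a minimal back-of-the-envelope cost calculator. It assumes OpenAI's launch-era per-1K-token prices and a hypothetical "average" request of about 1,500 prompt tokens and 500 completion tokens (an assumption chosen purely to illustrate how those totals arise, not a number from OpenAI).

```python
# Rough cost comparison. The per-1K-token prices reflect GPT-4's launch-era
# pricing; the request size (1,500 prompt + 500 completion tokens) is an
# illustrative assumption, not an official figure.

PRICES = {
    # model: (USD per 1K prompt tokens, USD per 1K completion tokens)
    "gpt-3.5-turbo": (0.002, 0.002),
    "gpt-4-8k":      (0.03,  0.06),
    "gpt-4-32k":     (0.06,  0.12),
}

def estimate_cost(model, n_requests, prompt_tokens=1_500, completion_tokens=500):
    """Estimate the total USD cost of n_requests of the given size."""
    prompt_price, completion_price = PRICES[model]
    per_request = (prompt_tokens / 1000) * prompt_price \
                + (completion_tokens / 1000) * completion_price
    return per_request * n_requests

for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 100_000):,.0f} per 100,000 requests")
# gpt-3.5-turbo: $400, gpt-4-8k: $7,500, gpt-4-32k: $15,000
```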

It's worth noting that these prices are likely to come down over time as OpenAI scales up capacity and more developers and businesses begin to use GPT-4. For now, though, there is a clear split between two different use cases. On one hand, we have the GPT-3.5 Turbo models, which are cheap and fast and well suited to use cases where speed and cost matter more than peak accuracy. On the other hand, we have GPT-4, which offers increased accuracy and knowledge but comes at a much higher price.
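As a sketch of how that split could look in code, here is a hypothetical routing helper that defaults to GPT-3.5 Turbo and only pays the GPT-4 premium when accuracy is critical or the prompt is too long for the smaller context window. The flag and threshold are illustrative assumptions, not a recommended policy.

```python
# Hypothetical model router for the two-tier setup described above.
# `accuracy_critical` and the 8,000-token threshold are illustrative choices.

def choose_model(accuracy_critical: bool, prompt_tokens: int) -> str:
    """Return the cheapest model that plausibly satisfies the request."""
    if accuracy_critical:
        # Pay the premium only where correctness really matters, and step up
        # to the 32k variant only for very long prompts.
        return "gpt-4-32k" if prompt_tokens > 8_000 else "gpt-4"
    return "gpt-3.5-turbo"  # cheap and fast default

print(choose_model(accuracy_critical=True, prompt_tokens=12_000))   # gpt-4-32k
print(choose_model(accuracy_critical=False, prompt_tokens=1_000))   # gpt-3.5-turbo
```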

One of the key questions is how much accuracy improves from one generation to the next. According to Paul Yacoubian, the founder of Copy.ai, roughly 1 in 10 of GPT-2's answers were good, and with GPT-3 that jumped to about 9 out of 10. Unfortunately, we don't see a leap of that magnitude going from GPT-3.5 to GPT-4.

While GPT-4 is not as revolutionary as some might have hoped, it is a more polished version of GPT-3.5. It makes fewer factual errors, hallucinates less, and is much harder to derail with "do anything now" style jailbreaks, which were a well-known problem with GPT-3.5 and ChatGPT.

The Multimodal Ability: A Game-Changer for Entrepreneurs

One of the most exciting aspects of GPT-4 is its multimodal ability: it can accept images as well as text as input. For developers, that opens up a wide range of new possibilities when integrating it with other tools and platforms. We're already seeing businesses like Duolingo, Stripe, and Khan Academy integrate GPT-4 into their products.
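Image input wasn't generally available through the public API when GPT-4 launched, but as a rough sketch, a multimodal request could look something like the following. This uses the content-array format OpenAI later exposed for vision-capable models; the model name and image URL are placeholders, and what your account can actually call is an assumption here.

```python
# Sketch of an image-plus-text request (requires the openai v1.x SDK and a
# vision-capable model; the model name and URL below are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable GPT-4-class model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Turn this hand-drawn mockup into simple HTML."},
            {"type": "image_url", "image_url": {"url": "https://example.com/mockup.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```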

This trend is likely to continue as more developers explore the potential of multimodal AI. The ability to build applications that combine natural language processing with capabilities like computer vision will open up new avenues for entrepreneurship, and we can expect a wide range of businesses to emerge around GPT-4 and related technologies.

Entrepreneurial Opportunities with GPT-4

As we continue to explore the potential of GPT-4, it's clear that there are many opportunities for entrepreneurs. From developing new applications to integrating GPT-4 into existing products, there are countless ways to make money with this technology.

However, one area that is particularly exciting is the use of GPT-4 in industries like medicine and law. These fields demand a high level of accuracy and domain knowledge, which is exactly where GPT-4's premium over the cheaper models is easiest to justify. And as multimodal AI matures, we can expect new businesses that pair this kind of domain expertise with capabilities like computer vision.

What Does the Future Hold for GPT-4?

As we look to the future, it's clear that GPT-4 is going to play a major role in shaping the world of artificial intelligence. While it may not be as revolutionary as some might have hoped, it does offer a significant improvement over its predecessors. With its multimodal ability and increased accuracy, GPT-4 has the potential to open up new avenues for entrepreneurship.

However, there are still limitations to consider. One key area is the lack of internet access and recent training data. GPT-4 cannot search the internet for up-to-date information, and its knowledge cutoff is essentially the same as the previous model's, so it knows little about events from 2022 onwards. That can be a significant limitation.
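The usual workaround, sketched below, is to fetch the recent information yourself and hand it to the model inside the prompt. The URL, helper function, and prompt wording are purely illustrative.

```python
# Minimal retrieval-style workaround for the knowledge cutoff: pull recent text
# from the web and include it in the prompt. The source URL is a placeholder.
import urllib.request
from openai import OpenAI

def fetch_text(url: str, limit: int = 4_000) -> str:
    """Download a page and return a truncated plain-text snippet."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore")[:limit]

client = OpenAI()
context = fetch_text("https://example.com/latest-news")  # hypothetical source

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: What changed this week?"},
    ],
)
print(answer.choices[0].message.content)
```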

Conclusion

In conclusion, GPT-4 offers a range of exciting possibilities for developers and entrepreneurs. The step up from GPT-3.5 is incremental rather than revolutionary, but its multimodal ability and improved accuracy make it a meaningful upgrade.

Whether you're looking to develop new applications or integrate GPT-4 into existing products, there are countless ways to build a business on this technology, and GPT-4 is clearly going to play a major role in shaping the world of artificial intelligence.
