This Is What Happens When Your PC OVERHEATS! 🔥

I recently simulated a pump failure on my custom water-cooled Threadripper machine, and to my surprise the CPU shot past 90 degrees Celsius in just three minutes, even at idle with no load applied. I had expected the system to behave the way the Corsair One does when its fan fails, with a gradual temperature climb before shutting down. Instead, with the coolant no longer carrying heat away, the processor throttled itself all the way down to 550 megahertz, a frequency so low the machine might as well have shut off.
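Thermal throttling of this kind, where the processor steps its clock down as temperature crosses a limit, can be sketched as a simple control loop. This is a minimal, hypothetical simulation; the thresholds, step size, and function names are illustrative, not the actual firmware behavior:

```python
# Minimal sketch of a thermal-throttling control loop.
# Thresholds and step sizes are illustrative, not real firmware values.

THROTTLE_TEMP_C = 90.0   # start reducing clocks above this temperature
MIN_FREQ_MHZ = 550       # floor observed on the Threadripper in the video
MAX_FREQ_MHZ = 3500
STEP_MHZ = 250           # how aggressively we back off per tick

def next_frequency(current_mhz: float, temp_c: float) -> float:
    """Step the clock down when hot, and back up when cool."""
    if temp_c > THROTTLE_TEMP_C:
        return max(MIN_FREQ_MHZ, current_mhz - STEP_MHZ)
    return min(MAX_FREQ_MHZ, current_mhz + STEP_MHZ)

# With a dead pump the coolant stops carrying heat away, so the
# temperature keeps climbing and the clock walks down to the floor.
freq = MAX_FREQ_MHZ
for temp in [70, 85, 92, 95, 97, 99, 100, 100, 100, 100, 100, 100, 100, 100]:
    freq = next_frequency(freq, temp)
print(freq)  # settles at the 550 MHz floor
```

The point of the sketch is that throttling is reactive: the clock only walks down as fast as the controller steps it, which is why the Corsair One spent its first ten minutes at 100 degrees still trying to boost.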

The GPU, on the other hand, stayed around 50 degrees Celsius during the same idle period, so the pump failure was far more dangerous for the CPU than for the graphics card. The takeaway is that for a truly critical water-cooled build, pump redundancy matters: a single failed pump can take the whole loop down in minutes.

To test the Razer Blade Pro 17, I taped over all of its intake areas, while leaving the exhaust open, to simulate a laptop sitting on a blanket or dust gradually choking the intakes. The GPU frequency dropped and the chassis got hotter, but the system remained usable and playable even after 20 minutes of gameplay. The GPU stayed under 80 degrees Celsius and was power-limited rather than temperature-limited, so it was not thermally throttling; it simply was not being fed enough power, which comes down to the notebook's design rather than the hardware itself.
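The distinction between a power limit and a temperature limit can be made concrete with a small helper that interprets GPU telemetry. This is a hypothetical sketch; the function name and the limit values are illustrative placeholders, not real driver API or the laptop's actual caps:

```python
# Hypothetical helper for deciding which limit a GPU is hitting,
# given telemetry readings. All names and limit values here are
# illustrative, not a real driver API.

def limiting_factor(temp_c: float, power_w: float,
                    temp_limit_c: float, power_limit_w: float) -> str:
    """Classify whether a GPU is temperature-limited, power-limited, or neither."""
    if temp_c >= temp_limit_c:
        return "temperature-limited"
    if power_w >= power_limit_w:
        return "power-limited"
    return "unconstrained"

# With intakes taped: warm but below the thermal limit,
# yet pinned at the notebook's power cap.
print(limiting_factor(temp_c=78, power_w=90,
                      temp_limit_c=87, power_limit_w=90))  # power-limited
```

A power-limited GPU runs below its thermal ceiling but cannot draw more board power, so adding cooling would not raise clocks; a temperature-limited GPU would respond to better airflow immediately.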

Blocking the intakes did cost some FPS, and the area right above the keyboard got very hot, but one unexpected side effect was how much quieter the machine became with the intakes taped over. This suggests that while airflow clearly matters for optimal cooling, a well-designed laptop can degrade gracefully rather than failing outright when airflow is restricted.

Finally, I investigated how power draw changes when the system gets hot by blasting a heater into the machine. Power draw immediately dropped from around 330 watts to under 300 watts, because the GPU and CPU throttled themselves: components that are too hot cannot sustain their maximum clocks, and lower clocks mean lower power consumption.

The more interesting test was heating the power supply on its own while keeping the rest of the system cool. In theory a hot power supply becomes less efficient, losing more energy as heat and therefore pulling more wattage from the wall than the system actually needs. That is exactly what happened, but only barely: wall draw rose from about 335 watts to roughly 10 watts more. The difference is real but small, largely because this is a Platinum-rated 1200-watt unit, and at around a third of its rated load it sits near the efficient part of its curve. A less efficient supply would likely show a bigger gap.
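To put the 10-watt difference in perspective: wall draw is DC load divided by efficiency, so a small efficiency drop at a fixed load shows up as only a small wall-draw increase. A quick back-of-the-envelope calculation, assuming the system's DC load stayed constant (the load and efficiency figures below are illustrative, not measured):

```python
# Back-of-the-envelope: wall draw = DC load / efficiency.
# The assumed DC load and efficiency values are illustrative, not measured;
# they are chosen to roughly reproduce the ~335 W and ~+10 W observations.

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC power pulled from the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

DC_LOAD_W = 305.0  # assumed constant system load

cool_wall = wall_draw(DC_LOAD_W, 0.91)    # Platinum-class efficiency when cool
hot_wall = wall_draw(DC_LOAD_W, 0.885)    # slightly worse when the PSU is hot

# A ~2.5-point efficiency drop accounts for roughly the 10 W seen at the wall.
print(round(cool_wall), round(hot_wall), round(hot_wall - cool_wall))
```

The design point here is that a high-efficiency supply compresses the temperature effect: the same heat-induced efficiency loss on an 80% unit would show up as a noticeably larger wall-draw increase.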

These experiments have taught me several valuable lessons. Firstly, GPU cores can tolerate a lot of heat while remaining stable: in both the Corsair One fan-failure test and the Razer Blade Pro 17 intake test, frequencies dropped but the systems stayed perfectly usable, with no glitches or artifacts. That said, running at those temperatures on a consistent basis will still shorten component lifespan.

Secondly, a dead pump is fatal for a water-cooled machine: my custom setup hit dangerous CPU temperatures within three minutes of the pump stopping. I would rather have a fan die, or run in a hot room, than lose a pump. For critical builds, a second pump provides redundancy so that one failure does not take down the whole loop.

Lastly, I was surprised by how little heat affected my power supply: heating the PSU while the system stayed cool added only about 10 watts of wall draw, a difference small enough that it could even be instrument error. This speaks to how efficient modern Platinum-rated supplies are. It is a single sample, so I would not generalize, but the relationship between PSU temperature and power consumption was far weaker than I expected.

In conclusion, these experiments have provided valuable insights into the behavior of custom water-cooled machines under various conditions. By testing different scenarios, I've learned about the importance of pump redundancy, GPU thermal limits, airflow management, and power supply efficiency in maintaining optimal system performance and lifespan.

"WEBVTTKind: captionsLanguage: entoday my good people I want to see what happens when my computer is overheat let's fire things up for science or not science but more like anecdotal observations so my test subjects for this whole thing will be my custom water-cooled thread Rupert machine I love this thing then we have this pre-built Corsair one with hella specs 20 atti and a 9900 k no but I want to include a notebook my razor blade Pro 1728 Emacs Q in here with 9750 h and my other system right behind me which is my usual test temperature configuration with the new graphics card now you might be thinking mitri what a waste of time everybody knows that when systems gets hot frequencies drop performance is compromised fans ramped up but there's a lot more to the story apparently as I've made plenty of small observations throughout my testing let's begin this should be fun right this the new fantex p300 enclosure is a great value airflow focused frame thanks to the all new mesh front panel that is a single piece of metal for highest mesh area and durability the inside is nice and simple plus an exhaust fan is included check out the p 308 down below alright so the first thing i want to simulate with my coarser one is total fan failure and this is an interesting one because the entire system is cooled by the single 140mm exhaust fan up top and there is a fan on the GPU side that cools the PCB but what happens if that fan on the top fails so I fired up I 264 and msi combustor for an extreme system torture test so both the CPU and GPU are completely bombarded by processing power at the hundred percent load and as expected without any active cooling the CPU reached a hundred degrees Celsius on all cores and drop to 3.6 gigahertz so by design the system was throttled to not nuke itself but I notice two things so number one it took almost 23 minutes for the CPU frequency to stabilize around 3.6 because for the first like 10 minutes of the CPU being at the hundred degree 
Celsius it was still trying to boost to like 4.5 gigahertz which is crazy and this is running a default BIOS without any overclocking preset supplied just on XMPP enabled and that's it and number two the graphics card was so much cooler at 88 degrees Celsius add full load as well I mean the frequency was lower and it was the 88 degrees is its temperature limit but it was still functionally perfectly fine without any active cooling aside for the tiny fan that was cooling the PCB and frequency only dropped by 18% which is pretty impressive for a non cooled 20 atti inside a tiny package I did notice a few weird things the smell it almost felt like the heat was in the air and it smelled like something was melting inside the machine the front LEDs in the case started to flash red I think it's an indication that the system was overheating and the entire front and the CPU side was super hot to the touch and I could feel the heat rising from this tiny enclosure there was so much concentration of heat inside of it that I could literally warm up my hands by just hovering above the enclosure and that's kind of crazy hot box hot box I was so excited to plug the fan back in and I'm sure the hardware was too and finally around the 26 minute mark the system BSO deed which took way too long I was expecting that to happen earlier but I'm not sure it was either because of the CPU or the drives both SSD and the hard drive were super super hot and the system could have shut down because of that as well I was expecting it to crash but did not expect it to last 26 minutes nice machine good hardware I then fired up control just to see what would happen in a real world gaming scenario and of course CPU stay that four point seven million Hertz because not all cores are completely loaded which is kind of crazy and the CPU was harder but was still clocking itself to four point seven gigahertz while the GPU obviously down clock just like the lower frequency while maintaining that 88 degrees 
temperature limit so if the fan would die in the gaming session you would notice that by having an FPS hit and also lowering frequencies make sense all right so moving on from Corsair one let's see what would happen if a pump on the custom water cooled machine would fail and this was a complete surprise to me because I was expecting it to behave like the Corsair one does with fan failure but it does not in just three minutes the CPU reached over ninety degrees Celsius which for thread Ripper 2950 x is a little bit too high and the processor down clock's itself to 550 megahertz which what is that frequency even mean you might as well just shut down and this is an idle - without any load applied and I guess that makes sense since the fluid is not taking away any heat but I thought we would have more time before the BSOD the GPU on the other hand remained around 50 degrees Celsius and this is an idle as well but that shows you that four really crucial water-cooled systems perhaps dual pumps is important in case one fails and you don't nuke your system and your CPU and now let's move on to the razor blade Pro 17 and see what would happen with the system if I completely cover and choke all the intake areas to simulate if you let's say place this on a blanket or dust over time kills the fans or covers up all that intake spots so I taped up all the intake and kept the exhaust open and just to my surprise the GPU frequency dropped and the system just got hotter but totally usable and totally playable even after 20 minutes of gameplay the GPU was still under 80 degree Celsius and it was power limited instead of being temperature limited so it was not throttling which is a good thing kind of impressive - but it was just not getting enough power but that is just the notebook design not because of the hardware you of course lose some fps and the middle of the machine got super hot right above the keyboard but you don't really touch there anyway but one interesting observation 
I made was how much quieter the system was with all those intake areas completely taped up and lastly I want to experiment with some power supply stuff and to see what would happen with power draw when the system gets hot and what would happen with the power current and wattage consumption of the power supply when the power supply gets hot so first blasting the heater inside my machine immediately decreased power draw and that is because of throttling so GPU and the CPU immediately were too hot and down clocked themselves and so we went from around 330 watts to under 300 and that makes sense the system cannot push higher clocks therefore it's consuming less power but what would happen if the power supply gets hot independently while the system remains cool in theory the power supply I should become less efficient when it is hot because more of that energy is lost through heat so it should be pulling more wattage from the wall then it is actually needed for the system actually we've done a full power supply tutorial on how to choose the right power supply for you check it out over here but it is exactly what happened under the cool scenario at load we are pooling around 335 Watts from the wall while when the power supply is blasted with heat but the system remains cool we're pooling about 10 watts more from the wall so it is higher but the difference isn't very significant and that is because the power supply is extremely efficient this thing is platinum efficiency and it's 1200 watts so you have the most efficiency in the curve that I think in the middle and we're about like a third way through so we're not even at the most efficient point of the curve but if the power supply was less efficient we would be pulling more watts from the wall than is required for the system so all these heating experiments have taught me a few things number one the GPU core can take a lot of heat yet still be powerful the example with Corsair one and my razor blade pro 17 of course the 
frequency is dropped but we still maintain a perfectly operational non glitchy known artifact II hardware and I guess it goes without saying that hitting those high temperatures on the consistent basis will definitely diminish the lifespan of your components and your hardware number two a dead pump is totally fatal for a water-cooled machine in three minutes we reached 100 degrees Celsius that is scary I rather have a fan dying here or be in a hot environment than to have a pump fail and for number three I did not realize how little impact heat had on my power supply I mean this is one sample size so this isn't like a generalization but I was expecting you for it to draw more power from the wall as the power spike got hot because I was blasting it for 25 minutes and they made no difference I mean 10 watts of a difference is not significant and could be instrument error but regardless keep your systems cool I hope you enjoyed this little experimental piece time to open some windows and let this place cool thanks for watching Hammond mitri have a good one people stay safe out there that Jacktoday my good people I want to see what happens when my computer is overheat let's fire things up for science or not science but more like anecdotal observations so my test subjects for this whole thing will be my custom water-cooled thread Rupert machine I love this thing then we have this pre-built Corsair one with hella specs 20 atti and a 9900 k no but I want to include a notebook my razor blade Pro 1728 Emacs Q in here with 9750 h and my other system right behind me which is my usual test temperature configuration with the new graphics card now you might be thinking mitri what a waste of time everybody knows that when systems gets hot frequencies drop performance is compromised fans ramped up but there's a lot more to the story apparently as I've made plenty of small observations throughout my testing let's begin this should be fun right this the new fantex p300 enclosure is 
a great value airflow focused frame thanks to the all new mesh front panel that is a single piece of metal for highest mesh area and durability the inside is nice and simple plus an exhaust fan is included check out the p 308 down below alright so the first thing i want to simulate with my coarser one is total fan failure and this is an interesting one because the entire system is cooled by the single 140mm exhaust fan up top and there is a fan on the GPU side that cools the PCB but what happens if that fan on the top fails so I fired up I 264 and msi combustor for an extreme system torture test so both the CPU and GPU are completely bombarded by processing power at the hundred percent load and as expected without any active cooling the CPU reached a hundred degrees Celsius on all cores and drop to 3.6 gigahertz so by design the system was throttled to not nuke itself but I notice two things so number one it took almost 23 minutes for the CPU frequency to stabilize around 3.6 because for the first like 10 minutes of the CPU being at the hundred degree Celsius it was still trying to boost to like 4.5 gigahertz which is crazy and this is running a default BIOS without any overclocking preset supplied just on XMPP enabled and that's it and number two the graphics card was so much cooler at 88 degrees Celsius add full load as well I mean the frequency was lower and it was the 88 degrees is its temperature limit but it was still functionally perfectly fine without any active cooling aside for the tiny fan that was cooling the PCB and frequency only dropped by 18% which is pretty impressive for a non cooled 20 atti inside a tiny package I did notice a few weird things the smell it almost felt like the heat was in the air and it smelled like something was melting inside the machine the front LEDs in the case started to flash red I think it's an indication that the system was overheating and the entire front and the CPU side was super hot to the touch and I could feel the 
heat rising from this tiny enclosure there was so much concentration of heat inside of it that I could literally warm up my hands by just hovering above the enclosure and that's kind of crazy hot box hot box I was so excited to plug the fan back in and I'm sure the hardware was too and finally around the 26 minute mark the system BSO deed which took way too long I was expecting that to happen earlier but I'm not sure it was either because of the CPU or the drives both SSD and the hard drive were super super hot and the system could have shut down because of that as well I was expecting it to crash but did not expect it to last 26 minutes nice machine good hardware I then fired up control just to see what would happen in a real world gaming scenario and of course CPU stay that four point seven million Hertz because not all cores are completely loaded which is kind of crazy and the CPU was harder but was still clocking itself to four point seven gigahertz while the GPU obviously down clock just like the lower frequency while maintaining that 88 degrees temperature limit so if the fan would die in the gaming session you would notice that by having an FPS hit and also lowering frequencies make sense all right so moving on from Corsair one let's see what would happen if a pump on the custom water cooled machine would fail and this was a complete surprise to me because I was expecting it to behave like the Corsair one does with fan failure but it does not in just three minutes the CPU reached over ninety degrees Celsius which for thread Ripper 2950 x is a little bit too high and the processor down clock's itself to 550 megahertz which what is that frequency even mean you might as well just shut down and this is an idle - without any load applied and I guess that makes sense since the fluid is not taking away any heat but I thought we would have more time before the BSOD the GPU on the other hand remained around 50 degrees Celsius and this is an idle as well but that 
shows you that four really crucial water-cooled systems perhaps dual pumps is important in case one fails and you don't nuke your system and your CPU and now let's move on to the razor blade Pro 17 and see what would happen with the system if I completely cover and choke all the intake areas to simulate if you let's say place this on a blanket or dust over time kills the fans or covers up all that intake spots so I taped up all the intake and kept the exhaust open and just to my surprise the GPU frequency dropped and the system just got hotter but totally usable and totally playable even after 20 minutes of gameplay the GPU was still under 80 degree Celsius and it was power limited instead of being temperature limited so it was not throttling which is a good thing kind of impressive - but it was just not getting enough power but that is just the notebook design not because of the hardware you of course lose some fps and the middle of the machine got super hot right above the keyboard but you don't really touch there anyway but one interesting observation I made was how much quieter the system was with all those intake areas completely taped up and lastly I want to experiment with some power supply stuff and to see what would happen with power draw when the system gets hot and what would happen with the power current and wattage consumption of the power supply when the power supply gets hot so first blasting the heater inside my machine immediately decreased power draw and that is because of throttling so GPU and the CPU immediately were too hot and down clocked themselves and so we went from around 330 watts to under 300 and that makes sense the system cannot push higher clocks therefore it's consuming less power but what would happen if the power supply gets hot independently while the system remains cool in theory the power supply I should become less efficient when it is hot because more of that energy is lost through heat so it should be pulling more wattage 
from the wall then it is actually needed for the system actually we've done a full power supply tutorial on how to choose the right power supply for you check it out over here but it is exactly what happened under the cool scenario at load we are pooling around 335 Watts from the wall while when the power supply is blasted with heat but the system remains cool we're pooling about 10 watts more from the wall so it is higher but the difference isn't very significant and that is because the power supply is extremely efficient this thing is platinum efficiency and it's 1200 watts so you have the most efficiency in the curve that I think in the middle and we're about like a third way through so we're not even at the most efficient point of the curve but if the power supply was less efficient we would be pulling more watts from the wall than is required for the system so all these heating experiments have taught me a few things number one the GPU core can take a lot of heat yet still be powerful the example with Corsair one and my razor blade pro 17 of course the frequency is dropped but we still maintain a perfectly operational non glitchy known artifact II hardware and I guess it goes without saying that hitting those high temperatures on the consistent basis will definitely diminish the lifespan of your components and your hardware number two a dead pump is totally fatal for a water-cooled machine in three minutes we reached 100 degrees Celsius that is scary I rather have a fan dying here or be in a hot environment than to have a pump fail and for number three I did not realize how little impact heat had on my power supply I mean this is one sample size so this isn't like a generalization but I was expecting you for it to draw more power from the wall as the power spike got hot because I was blasting it for 25 minutes and they made no difference I mean 10 watts of a difference is not significant and could be instrument error but regardless keep your systems cool I 
hope you enjoyed this little experimental piece time to open some windows and let this place cool thanks for watching Hammond mitri have a good one people stay safe out there that Jack\n"