AMD Ryzen vs. Intel System Latency Benchmark - Best Gaming CPUs for Fortnite, CS:GO, etc.
**The Quest for System Latency: A CPU Showdown**
In our latest round of testing, we set out to explore system latency: specifically, whether there is a measurable difference between comparable CPU models that could affect the overall gaming experience. Our methodology covered a range of scenarios, from a GPU-bound title (Sniper Elite 4) to competitive games like CS:GO, Fortnite, Overwatch, and Rocket League, with a focus on understanding how different processors affect end-to-end input latency.
**The GPU-Driven Test**
As a first step, we tested a game that is heavily reliant on the GPU: Sniper Elite 4. This title is known for its robust optimization, making it an ideal candidate for evaluating the relationship between CPU choice and system latency when the GPU is the bottleneck. We used 100% resolution scaling at 1080p and the high preset, which left the CPU handling only a small portion of the workload and ensured that we were truly GPU-bound. The results showed minimal differences between the tested CPUs, with variation in latency emerging primarily from run-to-run differences in FPS.
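To make the GPU-bound reasoning concrete, end-to-end latency can be thought of as a rough sum of chain components: input polling, the render pipeline (some number of frame times), and the display. The sketch below is an illustrative model only, not our measurement pipeline, and every component value (polling interval, pipeline depth, display delay) is an assumption for demonstration:

```python
# Rough additive model of end-to-end (click-to-photon) latency.
# Component values are illustrative assumptions, not measurements.

def estimated_latency_ms(fps: float,
                         usb_poll_ms: float = 1.0,      # 1000 Hz mouse polling
                         display_ms: float = 4.2,       # ~1 refresh at 240 Hz
                         pipeline_frames: float = 1.5) -> float:
    """Approximate latency as input polling + render pipeline + display."""
    frame_time_ms = 1000.0 / fps
    return usb_poll_ms + pipeline_frames * frame_time_ms + display_ms

# GPU-bound: every CPU renders at roughly the same fps, so totals converge.
print(round(estimated_latency_ms(fps=120), 1))   # ~17.7 ms regardless of CPU
# CPU-bound: a faster CPU raises fps and shrinks the frame-time term.
print(round(estimated_latency_ms(fps=350), 1))   # ~9.5 ms
```

Under this kind of model, once the GPU pins the frame rate, the frame-time term dominates and CPU choice barely moves the total, which matches what we observed.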
**The System Latency Benchmark**
For our system latency benchmarks, we used competitive games that keep the load on the CPU rather than the GPU. Our test setup pitted Intel's Core i5-10600K against AMD's comparably priced Ryzen 7 3700X, with the Core i3-10100 facing the Ryzen 3 3300X at the budget end. All devices were tested at 1080p across multiple games and load levels. The results indicated that while there were minor variations in system latency between the different CPUs, those differences were largely negligible.
**A Closer Look at Competitive Titles**
One area where we observed more pronounced differences was in competitive titles like Fortnite. We set up our test environment alone in the Battle Lab mode, which provides a controlled, repeatable stand-in for online play. Under these conditions, the i3-10100 trailed the i5-10600K and R7 3700X by roughly 3 ms on average. These findings suggest that in games with high CPU load, such as Fortnite, the gap between budget and mid-range processors can become more pronounced.
**The Limitations of Our Testing Methodology**
While our testing methodology provided valuable insights into the performance differences between various CPUs, there are limitations to consider. In particular, fluctuations in system latency largely tracked variations in FPS, which makes it difficult to attribute the remaining differences to the CPUs themselves. Furthermore, the 100% GPU load scenario (Sniper Elite 4) was not effective at revealing differences between the CPUs, with most results falling within run-to-run variance.
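One way to quantify this limitation is to check how strongly per-pass latency tracks the frame rate of that pass; a correlation near -1 would confirm that the spread is mostly an FPS story rather than a CPU story. A minimal sketch, with hypothetical per-pass values standing in for real logs:

```python
import numpy as np

# Hypothetical per-pass samples (not measured data).
fps = np.array([350.0, 342.0, 360.0, 338.0, 355.0, 347.0, 333.0, 352.0])
latency_ms = np.array([21.8, 22.9, 20.4, 23.1, 21.2, 22.4, 23.6, 21.5])

# Pearson correlation: strongly negative means FPS variation, not the
# CPU itself, explains most of the latency spread between passes.
r = np.corrcoef(fps, latency_ms)[0, 1]
print(f"FPS vs. latency correlation: {r:.2f}")
```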
**The Future of System Latency Testing**
As we continue to refine and expand our testing methodology, there are opportunities for improvement. We plan to explore scenarios that can reveal subtler differences in system latency, such as games with heavier, more distributed CPU load (Total War: Three Kingdoms, for example), or loaded multiplayer sessions once they can be tested reproducibly. By pushing the boundaries of what is practical to test, we hope to uncover more nuanced insights into the latency characteristics of different CPUs.
**Conclusion**
In conclusion, our testing provided valuable insight into the relationship between CPU choice and total system latency. Significant differences were not observed in most cases, but there are scenarios, chiefly CPU-bound competitive games on budget parts, where variations become more pronounced. As we refine our methodology and explore new scenarios, we aim to give readers a deeper understanding of the factors that actually drive system latency.
**What's Next?**
We would love to hear from you! Are there any games that you think might reveal differences between competing CPUs? Let us know in the comments section below. We will keep pushing the boundaries of what is practical to test and strive to provide our readers with the most informative and engaging content we can.
**Support Us**
If you enjoyed this article and would like to support our ongoing efforts, please consider visiting patreon.com/gamersnexus or store.gamersnexus.net. Your contributions help us afford the equipment and resources necessary for testing and creating high-quality content for our community.
"WEBVTTKind: captionsLanguage: enat the request of the community today we're looking at total system latency so what some people call input latency for amd versus intel this is the end-to-end latency we're looking at the response time in milliseconds from amd cpus to intel cpus at both the sort of higher end gaming range and the lower end gaming range and in this testing what we're going to do is compare competitive titles like cs go rocket league and things like that rather than our usual suite which is more explicitly intensive for the cpus to run this is the first time we've ever done content at this scope for latency testing typically we do maybe five to ten test passes this time we purchased a much higher fps camera and we ran 80 to 90 test passes per cpu per game sorry patrick he had to run them all manually for the last week and a half or so before that this video is brought to you by squarespace squarespace is what we've been using for years to manage our own gamer's nexus store and we've been incredibly happy with the choice squarespace makes e-commerce easy for those interested in starting stores but it also has powerful tools to build all types of websites photo galleries for photographers resume and portfolio sites and small business sites are all easily done through squarespace having built a lot of client websites the old way before running gn full-time we can easily recommend squarespace as a powerful fast solution go to squarespace.com gamersnexus to get 10 off your first purchase with squarespace and now because of the comments on the previous video we're looking into another request of the community some of the comments are saying well okay but what about total system latency or the input latency of one cpu versus the other is there an advantage there is it placebo or am i actually feeling a real difference in the response of clicking the mouse button and seeing an action on the screen on one cpu versus another this is an interesting test anyway but specifically there were some theories that we saw where some of our commenters suggested that maybe because of the way the io die is configured in ryzen's architecture how there is an i o die on 2000 3000 series cpus and so forth not the mobile ones there might be a difference in how the input is processed and that's definitely really interesting so we're looking into that frame rate or fps is the number most people can relate to in games but there's also frame times one rate is derived from time rate is an abstraction from the base metric of time measured in milliseconds then of course there's input in milliseconds as well so frame rate doesn't tell the whole story frame time also doesn't tell the whole story we need all three of these things to really put the picture together so we're starting off with a head-to-head between the 10 600k at about 300 and the comparable 300 part from amd which is the r7 3700x we're using the same everything else in the system from our cpu test benches and reviews the only difference is the i310 100 will be running slower memory we'll talk about that more in a moment and why we did that these tests are strictly cpu versus cpu comparisons so the test scenarios aren't intended to directly simulate gameplay if we add latency testing to cpu review or game optimization guides we'll develop a new methodology since latency testing requires timing response to an input it's really difficult to use baked in benchmarks or replays for any of these tests and there's also a lot of manual work involved in 
We're starting off with a head-to-head between the 10600K, at about $300, and the comparable $300 part from AMD, which is the R7 3700X. We're using the same everything else in the system from our CPU test benches and reviews; the only difference is that the i3-10100 will be running slower memory, and we'll talk more about why in a moment. These tests are strictly CPU-versus-CPU comparisons, so the test scenarios aren't intended to directly simulate gameplay. If we add latency testing to CPU reviews or game optimization guides, we'll develop a new methodology. Since latency testing requires timing the response to an input, it's really difficult to use baked-in benchmarks or replays for any of these tests, and there's also a lot of manual work involved in counting the frames one by one from a high-speed, high-FPS camera against a high-refresh monitor.

Before we start: we are aware that capping frame rate may reduce latency in some instances, even though that's not the way things should logically work. We didn't cap frame rate for any of these games except Overwatch, where it was mostly unavoidable; we'll talk about that later. In Rocket League and CS:GO, we uncapped the frame rate and ran with the highest frame rate that came out of it. V-Sync and G-Sync were also disabled, and we have some buffering tests in here for Overwatch just because we were personally curious. All tests were done at 1080p with 100% resolution scaling on a 1080p 240Hz monitor, with a 1000fps camera pointed at the screen. Patrick then ran each test 80 to 90 times per CPU per game (it cost a lot of time) and counted those frames to get the results. It's similar to what we do for Stadia testing, except even more accurate, as we got to refine things a bit.

For both systems, the mouse, the input device being measured, was plugged into the USB port designated for BIOS flashing in an effort to minimize latency; that means it's going through the CPU, not the chipset. On the Gigabyte X570 Master, this port is explicitly described as integrated into the CPU rather than the chipset. Three of the CPUs were tested with our standard kit of G.Skill Trident Z 3200MHz CL14 memory, including the additional timing tuning we do, while the i3-10100 was tested with memory that's technically 3000MHz CL15, downclocked to 2666 with the timings set to where they make sense based on our previous reviews. That's because the i3 is almost always going to be paired with a motherboard incapable of running memory faster than 2666MHz, and hence we set it to that speed. Measurements were taken using an LED we previously soldered into a mouse, shown before in our Stadia benchmarks, but we've upgraded the camera significantly to a 1000fps unit, so we can now see down to the millisecond, aside from the monitor's 240Hz limitations (that, by the way, was also an upgrade).
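Mechanically, the measurement reduces to counting camera frames between the LED lighting on the mouse click and the first visible change on screen; at 1000fps, each camera frame is one millisecond. A minimal sketch of that arithmetic, with hypothetical frame indices:

```python
CAMERA_FPS = 1000  # one camera frame = 1 ms of real time

def end_to_end_latency_ms(led_on_frame: int, first_change_frame: int,
                          camera_fps: int = CAMERA_FPS) -> float:
    """Latency from mouse click (LED lighting) to first on-screen change."""
    return (first_change_frame - led_on_frame) * 1000.0 / camera_fps

# Hypothetical pass: LED lights at camera frame 1412, the scope mask
# starts disappearing at frame 1431 -> 19 ms total system latency.
print(end_to_end_latency_ms(1412, 1431))  # 19.0
```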
We explicitly used competitive games here. Although we could use more intensive games like Total War or Red Dead Redemption 2, we thought most people who care about end-to-end system latency would be more in the camp of CS:GO, Fortnite, Overwatch, or Rocket League, and because we did a minimum of 80 test passes for each CPU for each game, we had to limit how many games we were adding to the benchmarks somewhere. We added Sniper Elite 4 just because we thought it'd make an interesting study, since the game is so well optimized and it's more of a GPU load. CPU load is going to be a challenge for some of these, but we'll talk about that as we go.

First, we have Counter-Strike: Global Offensive, a competitive shooter with a community that cares deeply about latency and things like tick rate. For this one, we have 80 test passes per CPU per game configuration. We did two single-player tests on de_dust2: one without any bots at all, and one with harmless bots enabled. We used the in-game console to set the frame rate cap to 999 for all CS:GO tests. We stuck fairly close to the original default settings, but set global shadow quality to very low in an effort to remove one of the most obvious GPU-dependent factors; also, because the default settings change based on the hardware at play, we fixed those in place for all devices. Our flag for measuring latency in this test was aiming down the scope of the AWP, firing, and checking for the first frame where the mask around the scope began to disappear. We may pick a different flag in the future: testing with the in-game FPS counter shows that the frame rate increases significantly when the scope is used, since only a small area of the screen is visible.

Let's start with a histogram for the 10600K; this is what the data looks like. Most of the results clustered around the 22.5 to 25 millisecond range, with 16 entries there. The next highest clusters were 12.5 to 15 milliseconds and 17.5 to 20 milliseconds. We consider this range fairly wide; it's a more variable game than most, even without bots, and so we see an 18-millisecond top-to-bottom data range, which is a pretty wide net to cast. This is the same for the 3700X, which we'll look at in a moment. Overall, the average worked out to about 20 milliseconds for the 10600K. Here's the 3700X histogram: the 3700X is overall about the same. It has a few more tests in the 15 to 18 millisecond range, but also more in the 18 to 23 range. The end result is an average that's right on top of the 10600K and is within error and variance.

Let's look at the comparative data for a fuller understanding. Without bots at all, average latency on the 3700X system was 19.16 milliseconds with a fairly high standard deviation of 4.6, while Intel's was a 19.99 average for the 10600K with a slightly higher deviation at 4.97. Keep in mind that when we say the word "latency," it means total system latency, not any one aspect of the chain. This is all well within error and is not a difference we can call in either direction, especially considering that the monitor has its own limitations. The lower-end CPUs were fairly close to their more expensive counterparts in latency for this game: the 3300X averaged 20.3 milliseconds and the 10100 averaged 20.8 milliseconds. There's almost no meaningful difference between any of these, even with the lower-frequency RAM on the 10100.
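The "within error" call can be made explicit. With means under a millisecond apart and standard deviations near 5 ms, the difference between averages is on the order of its own standard error. A small sketch using the summary statistics quoted above (we're treating the published mean, deviation, and pass count as inputs, since the raw per-pass data isn't reproduced here):

```python
# CS:GO (no bots): mean latency (ms), standard deviation (ms), passes.
results = {
    "R7 3700X":  (19.16, 4.60, 80),
    "i5-10600K": (19.99, 4.97, 80),
}

(m1, s1, n1), (m2, s2, n2) = results.values()
delta = abs(m2 - m1)
# Standard error of the difference between two independent means.
std_err = (s1**2 / n1 + s2**2 / n2) ** 0.5
print(f"delta = {delta:.2f} ms, standard error = {std_err:.2f} ms")
# The 0.83 ms delta is only ~1.1x its standard error, nowhere near a
# clear separation, consistent with calling this a tie.
```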
Here's the max and min chart against the 80 to 90 test passes. Results were more variable in Counter-Strike than in some of the other titles we tested, with results for both CPUs ranging between about 11 milliseconds at the low end and 29 at the high end. The minimum values ran 11.4 milliseconds on the 3700X and 11.18 milliseconds on the 10600K, which is within error. The max values were 27.6 and 28.8, which is again error or variance. For the lower-end CPUs, we saw a worst case of 31.1 on the 10100 and 29.3 on the 3300X.

Knowing that we were going to do a test with bots made consistent testing much more difficult, as we had to choose a test location where the bots couldn't walk directly in front of our scope; a character model blocking the whole screen, or a large part of it, raises the frame rate and therefore lowers the latency in a considerable fashion. Practice servers are hosted locally, including the AI players that populate the server, which explains why frame rate averages dipped dramatically, down to a range of 337 to 352 FPS average for the 3700X and 10600K. The end result was an average end-to-end system latency of 22.37 milliseconds on the 3700X and 22.45 on the 10600K, which is within error and functionally identical. The 10100 ran 23.73 milliseconds at 2666MHz, with the 3300X at 22.62 milliseconds. These are all clustered tightly together, so for a competitive game that's less graphics-intensive, you do benefit from the higher average frame rate that enables these response times. There's no magical architecture advantage yet that seems to be plotting one higher than the other, just the anticipated association of higher FPS with lower latency, or variance once they're equal.

In this chart of maximums and minimums, we see the limitations of data collection on devices that are this close together. The 10100 technically had the lowest maximum value, but with a standard deviation of about 4.7 to 5.3 on these CPUs, that's again going to fall within run-to-run variation. The 3700X had a 34-millisecond max and 13.7-millisecond minimum, the 10600K plotted 32.7 and 14.7, and the 3300X plotted 33.7 and 13.2. Overall, we're really not able to write some big story about any of these numbers.

Here's a histogram of the results for the 10600K and 3700X. They're almost identical in behavior, with a few differences; the bucket size is three milliseconds here. At 13 to 16, the 3700X had three of the 80 entries against the 10600K's eight entries, so 10% or so of the 10600K's results were within this range. At 16 to 19 milliseconds they were tied; at 19 to 22, the 3700X had 23 entries to the 14 of the 10600K, and so on. There were no major deviations between the buckets. If you're curious about the 3300X and the 10100, it's similar: the 3300X had more entries in the 13 to 16 range, at 10 to 3, but after that they mostly leveled out.
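For reference, the histograms here are just fixed-width buckets over the per-pass latencies. A sketch of the bucketing, with hypothetical sample values and the 3 ms bucket size used above:

```python
from collections import Counter

def bucketize(latencies_ms, bucket_ms=3.0, start=13.0):
    """Count per-pass results into fixed-width buckets: 13-16, 16-19, ..."""
    counts = Counter()
    for value in latencies_ms:
        index = int((value - start) // bucket_ms)
        low = start + index * bucket_ms
        counts[(low, low + bucket_ms)] += 1
    return dict(sorted(counts.items()))

# Hypothetical per-pass latencies for one CPU.
samples = [14.2, 15.9, 17.3, 18.8, 19.4, 20.1, 21.7, 22.5, 25.0, 16.4]
print(bucketize(samples))
# {(13.0, 16.0): 2, (16.0, 19.0): 3, (19.0, 22.0): 3,
#  (22.0, 25.0): 1, (25.0, 28.0): 1}
```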
Ultimately, the higher-FPS part is producing the lower latency number, which should not be news to anybody. Here's a histogram of the 3700X and 10600K with a bucket size of two. The 10600K had more entries between 12 and 14 milliseconds and more entries in 14 to 16, while the 3700X had more entries in 16 to 18, then more in 18 to 20. Overall, the 10600K objectively plotted lower in total system latency, but again, the value of that is entering subjective voodoo-magic FPS-gamer territory, where numbers this low make it questionable how much of the difference you can actually notice. The 3300X versus 10100 histogram shows what we'd expect given the previous results: the 3300X has more 12 to 14 millisecond results, far more 14 to 16 millisecond results at 25 versus 5, and more 16 to 18 millisecond results at 49 to 36. The 10100 brings up the slower end of the scale, offering more entries between 18 and 22 milliseconds. This is repeatable and is a real difference in a test lab such as ours, not necessarily just in the games, and that's at the very least what we can take away in an objective sense.

Fortnite is next. We used the DirectX 12 version of Fortnite at the medium preset, 1080p, using 100% resolution scaling. Multithreaded rendering was enabled, V-Sync was disabled, and the in-game frame rate was set to unlimited. We entered the Battle Lab alone, under controlled conditions, to do our testing. These results are for comparison between CPUs, so adding in players muddies the data to the point that it's actually useless. We could have tested with players anyway, just not told anyone how useless the resulting data is, and avoided people complaining about realistic test cases, but at the end of the day, if you want real data that's actually useful, you come to us for it, so we did it properly. We're assuming an equivalent baseline performance level; in other words, under equivalent load conditions in a match, the delta should be about the same. Because we can't recreate 100% equal loads in a multiplayer game like this one, we have to test it this way to retain accuracy and recreate the conditions for each test.

Here's the average latency chart. The i5-10600K ran an average, against 90 passes, of 15.3 milliseconds, with standard deviation at 1.3. The R7 3700X plotted at 15.9 and 1.7, so these two are once again functionally the same. The i3-10100 at 2666MHz actually produces a somewhat wide difference this time, at 18.2 milliseconds average end-to-end latency. This might matter to a literal professional who's making money on the game, where an extra few hundred bucks for a CPU pays for a play that executes five milliseconds faster, but it's unlikely to be too noticeable to most players. This will compound with load in theory, and increasing CPU load in the game could stand to position the 10600K, and potentially the 3700X, in more significantly advantaged positions against the lower-end parts. On the max/min chart, the hierarchy remains the same: the 10600K has the lowest maximum total system latency at 18.2 milliseconds, followed by the 3700X, 3300X, and 10100.

Overwatch is next. Overwatch's in-game FPS slider maxes out at 300, and it seems like even modifying the config files can only increase that limit to 400; we decided not to go that far. These tests are unique in that both CPUs were running right up against the FPS limit and were engine-limited, which makes them interesting for another reason: we can observe performance when constrained by an external force. We entered the training grounds as McCree for testing, rather than trying to wrangle a team of bots, and used the high preset, but disabled both triple buffering and reduced buffering to start with. We then, much to the chagrin of Patrick, who was manually running all these tests, did an identical round of tests with reduced buffering enabled. Because of the limit, HWiNFO showed a low and constant GPU core load during our pre-testing, but high usage across multiple CPU cores. All four CPUs easily achieved a 300FPS average with these settings, although the 10600K scored the highest in 1% and 0.1% lows and the 10100 scored the lowest, which echoes the performance deltas that we saw in other uncapped tests.
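For context on the 1% and 0.1% lows mentioned here: these are commonly derived from frame times by averaging the slowest 1% (or 0.1%) of frames and converting back to FPS. Exact definitions vary by outlet, so treat this as one common formulation rather than GN's exact pipeline, with made-up frame times:

```python
def percentile_low_fps(frame_times_ms, fraction):
    """Average the slowest `fraction` of frames and report it as fps."""
    worst_first = sorted(frame_times_ms, reverse=True)
    count = max(1, int(len(worst_first) * fraction))
    avg_worst_ms = sum(worst_first[:count]) / count
    return 1000.0 / avg_worst_ms

# Hypothetical capture: mostly ~3.3 ms frames with a few slow spikes.
frames = [3.3] * 990 + [6.0] * 9 + [12.0]
print(round(percentile_low_fps(frames, 0.01), 1))   # 1% low: ~151.5 fps
print(round(percentile_low_fps(frames, 0.001), 1))  # 0.1% low: ~83.3 fps
```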
Here's the latency chart. The 10600K is the only CPU that really differentiated itself in latency, with an average of 19.33 milliseconds total system latency, but it had a relatively high standard deviation for this test, as did they all, which indicates that this may be within run-to-run variance. The 3700X ran an average total system latency of 20.7 milliseconds, about tied with the 10100 and 3300X. Ultimately, once again, these are all mostly the same. Although the 10600K had a technical lead, it's a 1.4-millisecond difference in averages against a standard deviation of 5 milliseconds counted across 80 passes, so we feel comfortable calling these the same from the player's standpoint. This histogram shows the 10600K plotting more 10 to 14 millisecond results, with the two CPUs mostly even between 14 and 18. The R7 3700X has more 24 to 26 millisecond and 28 to 30 millisecond spikes, which caused its lower rank. Let's move on. This average latency chart is with reduced buffering enabled, which didn't significantly affect performance; we'd expect this setting to have more of an impact if the game were running at a lower frame rate.

Rocket League is next. This one has an in-game FPS cap of 250, but it can be circumvented by modifying config files, allowing us to log FPS averages well in excess of 700, with correspondingly low system latency. We used the High Quality setting for render quality and the Quality setting for render detail, then controlled them for each device. Logging with HWiNFO prior to testing showed high usage on just one or two CPU cores, with a high GPU core load percentage that still never entered the 90s. Averages for the four CPUs were close together, but the variation between results was low enough that the comparison is still possible. The 3700X performed best by a thin margin at 10.19 milliseconds average, followed closely by the 10600K at 10.59 milliseconds and the 3300X at 10.63, with the i3-10100 at the bottom at an average latency of 11.09. The maximum and minimum latency charts keep the results closely packed, with the 10600K and 3300X tied for having the best worst results, so to speak, although the differences at this point are again basically irrelevant. Here's a histogram of the 10600K and 3700X, with 90 results logged for each. The 3700X had about two times as many entries at eight milliseconds as the 10600K did, and a few more entries at nine, while the 10600K took over after that; the slowest result of the two belongs to the 3700X.

Finally, we threw Sniper Elite 4 into the mix as well. This has long been one of the best-developed titles from a GPU optimization standpoint, as tested in our GPU review suite, so we wanted to test it here too. In this one, we're actually GPU-bound, with DX12 asynchronous compute, which is why we used 100% resolution scaling at 1080p and the high preset. We're within variance here, with differences primarily emerging contingent upon the FPS for each test pass. Because this is bound by the GPU, there's a bit more fluctuation in FPS than in a CPU bind, and so we end up with less reliable data for a CPU total system latency benchmark, but interesting data nonetheless. It does show you where it stops mattering quite as much when you're choosing a CPU, and that's with 100% GPU load. These numbers aren't different in a way that really matters, nor is the difference one we can present as meaningful; it's all just within run-to-run variation, especially at 100% GPU load.

So that's it, then, for the games we've tested. So far, we're not really seeing any meaningful difference between two similarly priced parts. The 10100 was the most consistently distant from its competitor, the 3300X, but the 10600K and the 3700X were basically right next to each other and indistinguishable in nearly every test. Now, there are a few cases where this might start to matter more that we need to consider from a testing methodology standpoint. First of all, if we were testing a game like Total War: Three Kingdoms, where there's significantly more load on the CPU and it's more distributed across the CPU, there might be a scenario where the input latency, as we'll call it (it's not exactly the correct term), the end-to-end latency, could be more disparate between the tested devices.
But again, hopefully the viewership here respects the amount of time required to do even what we've done now, and the most interest we saw was in those competitive titles, just because it makes sense. Speaking as someone who used to play games like Source and other FPS games, those are often the people who are the first to blame anything in the system other than themselves for a death in the game, so it made sense to test those for the response of the mouse versus the outcome. We could test more CPU-intensive games and maybe see different results, but we've got to cap it somewhere, and this was it for the starter.

Another place you might see the differences that exist become more exaggerated would be something like Fortnite with all the players loaded in, in a more intense scenario. But because that's not reproducible, and because you get network interference in there, where now you have a whole different link in the chain to consider for your latency, it's just not something we could realistically test and produce data we're confident in as being actually different. We played around with it, but it was too unpredictable, so in the spirit of capturing data which is actually meaningful, we dropped it. What we would expect is that you'd be able to extrapolate the differences seen in Fortnite's Battle Lab and maybe expand them a little bit in multiplayer, so you'd probably end up with about the same product hierarchy, but maybe an extra difference somewhere in terms of milliseconds. That's not something we can realistically test today; maybe we'll be able to at some point in the future.

Overall, then: not a big difference between the devices for end-to-end system latency, though there may absolutely be a difference somewhere. A lot of the time, what you're probably feeling is that if you upgraded from an older CPU or system to a newer one, you're experiencing the total system difference, not just the CPU-to-CPU difference. But if there is a game where you think there might be a more visible or tangible difference between one CPU and the next, feel free to let us know below what games you think those might be, and we'll look into it and see if we can add it. It was something like 10 to 14 days of work for Patrick to do this one, so you'll have to give us some time if we do this again.

That's it for this one. Thanks for watching. Subscribe for more. You can go to patreon.com/gamersnexus or store.gamersnexus.net if you want to help us out directly; those are the ways that we're able to afford the equipment purchases that we make for our testing content. We'll see you all next time.