**The Challenges of Multitasking Benchmarks**
When it comes to benchmarking performance, one aspect that often gets overlooked is multitasking. While some benchmarks focus on single-tasking scenarios, others attempt to simulate the real-world experience of running multiple applications at once. However, as we learned from our recent experiment with Patrick, these tests can be surprisingly difficult to set up and execute accurately.
We wanted to log the framerate of both the video and the game, since many multitasking complaints aren't just about the game: if the video plays back badly, with skipping audio or dropped frames, that's a failure too. Logging both streams made the benchmark exponentially more difficult. We also had to account for conflicts between the capture tools themselves, such as Fraps and PresentMon, which can interfere with each other and make it challenging to measure performance accurately.
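To illustrate one way a capture like this might be scripted, here's a minimal sketch that launches PresentMon from Python to log per-frame data for a single process to CSV. This is our illustrative example, not the exact setup we used: the flag names shown (`-process_name`, `-output_file`, `-timed`) match older PresentMon releases but vary between versions, so verify them with `PresentMon --help` on your build, and the game executable name is just a placeholder.

```python
import subprocess
from pathlib import Path

# Assumptions: PresentMon.exe is on PATH, and these single-dash flags
# match your PresentMon release. Flag names differ between versions;
# verify with `PresentMon --help` before relying on this.
PRESENTMON = "PresentMon.exe"
TARGET_EXE = "game.exe"          # placeholder process name
OUTPUT_CSV = Path("frametimes.csv")
CAPTURE_SECONDS = 120

def capture_frametimes():
    """Run PresentMon for a fixed window and write per-frame data to CSV."""
    cmd = [
        PRESENTMON,
        "-process_name", TARGET_EXE,
        "-output_file", str(OUTPUT_CSV),
        "-timed", str(CAPTURE_SECONDS),
        "-terminate_after_timed",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    capture_frametimes()
    print(f"Wrote {OUTPUT_CSV}")
```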
**The Struggles of Monitoring Video Playback**
One of the biggest challenges we faced was monitoring video playback accurately. We used VLC to test our setup because of its built-in dropped-frame counter, but when the video wasn't the active window, playback would visibly stutter without that stutter being reflected in the counter. That forced us to drop VLC as an option, and we're now left wondering whether other video players can accurately log their own performance.
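For anyone who wants to experiment, libvlc does expose media statistics programmatically, including a lost-pictures (dropped-frame) counter. Below is a minimal sketch using the python-vlc bindings to poll that counter while a file plays. The caveat above still applies, since the counter may miss stutter that happens while the window is inactive; the file path is a placeholder, and exact binding names may differ across python-vlc versions.

```python
import time
import vlc  # pip install python-vlc; wraps libvlc, which must be installed

VIDEO_PATH = "test_video.mp4"  # placeholder path

instance = vlc.Instance()
player = instance.media_player_new()
media = instance.media_new(VIDEO_PATH)
player.set_media(media)
player.play()

stats = vlc.MediaStats()
try:
    # Poll libvlc's media statistics once per second.
    # i_lost_pictures is libvlc's dropped-frame counter; as noted above,
    # it may not register stutter while the player is backgrounded.
    while player.get_state() not in (vlc.State.Ended, vlc.State.Error):
        if media.get_stats(stats):
            print(f"displayed={stats.i_displayed_pictures} "
                  f"dropped={stats.i_lost_pictures}")
        time.sleep(1)
finally:
    player.stop()
```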
**The Uncertainty of Battle.net Benchmarks**
Another area where multitasking benchmarks can be unreliable is with games launched through Battle.net, which rely on an internet connection. We saw a 30% performance difference between the two systems, but we're still unsure what caused this discrepancy. That unpredictability makes these tests difficult to control, account for, and reproduce.
**The Trustworthiness of Multitasking Tests**
As it turns out, multitasking benchmarks are not as straightforward as they seem. Weird, unpredictable behaviors are not lab-friendly: they're hard to test for, hard to account for, and hard to reproduce. That's why we're hesitant to publish results for these types of tests; we want to be sure of the accuracy of our findings first.
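One way to quantify that reproducibility problem is to look at run-to-run variance. The sketch below is our own illustrative example, not a formal methodology: it computes the mean, standard deviation, and coefficient of variation across repeated runs of the same test, with a hypothetical tolerance threshold for flagging a result as too noisy to publish.

```python
import statistics

def run_variance(results: list[float], cv_threshold: float = 0.03):
    """Summarize repeated benchmark runs and flag excessive noise.

    cv_threshold is a hypothetical tolerance: a coefficient of
    variation above ~3% here marks the test as too noisy to publish.
    """
    mean = statistics.mean(results)
    stdev = statistics.stdev(results)
    cv = stdev / mean  # coefficient of variation (relative spread)
    return {
        "mean": mean,
        "stdev": stdev,
        "cv": cv,
        "publishable": cv <= cv_threshold,
    }

# Example: average FPS from five runs of the same multitasking scenario.
print(run_variance([118.2, 121.5, 97.4, 119.9, 122.1]))
```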
**The Importance of Reliable Testing Methods**
For now, we'll stick to more reliable testing methods, such as using Excel to chew through a set of formulas. It may seem trivial, but this approach lets us isolate the software workload from the network and the OS, so that differences come down to the hardware. Even with these precautions, though, there's still a risk that stray variables will affect the results.
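As a rough analogue of that Excel-style test, here's a minimal sketch of a deterministic, offline workload timed in Python. The formula and row count are arbitrary placeholders; the point is that the same input always produces the same work, so run-to-run differences come from the system rather than the test itself.

```python
import math
import time

def spreadsheet_like_workload(rows: int = 2_000_000) -> float:
    """Evaluate an arbitrary placeholder formula over many 'rows'.

    Deterministic and offline: no network, no disk, no user input,
    so variance between runs reflects the system, not the workload.
    """
    total = 0.0
    for i in range(1, rows + 1):
        total += math.sqrt(i) * math.sin(i) / (i + 1)
    return total

start = time.perf_counter()
result = spreadsheet_like_workload()
elapsed = time.perf_counter() - start
print(f"checksum={result:.6f} elapsed={elapsed:.3f}s")
```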
**The Limitations of Multitasking Testing**
Ultimately, multitasking tests aren't necessary for every type of system. If your form of multitasking is limited to light tasks like Discord, YouTube, or playing a game, it largely won't matter which CPU you buy. However, if you're looking at heavier scenarios involving multiple demanding applications running at once, the differences between CPUs may start to show.
**Conclusion**
While our experiment with Patrick showed us that multitasking benchmarks can be challenging to set up and execute accurately, we still value these types of tests. We believe that they can provide valuable insights into system performance under real-world conditions. For now, we'll continue to explore alternative methods for testing and benchmarking performance.
**Application Suggestions**
If you have suggestions for video players or other applications that are good at logging their performance, please leave them in the comments below. We're always looking for new ideas and approaches to improve our testing methods.
**Stay Tuned**
Don't forget to subscribe to our channel and support us on Patreon (patreon.com/gamersNexus). With your help, we can continue to bring you high-quality content and accurate system benchmarks.