How Gigahertz Really Matters (or Doesn't)
For years, I've been reading comments from people who believe that the faster the gigahertz, the faster the CPU. And why shouldn't they believe that? Gigahertz, also referred to as clock speed or frequency, is quite literally a measure of how fast the transistors in a processor switch. So all else being equal, more gigahertz should be better. But all else is not equal.
In today's video, we're going to dive into what those unequal things are and just how unequal they can be. To make sure that our tests are as fair as possible, both of our CPUs used identical test benches: ASUS TUF B550-PLUS motherboards, Noctua NH-D14 coolers, 16 gigs of dual-channel 3600 MHz C14 memory, a Crucial P5 NVMe SSD, and an RTX 3060 XC from EVGA. We're going to have all these parts in our affiliate links down below.
Well, most of them. GPUs can be kind of hard to find. Now for the CPUs. To keep politics out of the conversation, we're going to be using only AMD-branded processors, but these principles can be applied to any other situation where CPUs are being compared. Naturally, we started with a full run of our benchmark suite at out-of-the-box speeds, so we can see how the higher-gigahertz 3600XT fared against the 5600X.
Remember that both of these CPUs have exactly the same number of cores and threads. Somewhat intuitively, the newer processor does outperform the older one, and sometimes by a considerable margin, but why? Many modern processors are capable of dynamically boosting their clock speed under favorable conditions, say, when they have a really good cooler installed. Maybe our 5600X is just a mad CPU frequency boosting machine.
Let's try reining it in and seeing what happens. At our locked clock speed of 3.4 GHz, the 5600X still wins in every single test, so clearly gigahertz is not the only determining factor in CPU performance. The short answer for why is IPC, or instructions per clock.
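As an aside, if you want to watch boost behavior, or sanity-check a manual clock lock like ours, on a Linux test bench, a quick way is to sample the clock the kernel reports for each core. This is a minimal sketch, assuming an x86 machine where /proc/cpuinfo exposes a "cpu MHz" line per core:

```python
# Sample per-core clocks a few times to watch boosting (or a lock) in action.
# Assumes Linux on x86, where /proc/cpuinfo reports a "cpu MHz" line per core.
import time

def core_clocks_mhz() -> list[float]:
    with open("/proc/cpuinfo") as f:
        return [float(line.split(":")[1]) for line in f if line.startswith("cpu MHz")]

for _ in range(5):
    clocks = core_clocks_mhz()
    print(f"min {min(clocks):7.1f} MHz   max {max(clocks):7.1f} MHz")
    time.sleep(1)
```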
Think of a CPU like a mine and each core like a miner running back and forth doing work: the clock speed is how many times our miner can run back and forth per second, while the IPC is how much they can carry on each load. Look at the Apple M1, for example. Joe Average gamer might laugh at its meager 3.2 GHz clock speed. But when it comes to the real world, it performs pretty damn well.
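To put toy numbers on the analogy (the IPC figures below are invented for illustration, not measured values), throughput is trips per second times load per trip, or clock times IPC:

```python
# Throughput is clock * IPC: trips per second times coal carried per trip.
# Both IPC values here are hypothetical, chosen to mirror the M1 example.
chips = {
    "higher-clock chip": {"ghz": 4.8, "ipc": 3.0},
    "M1-like chip":      {"ghz": 3.2, "ipc": 5.0},
}

for name, c in chips.items():
    # Billions of instructions retired per second, all else being equal.
    print(f"{name}: {c['ghz'] * c['ipc']:.1f} billion instructions/s")
```

Run it and the lower-clocked chip comes out ahead, which is exactly the M1 story.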
What that tells us is that it has better IPC than a CPU that runs at a higher frequency but performs the same. The problem, though, is that IPC sounds a lot simpler than it is. You can't just add more instructions to each clock cycle. Back in our mine analogy, the mine contains every single possible type of mineral or rock. Those represent different apps or programs, and each of them requires specialized equipment.
So let's say you level up your miner by adding more points to their shovel, and suddenly there's a boost to your coal gathering. But sifting for gold? Well, the shovel doesn't help you with that, so performance is entirely unaffected. That's how you can see a new generation of CPU come out that absolutely crushes Cinebench but gets the same FPS in games.
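Here's that shovel upgrade in code form, with hypothetical per-workload gains. Notice how a single headline number papers over the workload that got nothing:

```python
# Hypothetical per-workload gains for a new CPU generation. The averaged
# headline number hides the "gold-sifting" workload that gained nothing.
uplift = {
    "Cinebench-style render": 1.35,  # loves the shovel upgrade
    "code compile":           1.20,
    "game logic thread":      1.00,  # sifting for gold: unaffected
}

headline = sum(uplift.values()) / len(uplift)
print(f"headline 'IPC gain': +{headline - 1:.0%}")
for workload, gain in uplift.items():
    print(f"  {workload}: +{gain - 1:.0%}")
```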
So IPC is problematic. Along with clock speeds and core counts, it's one of the most important ways to predict a processor's performance. And yet, unlike those other attributes, nobody can agree on a fair and objective way to measure it. The way that we enthusiasts use the term, saying things like "this new CPU has 20% higher IPC than the old one," can be misleading. A manufacturer could easily spend all their time tuning performance for a single commonly-benchmarked program, like Geekbench or Cinebench, when that wouldn't be representative of the real-world experience of using it.
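For what it's worth, the closest thing to a direct measurement is hardware performance counters: instructions retired divided by cycles, for one specific program at a time. Here's a rough sketch using Linux perf; the event names and CSV field layout can vary between perf versions, so treat the parsing as an assumption:

```python
# Approximate one program's IPC with hardware counters via `perf stat`.
# The result is inherently per-workload: two programs give two answers.
import subprocess

def measure_ipc(cmd: list[str]) -> float:
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions,cycles", "--", *cmd],
        capture_output=True, text=True,
    )
    counts = {}
    for line in result.stderr.splitlines():  # perf writes counters to stderr
        fields = line.split(",")
        if len(fields) > 2:
            event = fields[2].split(":")[0]  # strip modifiers like ":u"
            if event in ("instructions", "cycles"):
                counts[event] = int(fields[0])
    return counts["instructions"] / counts["cycles"]

print(measure_ipc(["python3", "-c", "sum(x * x for x in range(10**7))"]))
```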
AMD and Intel throw the term around in this way when it suits them, too, so I blame them. Still, with clock speeds having plateaued for years, IPC is the main lever left for pushing single-core performance. Back in our mine analogy, it comes down to how much coal our miner can carry, or how many instructions we can fit into each clock cycle.
The catch is that we're not just talking about fitting more coal into the same-sized bucket; we're talking about carrying different types of coal. So when we say that a CPU has better IPC, it means its miner is hauling more valuable loads than the competitor's. But does that really translate to real-world performance?
"WEBVTTKind: captionsLanguage: en- How can thisbe faster than this,when it should clearlybe the other way around?For years, I've beenreading comments from peoplewho believe that the faster the gigahertz,the faster the CPU.And why shouldn't they believe that?Gigahertz, also referred toas clock speed or frequency,is quite literally a measure of how fastthe transistors in a processor switch.So all else being equal,more gigahertz should be more better.But all else is not equal.And in today's video,we're going to dive into whatthose unequal things are,and just how unequal they can be.We're also going to diveinto today's sponsor: Arozzi.Thanks Arozzi, for sponsoring this video.Arozzi's new Occhio webcamsare privacy focused,so you can be seen and heardonly when you want to be.Get your Occhio webcam withor without a ring lightat the link down below.(bright electronic music)To make sure that our testis as fair as possible,both of our CPUs usedidentical test benches:ASUS TUF B550-PLUS motherboards,Noctua NH-D14 coolers,16 gigs of dual-channel3600 MHz C14 memory,a Crucial P5 NVMe SSD,and an RTX 3060 XC from EVGA.We're going to have all these partsin our affiliate links down below.Well, most of them.GPUs can be kind of hard to find.(baby crying)Now for the CPUs.To keep politics out of the conversation,we're going to be usingonly AMD branded processors,but these principles can beapplied to any other situationwhere CPUs are being compared.Naturally, we started with afull run of our benchmark suiteat out of the box speeds,so we can see how thehigher-gigahertz 3600XTfared against the 5600X.Remember that both of these CPUshave exactly the samenumber of cores and threads.Somewhat intuitively, the newer processordoes outperform the older one,and sometimes by a considerable margin,but why?Well, many modern processorsare capable of dynamicallyboosting their clock speedunder favorable conditions.Say, for example,when they have a reallygood cooler installed.Maybe our 5600X is justa mad CPU frequency boosting machine.Let's try reining it in,and seeing what happens then.At our locked clock speed of 3.4 GHz,the 5600XT still winsin every single test.So clearly then,gigahertz is not theonly determining factorfor CPU performance.But these numbers aren'tenough to tell the whole story.Let's look at gaming.If I only measured average FPSin Shadow of the Tomb Raiderand Grand Theft Auto V,I might think that a 5600Xis only about 5% fasterthan a 3600XT in the real world.But take something moreCPU-bound like CS:GO,and these two CPUs,with the same core countsrunning at the same frequencies,are nowhere near each other.But then, dropping the clockfrequency even further,to 2.4 GHz,it's clear that the lower the clock goes,the slower our CPUs get.So what is it?Does gigahertz matter, or not?There are a couple of takeaways here.Starting with that, yes,gigahertz absolutely matters.Which raises the question, then:Why don't CPU manufacturersjust run their chipsat higher clock speeds?I mean, bring on the 10GHz CPUs, am I right?Well, that was the plan actually,but higher clock speeds come at the costof more power consumption,which tends to resultin hotter-running chips.Thankfully though,there are a lot of other leversthat CPU designers can pullto improve performance,which leads us to our second takeaway.CPUs, or any kind ofprocessor for that matter --GPUs, phone SoCs, anything --should never be comparedusing gigahertz alone.It is clearly an important spec,and manufacturers do need to disclose itbecause it enables us to compare 
Let's talk, then, about some of the ways a CPU can differ aside from gigahertz. An obvious one is that it can be designed to process more threads, or tasks, in parallel. Intel was the first to process two concurrent threads on a consumer chip with hyperthreading, or SMT, while AMD was the first to build a truly multi-core CPU, with their X2 series dual-cores that were capable of doing nearly double the work under ideal conditions. The drawback to additional cores is that they increase die size, meaning cost and power consumption, and they can't be used to accelerate single-threaded workloads. So in many consumer applications, like games, they're only helpful up to a point. Currently, AMD's and Intel's mainstream lineups top out at 16 and eight cores, respectively, and we can't keep pushing core counts forever and expect consumer applications to scale.
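Amdahl's law makes that "only helpful up to a point" claim concrete: speedup is capped by the fraction of work that can actually run in parallel. The fractions below are invented for illustration:

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that can run in parallel. Both p values are made up.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16):
    game = amdahl_speedup(0.60, cores)      # lightly threaded game
    render = amdahl_speedup(0.95, cores)    # embarrassingly parallel renderer
    print(f"{cores:>2} cores: game x{game:.2f}, renderer x{render:.2f}")
```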
Meanwhile, clock speeds have been locked in the same range for over 15 years, which is why IPC has had to carry single-core progress. But there are major CPU design factors that can cripple the real-world performance of a "high IPC" CPU that's tuned for a particular benchmark. Let's talk about waste.

Going back to our mine analogy, adding cache to a CPU is kind of like making easy piles of our minerals, or data, that can be shoveled and carted out of the mine more quickly. The bigger the pile, the more likely it is that you can just fill up your wheelbarrow, and off you go. On the other hand, if there's nothing in the pile, the miner has to go deeper into the mine, or out to the system memory, to retrieve it, and that's going to take longer.

Then there's the branch predictor. It's kind of like a mine supervisor who attempts to proactively communicate which minerals are going to be needed in the near future, rather than just having the miners wait around for an order. CPU designers can dramatically improve performance with accurate branch prediction, but the logic for it takes up space on the CPU that could also just be used to add more miners, so it ends up being a delicate balancing act.

Speaking of the physical layout of the cores, imagine if our miner parked their wheelbarrow right next to the mineral heap instead of five steps away, and carried it like that. CPU designers are always looking for ways to make each load more efficient, and sometimes the actual physical proximity of CPU elements can be a big difference-maker.

So an obvious solution to this problem, then, is to stop using gigahertz, stop using IPC, and rather use a broad, industry-standard set of real-world tests. The problem with that is that real-world benchmarks come with real-world messiness, including politics between competing brands, who would each, naturally, prefer tests that favor their own products. This is why, to this day, we still need reviewers, lots of them, so that you can see a wide variety of different methodologies and test suites, and how the product that you're considering stacks up. I hope that helps you make the right choice next time you're looking to upgrade.