1U Servers are DEAD! Long Live 2U Servers! But Why? Ft. Supermicro AS-2114GT-DNR

**The Engineering Marvels of Supermicro's 2114GT-DNR Server**

As I gazed upon Supermicro's 2114GT-DNR server, I couldn't help but marvel at the engineering prowess that went into designing this compact powerhouse. The fact that full-height GPUs can be installed in a single U of space is a testament to the company's commitment to innovation and efficiency.

From an engineering perspective, putting two 1U nodes in a chassis like this really helps with efficiency. It especially helps with cooling, because you can use a proper 2U cooling solution instead of absurdly tiny 1U fans. You still tick pretty much all the boxes in terms of density, and you can deploy GPUs like this: these are full-height GPUs in the context of a 1U server. But even using four small fans at the front to try to move air through these GPUs, those little fans would struggle to build up enough airflow and static pressure to push all of the air through both GPUs while carrying a sufficient amount of heat with it. That's just the physics of the situation.
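To put some rough numbers on that physics, here is a back-of-the-envelope sketch of how much airflow it takes to carry the heat of two back-to-back 300-watt GPUs. The air density, specific heat, and allowed temperature rise are illustrative assumptions, not Supermicro specifications:

```python
# Rough airflow estimate for two back-to-back 300 W GPUs in one node.
# Constants below are illustrative assumptions: sea-level air density
# ~1.2 kg/m^3, specific heat of air ~1005 J/(kg*K).
AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)

def required_cfm(heat_watts: float, delta_t_kelvin: float) -> float:
    """Volumetric airflow (CFM) needed to carry heat_watts away with a
    delta_t_kelvin air temperature rise, from Q = m_dot * cp * dT."""
    mass_flow = heat_watts / (AIR_CP * delta_t_kelvin)  # kg/s
    volume_flow = mass_flow / AIR_DENSITY               # m^3/s
    return volume_flow * 2118.88                        # m^3/s -> CFM

gpu_load = 2 * 300.0  # two 300 W GPUs, back to back
print(f"~{required_cfm(gpu_load, 15.0):.0f} CFM at a 15 K air temperature rise")
```

Roughly 70 CFM just for the GPUs, and all of it has to be forced through two dense heatsinks in series; that pressure requirement is exactly what tiny 1U fans are bad at.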

Data center customers are getting clued into this. In the past, if you ran, say, a hosting facility, it made a lot of sense to deploy a whole bunch of 1U servers, because you never know when a customer might come along and say "I want this, I want that," and you want to pack in as many customers as physically possible. This design still lets you do that, but it also gives you some power efficiency, because you physically have fewer power supplies, and that's more efficient.

But what really impresses me is the level of redundancy built into this system. You've got two power supplies for every two U in your rack configuration, and you've got maximum density: each node has a good amount of real estate for its motherboard, its DIMMs, and its processor, plus three full-height, full-length expansion slots, all crammed into one U. It's very, very impressive. Supermicro has done an incredible amount of engineering for this.
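The "fewer power supplies is more efficient" claim can be sketched with a toy model. The per-node wattage, idle overhead, and conversion efficiency below are assumed numbers purely for illustration; the point is that two large shared PSUs carry less fixed overhead than four small per-node PSUs:

```python
# Sketch of why two big shared PSUs beat four small per-node PSUs.
# All wattages and efficiencies are illustrative assumptions.
def wall_power(load_w: float, n_psus: int,
               idle_loss_w: float = 25.0, efficiency: float = 0.94) -> float:
    """Total draw at the wall: conversion loss on the delivered load,
    plus a fixed idle overhead for every installed power supply."""
    return load_w / efficiency + n_psus * idle_loss_w

two_nodes = 2 * 1000.0  # assume ~1 kW per node under load

per_node = wall_power(two_nodes, n_psus=4)  # 1+1 redundant PSUs per 1U node
shared   = wall_power(two_nodes, n_psus=2)  # one redundant pair for the 2U

print(f"{per_node - shared:.0f} W saved just by consolidating PSUs")
```

You keep full redundancy either way; the shared pair simply amortizes the overhead across two nodes instead of one.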

Even at the front, we've got four U.2/U.3 drives, one for each node, so if you just wanted to swap a node, your storage is still going to be there (although we do have onboard M.2 now). I did a full teardown of this chassis with Steve; check it out on the Gamers Nexus channel, linked below, if you want to see more about the internals and how this works.

**Racking the 2114 DNR Server**

When it comes to racking one of these servers, there are some interesting considerations. Pro tip: when you're racking one of these, you can take the nodes out first to lighten the chassis, and that's exactly what we do here. For power, this is a NEMA 6 connector at 220 volts with a twist-lock plug. It doesn't have to be twist-lock, but NEMA 6 covers 220/240 volts, while NEMA 5 is your standard 110/120 volts.

This is an older power cord, but hey, what are you gonna do? It's fine; it plugs right in. Our Eaton 9000 is a 220-volt UPS (uninterruptible power supply), and it's about the only UPS I have that can handle this system. It's just the one system on this UPS, which is nice.
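Why 220 volts only? A quick Ohm's-law check makes it obvious. The 2600 W PSU rating below is an assumption for illustration (it's in the right ballpark for this class of 2U two-node system), and the 20 A figure is a common North American branch-circuit rating:

```python
# Why a PSU in this class is 220 V only: a quick current check.
# 2600 W is an assumed rating for illustration; 20 A is a typical
# North American branch-circuit rating for NEMA 5/6 receptacles.
def amps(watts: float, volts: float) -> float:
    return watts / volts

psu_watts = 2600.0
print(f"at 220 V: {amps(psu_watts, 220.0):.1f} A")  # comfortably within 20 A
print(f"at 110 V: {amps(psu_watts, 110.0):.1f} A")  # over a 20 A circuit
```

At 110 volts the same power supply would pull more current than a standard 20 A circuit allows, so high-wattage PSUs like this simply don't support low-line input.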

**The Future of Server Design**

One thing that's clear is that traditional server designs are going away, at least in terms of density. 1U servers are basically going away for any kind of remotely dense installation. So few people will be buying 1U rack servers in volume that they'll probably just disappear, except for specialty, half-depth, communications-type applications. If you notice, this 1U rack machine from Supermicro is also not super deep, and they actually have one that's even shorter; we're talking a depth of like 12, 14, or 18 inches, used just for communications.

Those don't even have physical drive storage. These are nice 1U servers, don't get me wrong, but you're not going to be packing a 300-watt GPU, a 300-watt CPU, another 300-watt GPU, and a 100-watt network interface card into something like this. You're going to need the full depth, first of all, and if you need the full depth, I'd want to just go up another rack unit; it'll be a little bit easier from an engineering perspective.
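The density argument is easy to check with arithmetic. The per-node GPU counts below are assumptions for illustration (three full-height slots per node here, versus an optimistic one or two GPUs in a hypothetical single 1U chassis):

```python
# Back-of-the-envelope density: GPUs per 42U rack, comparing this
# 2U/2-node design against hypothetical 1U GPU servers. GPU counts
# per node are illustrative assumptions, not vendor specifications.
RACK_U = 42

def gpus_per_rack(u_per_chassis: int, nodes_per_chassis: int,
                  gpus_per_node: int) -> int:
    chassis = RACK_U // u_per_chassis
    return chassis * nodes_per_chassis * gpus_per_node

two_u_two_node = gpus_per_rack(2, 2, 3)  # 21 chassis x 2 nodes x 3 GPUs
one_u          = gpus_per_rack(1, 1, 2)  # optimistic 2 GPUs per 1U box

print(f"2U/2-node rack: {two_u_two_node} GPUs, 1U rack: {one_u} GPUs")
```

Under these assumptions the 2U two-node design comes out well ahead, and that's before you account for the cooling and power-delivery headaches of actually getting two GPUs into a 1U box.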

**Conclusion**

That's my quick look at the 2114GT-DNR from Supermicro, linked below; please click the link and check it out, it helps me out. If you have any questions, or you're thinking about deploying a whole gaggle of these (a murder of servers? a school of servers? what would be a fun plural of servers?), you can let me know in the comments below.

"WEBVTTKind: captionsLanguage: enokay I've got something really special for you today this is basically part of the frontier supercomputer not really not exactly but this is like half a node of the frontier super computer in terms of compute capability and this chassis is from Super Micro we're going to do a deep dive take a look at our setup I've got it on the table here normally this is going to be in a data center we'll get to that but I did a full tear down of this with Gamer's Nexus to show the hardware this is two nodes into you this is a full self-contained compute node with three Instinct mi210 gpus dual 100 Gig ethernet dual 10 gig ethernet for the interface up to a 64 core and two terabytes of memory CPU AMD epic configuration this is a monster monster machine so if it's two nodes and two you the question is and sort of hinted at from the title of this video is one U is dead but I just pulled this note out there's another node just like this one in this chassis this looks like a one new server to me it doesn't look to me like one you is Dad at all and yet this doesn't have a redundant power supply on its own it doesn't have any cooling fans whatsoever those are in the front thank you our fan modules look something like this this is two rack units of height one U versus two U this has to do with density this is the changing face of the data center one new servers especially for this kind of density are basically dead counter example this is a 1u routing platform it'll hold four three and a half inch drives this is also from Super Micro we have redundant power supplies in the back the power supplies are 400 watts the power supplies in our 2u2 node super micro system are 2600 Watts 220 only this will run at 110 or 220. 
really the reason that you do two nodes into you like this has to do with efficiency Cooling and power if we built a version of this one node super computer component that would work in one U we're going to give up a lot of space to the power supplies because you have to have a redundant power supply in one U as well as storage and everything else there's not really physically enough room in our 2u2 node configuration we still have redundant power supplies but each power supply Powers two nodes instead of having four 1200 watt power supplies we have two 2600 watt power supplies and believe it or not that's actually more efficient cooling is the other big one and cooling is really one of the things that I wanted to talk about you can see that we've got all these tiny tiny tiny fans this is kind of a stand-in for one of our fans not really super big it's hard to get these to move a reasonable volume of air they're just not physically large enough they have to spin at such absurd RPMs they tend not to be very efficient even with these these fans are going to be uncomfortably loud as you'll see in a minute but this design is necessary to cool back to back high performance 300 watt gpus doing something like this compute node in one U borders on engineering and possibility or a I mean it's possible but it's not worth the expense and headache and let's be honest for something like this you're not deploying this system for maximum density this is not a dense system meaning that you're constrained on physical space this system is also probably not a system that you would buy unless you are constrained on physical space you don't want your machine to take up a lot of room it's also true that the world has kind of moved on over the last 10 years in terms of software software has gotten very very good about being able to run on distributed systems you don't really have a single Mission critical server anymore there are some cases in the Enterprise where you might have a 
mission critical appliance which is from a specific vendor but generally when I'm talking about rackmount servers that have been configured for your application or your particular setup they're cattle they're not really special and so you don't really give up much having two nodes in a single U like this even if both of these nodes are for different customers something goes wrong look I can pull it out while the system is running and do any kind of service or maintenance that I need to this node and then I can reinsert it into the chassis you don't really give up anything if one of the power supplies die it's going to keep both nodes running just fine a rack mount 2600 super micro power supply also hot swap as I'm swapping the power supply my neighbor in the neighboring node is also going to lose its redundant power but who cares it's only going to lose redundant power for just a moment so from an engineering perspective putting two 1u nodes in a chassis like this really helps on efficiency really helps on the cooling because you can use a cooling solution like this instead of very absurdly tiny 1u fans and you still pretty much tick all the boxes in terms of density I mean you can deploy gpus like this these are these are full height gpus in a in the context of a 1u server but even using you know four of these fans at the front here to try to move air through these gpus these little fans are going to struggle to build up enough airflow and air pressure in order to move all of the air through both of these gpus carrying a sufficient amount of heat with it that's just the physics of the situation and I think data center customers are getting sort of clued into that you know in the past if you ran say a hosting facility there was a lot of sense in deploying a whole bunch of 1u servers because you never know how a customer might come along and say I want this I want that I want whatever you want to pack and as many customers as physically possible this design still 
lets you do that but it gives you some power efficiency because you physically have fewer power supplies and that's more efficient but you still have a lot of redundancy because you've got two power supplies every two you in your rack configuration and you've got maximum density because this is a pretty good amount of real estate that you have for your motherboard this is a pretty good amount of real estate that you have to have dimms in your processor and you've got three full height full length expansion slots all crammed in one U it's very very impressive super micro has done an incredible amount of engineering for this and even at the front we've got four u.2 u.3 drives one for each node so if you just wanted to swap a node your storage still going to be there although we do have onboard m.2 now I did a full tear down of this chassis with Steve check that out on the gamers Nexus Channel I'll try to link that below if you want to see more about the entrance and how this works for now let's go get this racked also Pro tip when you're racking one of these you can take out the nodes but yeah Steve stare down definitely think you'll find that appealing we do do that here so this is Nema 6. 220 volts this is a Twist lock connector it doesn't have to be twist locked but you know Nema 6 is up 220 Nema 5 is or yeah 220 240. Nema 5 is you know 110. 
so this is what we need to connect our system this is an older an older power cord but hey what are you gonna do it's fine it's going to plug in there our Eaton 9000 it's a 220 volt UPS uninterruptable power supply it's about the only uninterruptable power supply I have that can handle this system and it's just the one system on this UPS so nice it's over nine thousand dig all the way up there right there right here yeah and then wait let me get some more stuff foreign all right and that is why I think one of you servers are basically going away for any kind of remotely dense installation there's going to be so few people buying one you rack servers in volume that they'll probably just go away except for specialty like half depth Communications type applications if you notice this one you rack machine from Super Micro also not super deep and they actually have one that's even shorter I mean we're talking a depth of like 12 14 18 inches that's just used for communications you don't even have physical drive storage I mean these are nice one-use servers don't get me wrong but you're not going to be packing in a 300 watt GPU and a 300 watt CPU and another 300 watt GPU and 100 watt network interface card in something like this it's just you're gonna need the full depth first of all if you need the full depth I want to just go up another rack unit it'll be a little bit easier from an engineering perspective I'm Wendell this is level one this has been a quick look at the 2114 DNR from Super Micro linked below please click the link and check it out it helps me out if you have any questions or you're thinking about deploying a whole a whole gaggle of these a whole murder of servers a gaggle of servers uh School of servers what would be a fun plural of servers you can let me know in the forums level one text I'm signing out and I'll see you thereokay I've got something really special for you today this is basically part of the frontier supercomputer not really not exactly 
but this is like half a node of the frontier super computer in terms of compute capability and this chassis is from Super Micro we're going to do a deep dive take a look at our setup I've got it on the table here normally this is going to be in a data center we'll get to that but I did a full tear down of this with Gamer's Nexus to show the hardware this is two nodes into you this is a full self-contained compute node with three Instinct mi210 gpus dual 100 Gig ethernet dual 10 gig ethernet for the interface up to a 64 core and two terabytes of memory CPU AMD epic configuration this is a monster monster machine so if it's two nodes and two you the question is and sort of hinted at from the title of this video is one U is dead but I just pulled this note out there's another node just like this one in this chassis this looks like a one new server to me it doesn't look to me like one you is Dad at all and yet this doesn't have a redundant power supply on its own it doesn't have any cooling fans whatsoever those are in the front thank you our fan modules look something like this this is two rack units of height one U versus two U this has to do with density this is the changing face of the data center one new servers especially for this kind of density are basically dead counter example this is a 1u routing platform it'll hold four three and a half inch drives this is also from Super Micro we have redundant power supplies in the back the power supplies are 400 watts the power supplies in our 2u2 node super micro system are 2600 Watts 220 only this will run at 110 or 220. 
really the reason that you do two nodes into you like this has to do with efficiency Cooling and power if we built a version of this one node super computer component that would work in one U we're going to give up a lot of space to the power supplies because you have to have a redundant power supply in one U as well as storage and everything else there's not really physically enough room in our 2u2 node configuration we still have redundant power supplies but each power supply Powers two nodes instead of having four 1200 watt power supplies we have two 2600 watt power supplies and believe it or not that's actually more efficient cooling is the other big one and cooling is really one of the things that I wanted to talk about you can see that we've got all these tiny tiny tiny fans this is kind of a stand-in for one of our fans not really super big it's hard to get these to move a reasonable volume of air they're just not physically large enough they have to spin at such absurd RPMs they tend not to be very efficient even with these these fans are going to be uncomfortably loud as you'll see in a minute but this design is necessary to cool back to back high performance 300 watt gpus doing something like this compute node in one U borders on engineering and possibility or a I mean it's possible but it's not worth the expense and headache and let's be honest for something like this you're not deploying this system for maximum density this is not a dense system meaning that you're constrained on physical space this system is also probably not a system that you would buy unless you are constrained on physical space you don't want your machine to take up a lot of room it's also true that the world has kind of moved on over the last 10 years in terms of software software has gotten very very good about being able to run on distributed systems you don't really have a single Mission critical server anymore there are some cases in the Enterprise where you might have a 
mission critical appliance which is from a specific vendor but generally when I'm talking about rackmount servers that have been configured for your application or your particular setup they're cattle they're not really special and so you don't really give up much having two nodes in a single U like this even if both of these nodes are for different customers something goes wrong look I can pull it out while the system is running and do any kind of service or maintenance that I need to this node and then I can reinsert it into the chassis you don't really give up anything if one of the power supplies die it's going to keep both nodes running just fine a rack mount 2600 super micro power supply also hot swap as I'm swapping the power supply my neighbor in the neighboring node is also going to lose its redundant power but who cares it's only going to lose redundant power for just a moment so from an engineering perspective putting two 1u nodes in a chassis like this really helps on efficiency really helps on the cooling because you can use a cooling solution like this instead of very absurdly tiny 1u fans and you still pretty much tick all the boxes in terms of density I mean you can deploy gpus like this these are these are full height gpus in a in the context of a 1u server but even using you know four of these fans at the front here to try to move air through these gpus these little fans are going to struggle to build up enough airflow and air pressure in order to move all of the air through both of these gpus carrying a sufficient amount of heat with it that's just the physics of the situation and I think data center customers are getting sort of clued into that you know in the past if you ran say a hosting facility there was a lot of sense in deploying a whole bunch of 1u servers because you never know how a customer might come along and say I want this I want that I want whatever you want to pack and as many customers as physically possible this design still 
lets you do that but it gives you some power efficiency because you physically have fewer power supplies and that's more efficient but you still have a lot of redundancy because you've got two power supplies every two you in your rack configuration and you've got maximum density because this is a pretty good amount of real estate that you have for your motherboard this is a pretty good amount of real estate that you have to have dimms in your processor and you've got three full height full length expansion slots all crammed in one U it's very very impressive super micro has done an incredible amount of engineering for this and even at the front we've got four u.2 u.3 drives one for each node so if you just wanted to swap a node your storage still going to be there although we do have onboard m.2 now I did a full tear down of this chassis with Steve check that out on the gamers Nexus Channel I'll try to link that below if you want to see more about the entrance and how this works for now let's go get this racked also Pro tip when you're racking one of these you can take out the nodes but yeah Steve stare down definitely think you'll find that appealing we do do that here so this is Nema 6. 220 volts this is a Twist lock connector it doesn't have to be twist locked but you know Nema 6 is up 220 Nema 5 is or yeah 220 240. Nema 5 is you know 110. 
so this is what we need to connect our system this is an older an older power cord but hey what are you gonna do it's fine it's going to plug in there our Eaton 9000 it's a 220 volt UPS uninterruptable power supply it's about the only uninterruptable power supply I have that can handle this system and it's just the one system on this UPS so nice it's over nine thousand dig all the way up there right there right here yeah and then wait let me get some more stuff foreign all right and that is why I think one of you servers are basically going away for any kind of remotely dense installation there's going to be so few people buying one you rack servers in volume that they'll probably just go away except for specialty like half depth Communications type applications if you notice this one you rack machine from Super Micro also not super deep and they actually have one that's even shorter I mean we're talking a depth of like 12 14 18 inches that's just used for communications you don't even have physical drive storage I mean these are nice one-use servers don't get me wrong but you're not going to be packing in a 300 watt GPU and a 300 watt CPU and another 300 watt GPU and 100 watt network interface card in something like this it's just you're gonna need the full depth first of all if you need the full depth I want to just go up another rack unit it'll be a little bit easier from an engineering perspective I'm Wendell this is level one this has been a quick look at the 2114 DNR from Super Micro linked below please click the link and check it out it helps me out if you have any questions or you're thinking about deploying a whole a whole gaggle of these a whole murder of servers a gaggle of servers uh School of servers what would be a fun plural of servers you can let me know in the forums level one text I'm signing out and I'll see you there\n"