The Evolution Of Memory Bandwidth Per Socket And Per Core
ServeTheHome has put together an article that should interest anyone shopping for a server, or anyone curious about how memory bandwidth has changed at Intel and AMD in the recent past. The piece traces performance from 2013 to the present, including some theoretical results for AMD’s Bergamo, and gives a great overview of how the two companies’ platforms have diverged over the years. It also demonstrates why AMD’s EPYC has eaten a sizeable chunk of the market once dominated by Intel’s Xeons.
They first examine memory channels per socket multiplied by memory bandwidth per DIMM; small jumps appear as memory frequencies increased, but the big ones come from rising channel counts. The move from DDR4 to DDR5 also had a major impact on overall bandwidth, as one might expect. A different picture emerges when you look strictly at memory bandwidth per core: Intel’s line has been essentially flat since 2019, while AMD’s shifts considerably thanks to their focus on core counts. EPYC’s core counts have surpassed Xeon’s, but per-socket memory bandwidth remains similar, so bandwidth per core drops.
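The per-socket and per-core figures above boil down to simple arithmetic: channels × transfer rate × bus width, divided by core count. Here is a minimal sketch of that calculation in Python; the platform configurations are illustrative assumptions, not numbers taken from the ServeTheHome article:

```python
# Hedged sketch: theoretical peak memory bandwidth per socket and per core.
# The platform configs below are assumed examples, not figures from the article.

def peak_bw_gbs(channels: int, mts: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: channels x transfers/s x bytes per transfer.

    mts is the DDR transfer rate in MT/s; each transfer moves bus_bytes
    (8 bytes for a 64-bit data bus), so MT/s * bytes gives MB/s.
    """
    return channels * mts * bus_bytes / 1000  # MB/s -> GB/s

# name: (memory channels, DDR speed in MT/s, core count) -- assumed values
platforms = {
    "8ch DDR4-3200, 64 cores": (8, 3200, 64),
    "12ch DDR5-4800, 96 cores": (12, 4800, 96),
}

for name, (channels, mts, cores) in platforms.items():
    socket_bw = peak_bw_gbs(channels, mts)
    print(f"{name}: {socket_bw:.1f} GB/s per socket, "
          f"{socket_bw / cores:.2f} GB/s per core")
```

Note how the second (hypothetical) platform more than doubles per-socket bandwidth, yet the jump from 64 to 96 cores keeps the per-core figure from growing nearly as much.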
These findings could influence which vendor you choose for an upgrade. If your application cares more about memory bandwidth per core than the number of cores you can toss at it, Xeon remains a solid choice. If, on the other hand, you want raw processing power and don’t have to worry as much about feeding those cores with memory-intensive work, EPYC should be a serious consideration.