*Sponsored Content*
Intel recently made a splash with the introduction of its Sandy Bridge-based server processor, now available as the Xeon E5-2600 series, which succeeds the previous Xeon 5600 series.
While both CPU families are manufactured on a 32 nm process, Sandy Bridge is a substantially upgraded and improved architecture that not only delivers greater performance but much improved computing efficiency overall, and it makes the case for a thorough platform refresh cycle.
Ajay Chandramouly, Cloud Computing and Data Center Industry Engagement Manager at Intel IT, sat down with TG Daily to talk about Intel’s view of, and expectations for, the Xeon E5-2600.
TG Daily: Sandy Bridge was a huge jump for Intel in processor technology that had a significant impact on the performance capability of desktop and mobile computers. What kind of impact do you expect the Xeon E5-2600 CPUs and their underlying architecture to have on servers?
Ajay Chandramouly: There are several features that are very impressive in Sandy Bridge, and I believe that the overall benefit we are able to provide is truly remarkable. If you look at all the various factors that affect performance, this new E5-2600 brings improvement in all those areas. We increased the number of cores, we increased the cache, we added support for more memory, and there is more integration as well as more bandwidth across the platform. I view this architecture as a bandwidth machine.
TG Daily: Describing Sandy Bridge as a “bandwidth machine” suggests a lot more than just the two additional processing cores the E5-2600 offers over the preceding 6-core Xeon 5600 series?
Ajay Chandramouly: Absolutely. As you add more cores to a processor, you also have more data that needs to move to and from the CPU. We now have up to eight cores, and each of those cores is hyper-threaded, which effectively delivers 16 logical cores, or 16 threads. A CPU gets bandwidth from three primary sources: first, from the last level cache; second, from memory bandwidth itself; and third, from access to memory attached to other CPUs.
The last level cache is on-die memory with very low latency and very high bandwidth. Sandy Bridge has a ring-topology interconnect that allows all the cores to access the cache simultaneously. Logically, it is one cache; physically, it is divided into multiple slices. For memory bandwidth, we added an additional memory channel and now have four channels of DDR3, which gives us about 30% more memory bandwidth. Sometimes a CPU has to access data sitting in memory attached to another CPU, which can lead to increased latency. In Sandy Bridge, we use QPI, or QuickPath Interconnect, which connects all the CPUs together. In the Xeon E5-2600, we have two QPI links that connect multiple CPUs in a variety of topologies.
The result is that overall system performance scales nicely: as you add more CPUs, you also add memory channels, memory bandwidth, and I/O bandwidth. We found that this approach enabled us to decrease overall subsystem I/O latency by 30%.
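The memory-channel arithmetic behind that bandwidth claim can be sketched roughly as follows. Note that the DDR3-1600 transfer rate is an assumption on my part (the interview does not state the DIMM speed), so treat the absolute numbers as illustrative; the ~30% figure tracks the jump from three channels to four.

```python
# Back-of-the-envelope peak memory bandwidth per socket.
# Assumes DDR3-1600 DIMMs (1600 MT/s) on both platforms, which is
# an illustrative assumption, not a figure from the interview.
channels_5600 = 3       # Xeon 5600: triple-channel DDR3
channels_e5 = 4         # Xeon E5-2600: quad-channel DDR3
bytes_per_transfer = 8  # 64-bit memory bus moves 8 bytes per transfer
mt_per_s = 1600e6       # transfers per second for DDR3-1600

bw_5600 = channels_5600 * mt_per_s * bytes_per_transfer / 1e9
bw_e5 = channels_e5 * mt_per_s * bytes_per_transfer / 1e9
print(f"3 channels: {bw_5600:.1f} GB/s, 4 channels: {bw_e5:.1f} GB/s")
print(f"increase from the extra channel: {100 * (bw_e5 / bw_5600 - 1):.0f}%")
```

The extra channel alone accounts for a one-third bandwidth increase, in line with the "about 30%" quoted above.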
TG Daily: In the early days of dual-core processors, when there were expectations for the core count to quickly rise into the dozens, Intel cautioned that simply adding more cores to a processor would not be the solution for more performance and that there will be challenges along the way to cope with performance bottlenecks. Is the approach in the Xeon E5-2600 a way for Intel to scale to more than eight physical cores in the future?
Ajay Chandramouly: It definitely allows us to scale more efficiently. You can’t harness all the performance benefits of more cores if you do not have the I/O bandwidth to support them. I do not want to speculate about the future roadmap, but yes, there is the potential for more cores.
TG Daily: An interesting aspect of the most recent processors is the fact that clock speeds are creeping up again and are at about 4 GHz, a mark single-core processors never reached before Intel launched the first dual-core processors. Should we be paying more attention to clock speed as a performance indicator again?
Ajay Chandramouly: From an IT perspective, we care most about overall system level performance. There are various means to that end, and clock speed is just one of them. The other features we implemented in the Xeon E5-2600 have improved performance much more efficiently than clock speed alone. We took a holistic approach to improving performance, which includes breakthrough I/O innovation, energy efficiency, and security.
TG Daily: What applications do you expect will benefit most from the performance potential of your new CPUs?
Ajay Chandramouly: Based on the results of our early testing of the Xeon E5-2600, we plan to widely deploy and refresh with Xeon E5 across our entire IT environment, including silicon design, office and enterprise, and manufacturing. High performance compute environments will also benefit, in particular where simulation and verification are large parts of the workflow, such as computational fluid dynamics in the aeronautical and automobile industries, the life sciences, and the oil and gas industry. Sandy Bridge supports AVX, or Advanced Vector Extensions, which can offer up to double the floating point operations per clock cycle, improving software performance and user experience across a wide range of uses, especially compute intensive applications. For example, in our silicon design environment, AVX helped us achieve up to a 55% performance improvement over the prior generation Xeon 5600. While our testing focused on electronic design automation workloads, other compute intensive applications such as climate modeling, financial analysis, and video creation can benefit from this technology as well.
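A rough sketch of where the "double the floating point operations per clock" figure comes from: AVX widens the SIMD registers from 128 bits (SSE) to 256 bits, doubling the number of floats processed per vector instruction. The 2.9 GHz clock below is an illustrative assumption, not a figure from the interview.

```python
# Why AVX can double floating-point throughput per clock:
# the SIMD register width doubles from 128-bit (SSE) to 256-bit (AVX).
sse_bits, avx_bits = 128, 256
float32, float64 = 32, 64  # single- and double-precision widths in bits

sse_sp = sse_bits // float32  # 4 single-precision lanes per instruction
avx_sp = avx_bits // float32  # 8 single-precision lanes per instruction
avx_dp = avx_bits // float64  # 4 double-precision lanes per instruction
print(f"SSE: {sse_sp} SP lanes; AVX: {avx_sp} SP lanes, {avx_dp} DP lanes")

# Hypothetical peak for one 8-core E5-2600 at an assumed 2.9 GHz,
# counting one AVX multiply plus one AVX add issued per cycle:
cores, ghz = 8, 2.9
flops_per_cycle = 2 * avx_sp
peak_gflops = cores * ghz * flops_per_cycle
print(f"~{peak_gflops:.0f} single-precision GFLOPS theoretical peak")
```

Real workloads land well below such theoretical peaks, which is consistent with the 55% application-level gain quoted above rather than a flat 2x.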
TG Daily: For the new E5-2600 series, Intel offers a dedicated workstation model – the E5-2687W – which you did not offer in the previous 5600 series. What are the reasons for offering a specific workstation processor as part of the E5-2600 series?
Ajay Chandramouly: We tested the E5-2600 in high performance workstations in our silicon design environment and found very compelling benefits. In the past, design engineers staggered design tasks due to limitations in processing power and the number of cores available. Now they can create and test designs more quickly using multiple EDA applications concurrently. This allowed for faster design iterations with more demanding design workloads and accelerated product time to market. It also allowed for more validation cycles, where we could identify and fix problems earlier in product development and improve product quality. These are some of the reasons why the Xeon E5-2600 will be the standard for Intel IT workstation deployments, including refreshes of older systems.
TG Daily: The absolute power consumption numbers of the E5 series appear to be slightly higher at both the high end and the low end than the TDP levels of the 5600 series. When your customers replace their systems with Sandy Bridge systems, should they plan for higher power as well?
Ajay Chandramouly: You are right on the absolute power. The power ranges from 60 to 135 watts, versus 40 to 130 watts. But it’s really about energy efficient performance, not just absolute power. The Xeon E5-2600 is about 50% more energy efficient than previous generation Xeon processors when you look at the ratio of performance to power. This is achieved primarily by scaling memory, cache, and I/O to match core needs. When a core is active, it is designed to scale its power depending on how heavily it is used.
There are now up to 16 effective threads, compared to just up to 12 in the 5600 series, which means that the E5-2600 delivers a lot more throughput in roughly the same power envelope. The processor can tune interfaces to match performance and power consumption across 23 different points of control, so that systems do not consume power unnecessarily but instead tightly link performance to the amount of energy consumed. As an extension of these chip improvements, Intel Node Manager and Intel Data Center Manager are tools to help IT manage and monitor power consumption more effectively.
In our own technology refresh, it is critical for us to deliver business value. For example, our proactive refresh strategy has helped Intel IT to save hundreds of millions of dollars in efficiencies. We have benefited from server consolidation rates of up to 20:1 by refreshing with the latest generation Xeon-based servers, which will allow us to further reduce our overall datacenter count by about 35% over the next few years. Whether you’re short on data center space or pushing the power and cooling limits of your facilities, it’s a no-brainer when you can replace 20 old servers with one new E5-2600 system.
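As a rough illustration of that consolidation math: only the 20:1 ratio comes from the interview; the wattage figures below are hypothetical round numbers chosen for the sketch.

```python
# Illustrative power math behind a 20:1 server consolidation.
# Only the 20:1 ratio comes from the interview; the per-server
# wattage figures here are hypothetical round numbers.
old_servers = 20
old_watts_each = 300  # assumed draw of one legacy server under load
new_watts = 450       # assumed draw of one dual-socket E5-2600 system

old_total = old_servers * old_watts_each
saved_pct = 100 * (old_total - new_watts) / old_total
print(f"power: {old_total} W -> {new_watts} W ({saved_pct:.1f}% less)")
```

Even with generous assumptions for the new system's draw, collapsing twenty machines into one cuts the power and cooling load by an order of magnitude, which is what drives the data center count reduction described above.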
TG Daily: Such performance increase may help x86 processors to gain even more traction in an industry that has relied on them for growth for some time now. Can you foresee a future in which x86 will be the only server architecture?
Ajay Chandramouly: We are seeing people migrate off legacy proprietary RISC based architectures as performance and RAS capabilities improve. And yes, there is wide-scale adoption. However, that migration can be very challenging if you consider various software stacks, and the fact that some mission critical applications need to be available 24/7. At Intel IT, we completely migrated from RISC to a combination of x86 and Itanium. In the future, I believe there is still room for other architectures such as Itanium, which is still going strong.
TG Daily: Increased security has been a big deal at Intel. How does the Xeon E5 live up to your promise that processors will become more secure?
Ajay Chandramouly: Security is a top concern. It is critical that users can trust IT. No matter how optimal and personalized a user experience is, you still have to protect a user’s data. We are addressing security with our Trusted Execution Technology, TXT, as well as AES-NI, which allows for greater encryption speed and enables us to significantly reduce the performance penalty for data encryption.
Intel TXT addresses security needs across the deployment of servers, especially in virtualized or cloud based environments. It helps protect your server prior to an OS or hypervisor launch by verifying that the OS boots into a known good state, free of malware. Additionally, Intel TXT allows for new use models. For instance, you can create pools of platforms with trusted hypervisors and use the platform trust status to constrain the migration of sensitive virtual machines or workloads. This helps raise the overall protection of all critical data.
TG Daily: The buzz word for server computing today is “cloud.” How do you see the Xeon E5-2600 helping your customers build cloud systems?
Ajay Chandramouly: This processor was really designed with the cloud in mind, to become the heart of modern data centers and cloud environments. From that perspective, the CPU delivers more performance not only for servers, but also for storage, networking, and facilities. Here at Intel, we intend to widely deploy and refresh with the E5-2600 across our entire environment, which spans 87 data centers and more than 75,000 servers across the world. But it is not just about compute. Improving storage and networking is also critical to help us meet our goal of 80% effective utilization of our global data center resources. Upgrading to 10GbE allowed us to reduce our networking cost by 25%, and our storage optimization and refresh utilizing Xeon-based storage solutions allowed us to save about $9.2 million. It’s important to see the wide breadth of improvements across the data center that Sandy Bridge provides, not just in compute servers.
TG Daily: Thank you for your time and the interview.
About Ajay Chandramouly
Ajay has over 13 years of experience in the technology industry, with over 10 years of experience at Intel Corporation. Ajay has held a variety of IT, software, and hardware engineering positions at Intel and the Lawrence Livermore National Laboratory. Ajay has spoken at numerous forums worldwide, including Computerworld’s Storage Networking World and the National Defense Industrial Association.
In addition, Ajay is a highly regarded expert in his field of cloud computing and data center management and has been interviewed and cited in prestigious publications such as Data Center Knowledge, Forbes, and BusinessWeek, among others. Ajay has also co-authored several white papers that can be found on www.intel.com/it. Ajay’s current role is to share Intel IT’s cloud and data center best practices with his senior IT peers across the industry. Ajay holds both an MBA and an MSE from UC Davis.
Follow Ajay on Twitter: @ajayc47