Monday 15 July 2013

Data center part 2



CPU and Memory Scalability
                                                                          
§  With a single-core CPU, the CPU is usually the bottleneck.
§  Without virtualization, the cores of a multi-core CPU are often underutilized.
§  With multi-core CPUs and virtualization, memory is often the bottleneck.

Today, server processors are multi-core, offering multiple processing cores in the same space previously used by a single-core processor.
In addition, many vendors have created memory controllers that can address large amounts of RAM, into the tens of gigabytes. While blade server sizes have remained the same, processing power and memory capacity have increased significantly, allowing blade servers to run processing- and memory-intensive applications.





Virtualization



Virtualization has allowed companies to consolidate servers in the data center more easily. Instead of the standard one-application-per-physical-server model, many servers, each running in an independent "virtual machine" (VM), can run on a single physical server.
This is done by creating individual sets of virtual hardware that each function like a standard server. There are a number of advantages to using virtualization, including better use of computing resources, greater server density, and seamless server migration:
          Virtual machine: A virtualized set of hardware that is able to operate in a similar fashion to a physical server
          Virtual server: A virtual set of hardware along with the operating system, applications, and files, able to operate comparably to a physical server
          Hypervisor layer: A software layer that abstracts the physical hardware and creates individual virtual hardware for each VM.  For example, VMware ESX, Microsoft Hyper-V, Xen, etc.
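The consolidation idea above can be sketched as simple resource accounting: a hypervisor host has a fixed number of cores and a fixed amount of RAM, and identical VMs are packed in until one resource runs out. The host and VM sizes below are hypothetical example figures, not from the text; the sketch just illustrates why memory, rather than CPU, often limits a multi-core virtualized host.

```python
# Hypothetical consolidation sketch: how many identical VMs fit on one
# physical host, and which resource (CPU cores or RAM) runs out first.

def vms_that_fit(host_cores, host_ram_gb, vm_cores, vm_ram_gb):
    """Return (number of VMs that fit, the limiting resource)."""
    by_cpu = host_cores // vm_cores     # limit imposed by core count
    by_ram = host_ram_gb // vm_ram_gb   # limit imposed by memory
    limit = "memory" if by_ram < by_cpu else "CPU"
    return min(by_cpu, by_ram), limit

# Hypothetical two-socket blade: 16 cores, 96 GB RAM; each VM wants
# 1 core and 8 GB. Memory is exhausted well before the cores are.
count, limit = vms_that_fit(16, 96, 1, 8)
print(count, limit)  # prints: 12 memory
```

With smaller 4 GB VMs the same host would fit 16 VMs and be CPU-limited instead, which matches the progression in the notes: once virtualization keeps the cores busy, memory becomes the scarce resource.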

 


Management Challenges



Workload Portability

Servers have unique identifiers that identify them on various networks.  These identifiers are tied to the hardware; if any of them changes, the server can lose access to network resources, or even the ability to boot an operating system. These identifiers include:
          World wide name (WWN): Hard coded to a Host Bus Adapter (HBA), this identifier is needed for SAN access.
          MAC address: Hard coded to a network interface card (NIC), this identifier is needed for LAN access.
          BIOS: This identifier contains settings that are specific to the server hardware.
          Firmware: This low-level software runs on peripheral devices and adapter cards to enable the operating system to interface with the device.
If a server fails and its operating system and applications need to be migrated to another physical server, the operating system/application and the network may require manual configuration changes.  These manual changes lead to longer recovery times and increased application downtime.
The alternative to manual configuration changes is the use of “stateless servers.” 
          Stateless server: A stateless server is a server whose operating system and application personality have no ties to the physical hardware.  One way in which to accomplish this task is to use transportable, virtual, unique identifiers.
          Server personality: A server personality is the operating system configuration and application settings.  This is a fully functional set of programs, files, and settings required to perform the given task of the server.
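The "transportable, virtual, unique identifiers" mentioned above can be sketched in a few lines. The address formats are real (a locally administered MAC address, and a WWN in the NAA type-2 style commonly used for virtual WWNs), but the personality structure, field names, and image name below are hypothetical illustrations, not any vendor's actual profile format.

```python
# Sketch of a stateless "server personality" built on virtual identifiers.
import random

def virtual_mac():
    # Set the locally-administered bit (0x02) in the first octet and keep
    # the multicast bit clear, so this address cannot collide with any
    # burned-in, vendor-assigned MAC on a physical NIC.
    octets = [0x02, 0x00, 0x00] + [random.randint(0, 255) for _ in range(3)]
    return ":".join(f"{o:02x}" for o in octets)

def virtual_wwn():
    # 64-bit WWN with an NAA type-2 prefix, a format often used for
    # virtual WWNs so they stay distinct from HBA factory WWNs.
    return "20:00:" + ":".join(f"{random.randint(0, 255):02x}" for _ in range(6))

# A hypothetical personality: identifiers plus OS/application settings,
# with no tie to any one physical server.
personality = {
    "mac": virtual_mac(),          # follows the workload, not the NIC
    "wwn": virtual_wwn(),          # follows the workload, not the HBA
    "boot_target": "san",          # boot from SAN rather than local disk
    "os_image": "rhel-6.4",        # hypothetical image name
}
```

On failover, applying the same personality (same MAC and WWN) to a spare blade means LAN and SAN access move with the workload, avoiding the manual reconfiguration described above.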


Server Management





As computing capacity increases within the data center, so does complexity.  Blade servers solve many issues, but each chassis also adds another point of management to the data center.  Several independent systems must be managed, including LAN, SAN, servers, and storage.  These separate resources must be managed at each network layer: access, aggregation, and core.  Typically, these resources are managed by individual teams and may be monitored using proprietary system-monitoring tools and alert aggregators.  In many cases, customers must use multiple monitoring applications to cover all aspects of the data center.







Data Center Network




The data center consists of multiple networks. In addition to the LAN and the WAN, 20 percent to 40 percent of all servers are also connected to a SAN over Fibre Channel. The use of high-performance computing is also growing; for example, with financial trading applications that demand very low latency. This leads to more complexity with regard to cabling, power, cooling, and management.
The increase in server quantity has resulted in a similar increase in network requirements.  As application demand expands and new servers are implemented, the networks must grow to meet this demand.  These networks include LAN, SAN, and High Performance Computing (HPC).  This not only results in increased power and cooling needs but also increased cabling costs and management challenges.
Networking needs vary greatly depending on the operating system and applications but can reach as high as eight 1-GE ports and four 4-Gb Fibre Channel ports for a virtualization platform.  That is up to 12 cables per physical server, and as many as 252 cables for a 42U rack filled to capacity with 2U servers (21 servers at 12 cables each).
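The cabling arithmetic above can be made explicit. The per-server port counts are the example figures from the text; the 42U rack height is a common size and is assumed here.

```python
# Sketch of the rack-cabling arithmetic: port counts per the text,
# 42U rack size assumed.
RACK_U = 42
SERVER_U = 2
GE_PORTS = 8    # 1-GE LAN ports per virtualization host
FC_PORTS = 4    # 4-Gb Fibre Channel ports per host

servers_per_rack = RACK_U // SERVER_U        # 21 servers
cables_per_server = GE_PORTS + FC_PORTS      # 12 cables
cables_per_rack = servers_per_rack * cables_per_server
print(cables_per_rack)  # prints: 252
```

This is why I/O consolidation (fewer, faster, converged links per server) is attractive: cutting per-server cables directly scales down rack-level cabling, and with it power, cooling, and management overhead.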

 





Blade Benefits and Challenges

Blade Benefits

§  Reduction in redundant equipment costs
§  Power and cooling savings
§  Shared switching could reduce cabling
§  Rapid hardware provisioning

The consolidation of discrete servers into blade chassis provides a number of advantages in the data center:
          Reduced physical footprint
          Shared networking and SAN switching
          Reduced cabling
          Rapid provisioning of additional resources


Blade Challenges


§  Increased physical density may create power and cooling challenges
§  Increased compute density could increase bandwidth and cabling requirements
§  Each chassis and local switching creates additional management points

As the physical density of a blade environment increases relative to rack mounted servers, there may be more power and cooling required in the same physical space.  This can cause challenges in powering and cooling the data center.
The relatively high density of compute resources in a blade environment often requires significantly more bandwidth and cabling to fully utilize the deployment.
Each chassis in a blade environment will typically have a management IP address and require monitoring for system and blade health.  Additionally, each local LAN or SAN switch (if installed) also adds management overhead and overall network design challenges.


Summary of Data Center and Data Center Part 2


§  The data center of today is the product of multiple evolutions in thinking and architecture.
§  The scale-out x86 architecture is currently undergoing another evolution through the use of blades, virtualization, and I/O consolidation.
§  New technologies and architectures can add infrastructure and management complexities.