With the emergence of the digital age, data centres have become major energy consumers and, consequently, major contributors of greenhouse gases (GHGs). The energy required to power data centres around the world is growing rapidly as they expand, in some cases exceeding the maximum power allocation available from local sub-stations and forcing those sub-stations to be expanded to deliver even greater throughput.
This expansion is not necessarily a bad thing: the ever-growing focus on the Internet means people can now reduce their own carbon footprint by commuting less and communicating more effectively. However, we need to ensure that data centres are run efficiently. By implementing the following simple concepts you can help reduce the carbon footprint of your data centre:
Cooling the equipment in a data centre is critical. Servers, switches and UPSs can all easily fail if the data centre is not cooled properly. On average, air conditioning consumes as much energy as powering the servers themselves.
Localised Cooling
Big, energy-hungry air conditioning units that push air through dropped ceilings or raised floors remain regular fixtures in data centres. However, you may wish to consider the less intense approach of localised cooling, such as an in-row cooling system. Localised cooling is particularly effective at dealing with hotspots in the data centre, and especially with blade servers, because the cooling units sit close to the heat loads.
These units function autonomously: temperature-monitoring leads placed directly in front of a heat source ensure that the air remains within a specified temperature range. If a blade chassis starts running hot due to increased load, the in-row unit ramps up its airflow, dropping the air temperature to compensate. In addition, the unit ratchets down its cooling activity during idle times, saving even more money.
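The exact control logic varies by vendor, but the behaviour described above amounts to a simple feedback loop. The Python sketch below illustrates the idea; the read_inlet_temperature() and set_fan_speed() functions are hypothetical placeholders, and the 20-25 degree C target range is an illustrative assumption.

# Minimal sketch of an in-row cooler's control loop (hypothetical functions
# and assumed setpoints; real units use vendor-specific, tuned controllers).
import time

TARGET_LOW_C = 20.0    # assumed lower bound of the desired air temperature range
TARGET_HIGH_C = 25.0   # assumed upper bound
STEP_PCT = 5           # how much to change fan speed per adjustment

def read_inlet_temperature() -> float:
    """Placeholder for the temperature lead placed in front of the heat source."""
    raise NotImplementedError

def set_fan_speed(percent: int) -> None:
    """Placeholder for the unit's airflow control."""
    raise NotImplementedError

def control_loop() -> None:
    fan_speed = 50  # start at a moderate airflow
    while True:
        temp = read_inlet_temperature()
        if temp > TARGET_HIGH_C:
            # Load has increased: ramp airflow up to bring the temperature down.
            fan_speed = min(100, fan_speed + STEP_PCT)
        elif temp < TARGET_LOW_C:
            # Idle period: ratchet cooling down to save energy.
            fan_speed = max(10, fan_speed - STEP_PCT)
        set_fan_speed(fan_speed)
        time.sleep(30)  # re-check every 30 seconds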
Note: Localised cooling systems are designed to provide just enough just-in-time cooling. Whether you are rolling out a new energy-efficient data centre or retrofitting one already in place, a comprehensive understanding of your building's environmental systems and the expected heat load of the data centre itself is required before implementing any localised cooling solutions.
Raised Floors
Raised floors allow the cooled air to flow directly to where it is needed, into the server cabinets, without being dispersed across the entire data centre. They also allow you to run the necessary cabling neatly to the cabinets.
Hot and Cold Aisles
Servers have their own built-in cooling and exhausts, which often compete with the air conditioning. You should consider configuring the rows of cabinets so that the exhausts face each other, isolating the hot air into dedicated aisles where it can be extracted or cooled, which minimises this inefficiency.
Pumping Cold Air
If the climate and weather permit, pump cold outside air into the data centre rather than using the air conditioning.
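Whether outside air can do the job at any given moment comes down to a simple comparison against the supply-air requirements. The Python sketch below shows one possible decision rule; the setpoint, margin and humidity band are illustrative assumptions, not values from any particular standard.

# Rough sketch of a free-cooling decision, with assumed setpoints.
SUPPLY_SETPOINT_C = 22.0   # assumed target supply-air temperature
MARGIN_C = 4.0             # outside air must be this much cooler to be useful

def choose_cooling_mode(outside_temp_c: float, outside_humidity_pct: float) -> str:
    """Return 'free-cooling' when outside air can do the job, else 'mechanical'."""
    cool_enough = outside_temp_c <= SUPPLY_SETPOINT_C - MARGIN_C
    dry_enough = 20.0 <= outside_humidity_pct <= 80.0   # assumed acceptable band
    return "free-cooling" if (cool_enough and dry_enough) else "mechanical"

print(choose_cooling_mode(12.0, 55.0))   # -> free-cooling
print(choose_cooling_mode(28.0, 55.0))   # -> mechanical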
Quite often servers are loaded with superfluous or even obsolete applications. This can result in more servers being deployed than necessary, or in server capacity being taken up by needless applications.
Simple steps to optimise:
Occasionally an environment must remain separate, e.g. a shared hosting facility. However, this does not mean that each environment must be on a separate physical server. Virtualisation provides the ability to split a single server into multiple server environments, each with its own operating system.
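To see why consolidation pays off, a rough estimate is enough. The figures in the sketch below (server count, wattages and consolidation ratio) are illustrative assumptions, not measurements from this article.

# Back-of-the-envelope estimate of consolidation savings (all figures assumed).
servers_before = 20          # lightly loaded physical servers
watts_per_server = 300       # assumed average draw per server
vms_per_host = 10            # assumed consolidation ratio after virtualisation
watts_per_host = 450         # assumed draw of a busier virtualisation host
hours_per_year = 24 * 365

hosts_after = -(-servers_before // vms_per_host)   # ceiling division -> 2 hosts
before_kwh = servers_before * watts_per_server * hours_per_year / 1000
after_kwh = hosts_after * watts_per_host * hours_per_year / 1000

print(f"Before: {before_kwh:,.0f} kWh/year")   # ~52,560 kWh/year
print(f"After:  {after_kwh:,.0f} kWh/year")    # ~7,884 kWh/year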
Many UPS devices are inefficient, losing much of the energy supplied to them. CyberPower Industries produces an Energy Saving UPS device that reduces energy consumption by up to 75% compared to conventional UPS systems.
Server power supplies purchased more than 12 months ago are typically between 55% and 85% efficient, meaning that 15% to 45% of incoming power is wasted before it ever reaches the server's components. New servers, especially those that are 80 Plus certified, are guaranteed to be at least 80% efficient, even at the end of their lifecycle. Visit http://www.80plus.org for more information.
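That efficiency gap translates directly into wasted watts at the wall. The sketch below works through an example; the 250 W load and electricity price are assumptions, while the efficiency figures echo the ranges above.

# Illustration of the power saved by a more efficient PSU (load and price assumed).
it_load_watts = 250            # assumed power actually needed by the server
price_per_kwh = 0.15           # assumed electricity price per kWh
hours_per_year = 24 * 365

def wall_power(load_w: float, efficiency: float) -> float:
    """Power drawn from the wall for a given IT load and PSU efficiency."""
    return load_w / efficiency

old_draw = wall_power(it_load_watts, 0.70)   # an older ~70% efficient PSU
new_draw = wall_power(it_load_watts, 0.80)   # an 80 Plus certified PSU

saved_kwh = (old_draw - new_draw) * hours_per_year / 1000
print(f"Old PSU draws {old_draw:.0f} W, new PSU draws {new_draw:.0f} W")
print(f"Savings: {saved_kwh:.0f} kWh/year (~{saved_kwh * price_per_kwh:.0f} per year)")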
Using a Network Attached Storage (NAS) device can also reduce energy costs. An IBM BladeCenter with 56 blades can use as much as 1.2 kilowatts of power; replacing it with a 12-disk Serial Attached SCSI storage array could use less than 300 watts.
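Using the wattages quoted above, the annual saving is straightforward to estimate; the electricity price in the sketch below is an assumption.

# Annual energy difference implied by the figures above (price assumed).
bladecenter_watts = 1200       # IBM BladeCenter with 56 blades
storage_array_watts = 300      # 12-disk Serial Attached SCSI array
hours_per_year = 24 * 365
price_per_kwh = 0.15           # assumed electricity price per kWh

saved_kwh = (bladecenter_watts - storage_array_watts) * hours_per_year / 1000
print(f"Energy saved: {saved_kwh:,.0f} kWh/year")          # ~7,884 kWh/year
print(f"Cost saved:   ~{saved_kwh * price_per_kwh:,.0f} per year")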