Data Centre Design Best Practices


A data centre is just a room of computers, right? Wrong.

A data centre is where you store your digital assets. Your company relies on accessing the systems housed there every day, so it is imperative that care is taken in designing your data centre to ensure reliability and minimise running costs. Data centre design is all about finding the right cost-benefit balance: providing the services your business needs within a budget that is manageable over the long term.

Tiers

One of the first considerations in data centre design is the tier that the centre is designed to meet. Tiers encompass a balance of running costs against resilience and direct the design of the entire centre. Most businesses will be happy with a Tier I or Tier II data centre. There may be around a day's worth of downtime per year, but the cost of reducing this downtime further doesn't make financial sense for most organisations. Very few data centres are Tier III or Tier IV (the latter expecting only 26.3 minutes of downtime per year), and those mainly belong to businesses whose core business is providing data services.
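The downtime figures behind the tiers follow directly from each tier's availability target. A minimal sketch, using the commonly quoted Uptime Institute availability percentages (assumed here for illustration), shows how the 26.3-minute figure for Tier IV arises:

```python
# Convert tier availability targets into expected downtime per year.
# Availability percentages below are the commonly quoted Uptime
# Institute figures, included here as illustrative assumptions.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

tier_availability = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, pct in tier_availability.items():
    downtime_min = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {downtime_min:.1f} minutes/year "
          f"({downtime_min / 60:.1f} hours)")
```

Running this shows Tier IV at roughly 26.3 minutes per year, and Tier I at around 29 hours, which matches the "around a day's worth" figure above.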

Cooling

All computer systems generate heat as they run, as anyone who has used a laptop on their knee will understand. As computational speed and complexity increase, so too does the amount of heat generated, which is why every data centre will need some form of cooling.

The most efficient cooling method is to feed cool air to where the servers take in air and draw hot air away from the outlets. Best practice, and the approach used at 4D Data Centres, is to arrange alternating cold and hot aisles integrated with the air-conditioning system. As an alternative, cooling systems may be integrated into the server cabinets themselves.
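Sizing the cooling plant starts from a simple observation: virtually all electrical power drawn by the IT equipment is dissipated as heat, which the cooling system must remove. A rough sketch, with illustrative rack counts and per-rack loads (not recommendations):

```python
# Rough cooling-load sketch: electrical power in ~= heat out, so the
# cooling plant must handle at least the total IT load. The rack
# count and per-rack power below are assumed example figures.
KW_TO_BTU_PER_HOUR = 3412.14  # 1 kW of heat is about 3,412 BTU/h

def cooling_load(racks: int, kw_per_rack: float) -> tuple:
    """Return (kW, BTU/h) of heat the cooling system must remove."""
    kw = racks * kw_per_rack
    return kw, kw * KW_TO_BTU_PER_HOUR

kw, btu = cooling_load(racks=10, kw_per_rack=5.0)
print(f"{kw:.0f} kW of IT load -> {btu:,.0f} BTU/h of cooling")
```

In practice the cooling plant is sized with headroom above this figure, and the hot/cold aisle layout described above determines how efficiently that capacity is delivered to the server intakes.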

Power supply

A steady power supply is vital for running any kind of computing equipment, and data centres are no exception. All servers should be connected to a UPS (Uninterruptible Power Supply), essentially an intelligent rechargeable battery system. There is a trade-off between how long the system can run on battery power and the operating costs of more batteries and more equipment (and hence greater cooling requirements). Whilst a Tier IV data centre may have multiple power sources and high-capacity UPSs, for a basic data centre a more cost-effective design may be simply providing enough battery power for the servers to shut down safely, minimising data loss in the event of a blackout.
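The "shut down safely" end of that trade-off can be sketched with simple arithmetic: the battery only needs to carry the load for as long as a clean shutdown takes. All figures below are illustrative assumptions, not sizing recommendations:

```python
# Sketch of the basic UPS sizing trade-off: battery energy needed to
# ride out a blackout just long enough for a safe shutdown.
# The load, shutdown time, and efficiency are assumed example values.

def min_battery_kwh(load_kw: float, shutdown_minutes: float,
                    inverter_efficiency: float = 0.9) -> float:
    """Minimum battery energy (kWh) to cover a safe shutdown window."""
    hours = shutdown_minutes / 60
    return load_kw * hours / inverter_efficiency

# e.g. 20 kW of servers needing 10 minutes to shut down cleanly
print(f"{min_battery_kwh(20, 10):.2f} kWh")
```

A Tier IV design, by contrast, would size batteries to bridge the gap until generators start, plus duplicate the whole power path, which is where the running costs escalate.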

Layout

Servers are usually arranged in long rows, with each row facing the opposite way to its neighbours. This is partly to facilitate the hot-aisle/cold-aisle system of cooling, and partly to allow staff to quickly and easily check the front panels of the equipment for issues.

Data centres often have raised floors. This allows cabling to be routed underneath the servers, and floor panels can be lifted for ease of cable maintenance. Ducting for the cold-air half of the air-conditioning system may also run under the floor, depending on the cooling method installed.

Virtualisation

For operational reasons, you may require several independent servers. Running multiple physical units can be costly, and virtualisation is the technique whereby multiple servers are implemented on a single physical unit. More powerful hardware is required, so the ratio of virtual servers to physical machines needs careful consideration, especially since a single unit failure will affect multiple virtual systems.
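That consolidation trade-off can be sketched numerically: fewer hosts cost less, but each host failure takes down more virtual servers, so spare capacity is usually planned in. The VM counts and per-host capacity below are hypothetical examples:

```python
# Illustrative consolidation sketch: physical hosts needed for a set
# of virtual servers, with spare hosts so that a single unit failure
# can be absorbed. All figures are assumed examples.
import math

def hosts_needed(vm_count: int, vms_per_host: int,
                 tolerate_host_failures: int = 1) -> int:
    """Physical hosts required, including N spare(s) for failover."""
    base = math.ceil(vm_count / vms_per_host)
    return base + tolerate_host_failures

# e.g. 40 virtual servers at 10 per host, plus one spare host
print(hosts_needed(40, 10))
```

The `tolerate_host_failures` margin reflects the point made above: packing more virtual servers per machine saves money right up until one machine fails.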

Expert Advice

Designing a data centre is not a trivial matter. Each design decision has implications for the rest of the centre. Data centre design experts have years of experience, coupled with access to the most up-to-date research into design best practice, and are the people to talk to before starting on a data centre deployment project.

This is an article provided by our partners network. It might not necessarily reflect the views or opinions of our editorial team and management.

Contributed content