3. Situation Overview
- Large professional services company with existing datacenters in the UK and US
- ~2,000 servers across 4 datacenters (at the time), 65% virtual
- Multiple environments & AD forests for different lifecycles (DEV/INT, QA)
- Systems for business continuity (BCP) also in place
4. Legacy Environment
- Lease on the non-production DC in the US due for renewal
- Workload 100% virtualised
- Quad-socket rack-mounted hosts
- HP EVA 8000/6000 series SAN
5. Ideal Design
- Migrate to the UK-based secondary DC in Docklands
- Space at a premium, but plenty of power density
- Initial design used multiple HP c-Class blade chassis
- Cost/performance sweet-spot calculations: BL460 G6 + 96 GB RAM + Flex-10 + HBA
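The sweet-spot choice above comes down to price per GB of RAM, since RAM is usually the constraining resource for VM density. A minimal sketch of that comparison, with entirely hypothetical prices (the real figures from the design are not in the deck):

```python
# Sketch of the cost/performance "sweet spot" comparison behind the
# BL460 G6 + 96 GB choice. All prices below are hypothetical
# placeholders, NOT the figures used in the actual design.
configs = [
    # (name, list_price_gbp, ram_gb)
    ("BL460 G6 / 48 GB",  4000,  48),
    ("BL460 G6 / 96 GB",  6000,  96),
    ("BL460 G6 / 192 GB", 14000, 192),  # denser DIMMs cost more per GB
]

def cost_per_gb(price, ram_gb):
    """Price per GB of RAM -- the usual constraint on VM density."""
    return price / ram_gb

best = min(configs, key=lambda c: cost_per_gb(c[1], c[2]))
for name, price, ram in configs:
    print(f"{name}: £{cost_per_gb(price, ram):.2f}/GB")
print(f"sweet spot: {best[0]}")
```

With these placeholder prices the mid-range 96 GB configuration wins, which matches the choice in the slide; with real pricing the crossover point would need to be recomputed.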
6. Life isn't Ideal!
- Design presented to senior management
- No budget for a redesign aside from seed hardware, so a “lift & shift” methodology was required
- Servers & storage to be physically moved
- Provision additional storage to handle the additional DEV/QA workload
7. All Aboard
- Servers to be shipped from outside Atlanta to London
- Build out core infrastructure
- Replicate servers from the US using Double-Take
- Sync up backup repositories
- Restore VMs & re-IP
8. Sowing the Seeds
- “Seed” hardware deployed ahead of schedule, running ESXi 4.0
- Key systems built out while the remaining hardware was on the boat
- 2 complete AD forests with associated “DMZ” (20+ Global Catalogs)
- Double-Take Backup recovery assistants & repositories
- Some delays due to damage in transit
9. Erm… Backups?
- Legacy datacenter ran an NBU (NetBackup) based system (out of maintenance)
- The UK datacenter held only BCP systems, so it had no backup infrastructure
- Had to get backups running in a very short timeframe
- Deployed Veeam Backup v4 on “spare” hardware
- Great for initial backups, but an upgrade was needed as the environment grew
10. Stuck in Transit… Time to Overcommit
- Rate of data migration was faster than the rate of server shipping
- Led to a heavy squeeze on RAM (175% overcommit); no serious problems from a CPU perspective
- Let's not forget this was a pre-production environment
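To put the 175% figure in concrete terms: reading it as "RAM allocated to VMs equals 175% of physical RAM", the gap is easy to quantify. A quick sketch, using the 96 GB-per-blade figure from the design; the host count here is a hypothetical example, not a number from the deck:

```python
# What a 175% RAM overcommit means in practice.
# ram_per_host_gb comes from the BL460 G6 + 96 GB design; the host
# count is a hypothetical example for illustration only.
hosts = 8
ram_per_host_gb = 96
overcommit = 1.75  # assumed reading: allocated RAM = 175% of physical

physical_gb = hosts * ram_per_host_gb
allocated_gb = physical_gb * overcommit
print(f"physical: {physical_gb} GB, allocated to VMs: {allocated_gb:.0f} GB")
```

At that ratio the hypervisor is covering roughly three quarters as much again as the physical RAM through sharing, ballooning, and swap, which is survivable for DEV/QA workloads but would be risky in production.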
11. Fleshing Out the Solution
- Arrival of more hosts over time boosted clusters to 10 hosts each; last seen running ~800 VMs
- Upgrading the Veeam server to a quad-socket box allowed heavy job concurrency
- Deployment of an IBM XIV SAN for primary storage eased heavy contention on the legacy HP EVA
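A back-of-the-envelope density check on that end state. Only the ~800 VMs and 10-hosts-per-cluster figures come from the slides; the number of clusters below is an assumption for illustration:

```python
# Rough VM density for the final state (~800 VMs, 10 hosts/cluster).
# 'clusters' is a hypothetical split, not a number stated in the deck.
total_vms = 800
hosts_per_cluster = 10
clusters = 4  # assumed

total_hosts = clusters * hosts_per_cluster
vms_per_host = total_vms / total_hosts
# With one host down (N+1), the survivors absorb its load:
vms_per_host_n1 = total_vms / (total_hosts - 1)
print(f"avg {vms_per_host:.0f} VMs/host, "
      f"{vms_per_host_n1:.1f} VMs/host with one host down")
```

Under these assumptions the average density stays around 20 VMs per host even during a single host failure, which is comfortable for a non-production estate.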
12. An Admin's Job Is Never Done
- The secondary DC was not originally sized to take so many hosts
- Rolling RAM upgrades until assets depreciate enough to be replaced with more space-efficient blades
- Upgrades to ESXi 4.1, along with SAN firmware enhancements, enabled VAAI and a larger cluster size
13. What I Learnt on My Datacenter Migration
- Find out the budget BEFORE you design a solution
- Where possible, migrate the workload, not the hardware
- Make sure you start with all the pieces of the puzzle
- A fresh start facilitated the transition to ESXi