Accelerating Science with OpenStack (CERN, OpenStack in Action)
Presentation Transcript

  • 1. Accelerating Science with OpenStack. Jan van Eldik (Jan.van.Eldik@cern.ch), OpenStack in Action, November 29, 2012
  • 2. What is CERN?
      • Conseil Européen pour la Recherche Nucléaire, aka the European Laboratory for Particle Physics
      • Founded in 1954 with an international treaty
      • Between Geneva and the Jura mountains, straddling the Swiss-French border
      • Our business is fundamental physics: what is the universe made of and how does it work?
  • 3. Answering fundamental questions…
      • How can we explain that particles have mass? We have theories and accumulating experimental evidence… getting close…
      • What is 96% of the universe made of? We can only see 4% of its estimated mass!
      • Why isn't there anti-matter in the universe? Nature should be symmetric…
      • What was the state of matter just after the "Big Bang"? Travelling back to the earliest instants of the universe would help…
  • 4. The Large Hadron Collider
  • 5.–7. (Image slides)
  • 8. Tier-0 (CERN): data recording, initial data reconstruction, data distribution
      • Tier-1 (11 centres): permanent storage, re-processing, analysis
      • Tier-2 (~200 centres): simulation, end-user analysis
      • Data is recorded at CERN and the Tier-1s, and analysed in the Worldwide LHC Computing Grid
      • On a normal day, the grid provides 100,000 CPU-days, executing over 2 million jobs
  • 9. Data Centre by Numbers
      • Hardware installation & retirement: ~7,000 hardware movements/year; ~1,800 disk failures/year

        Racks         828       Disks                     64,109   Tape Drives           160
        Servers       11,728    Raw disk capacity (TiB)   63,289   Tape Cartridges       45,000
        Processors    15,694    Memory modules            56,014   Tape slots            56,000
        Cores         64,238    Memory capacity (TiB)     158      Tape Capacity (TiB)   73,000
        HEPSpec06     482,507   RAID controllers          3,749

        High Speed Routers (640 Mbps → 2.4 Tbps)   24
        Ethernet Switches                          350
        10 Gbps ports                              2,000
        Switching Capacity                         4.8 Tbps
        1 Gbps ports                               16,939
        10 Gbps ports                              558
        IT Power Consumption                       2,456 kW
        Total Power Consumption                    3,890 kW

      (Pie chart, disk vendors: WDC 59%, Hitachi 23%, Seagate 15%, Fujitsu 3%, HP/Maxtor/Other 0%)
      (Pie chart, CPU models: Xeon L5520 33%, Xeon E5345 14%, Xeon 5160 10%, Xeon E5335 7%, Xeon 3GHz 4%, Xeon 5150 2%, Xeon L5420/E5410/E5405 6%/8%/16%)
  • 10. Current infrastructure
      • Around 12k servers
          – Dedicated compute, dedicated disk servers, dedicated service nodes
          – Majority Scientific Linux (RHEL 5/6 clone)
          – Mostly running on real hardware
          – Over the last couple of years, we have consolidated some of the service nodes onto Microsoft Hyper-V
          – Various other virtualisation projects around
      • Many diverse applications ("clusters")
          – Managed by different teams (CERN IT + experiment groups)…
          – … using our own management toolset: Quattor/CDB for configuration, Lemon for computer monitoring
  • 11. New data centre to expand capacity
      • Data centre in Geneva at the limit of its electrical capacity at 3.5 MW
      • New centre chosen in Budapest, Hungary
      • Additional 2.7 MW of usable power
      • Hands-off facility
      • Deploying from 2013, with a 200 Gbit/s network to CERN
  • 12. Time to change strategy
      • Rationale
          – Need to manage twice as many servers as today
          – No increase in staff numbers
          – Tools becoming increasingly brittle and will not scale as-is
      • Approach
          – CERN is no longer a special case for compute
          – Adopt an open source tool chain model
          – Our engineers rapidly iterate: evaluate solutions in the problem domain, identify functional gaps and challenge them
          – Select a first choice, but be prepared to change in future
          – Contribute new function back to the community
  • 13. Prepare the move to the clouds
      • Improve operational efficiency
          – Machine ordering, reception and testing
          – Hardware interventions with long-running programs
          – Multiple operating system demand
      • Improve resource efficiency
          – Exploit idle resources, especially those waiting for disk and tape I/O
          – Highly variable load, such as interactive or build machines
      • Enable cloud architectures
          – Gradual migration to cloud interfaces and workflows
      • Improve responsiveness
          – Self-service with coffee-break response time (boot sketch below)
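To illustrate the self-service model, a minimal python-novaclient sketch of booting a VM against an Essex/Folsom-era endpoint; the credentials, auth URL, image and flavor names are hypothetical placeholders, not CERN's actual values:

    # Minimal self-service VM boot with python-novaclient (Essex/Folsom era).
    # Credentials, auth URL, image and flavor names are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client('alice', 'secret', 'myproject',
                         'https://cloud.example.cern.ch:5000/v2.0/')

    image = nova.images.find(name='SLC6')        # hypothetical image name
    flavor = nova.flavors.find(name='m1.small')

    server = nova.servers.create(name='vm0042', image=image, flavor=flavor)
    print server.id, server.status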
  • 14. Public Procurement Purchase Model

        Step                                 Time (days)       Elapsed (days)
        User expresses requirement                                          0
        Market Survey prepared               15                            15
        Market Survey for possible vendors   30                            45
        Specifications prepared              15                            60
        Vendor responses                     30                            90
        Test systems evaluated               30                           120
        Offers adjudicated                   10                           130
        Finance committee                    30                           160
        Hardware delivered                   90                           250
        Burn in and acceptance               30 typical    280 (380 worst case)
        Total                                280+ days
  • 15. Service Model
      • Pets are given names like pussinboots.cern.ch
      • They are unique, lovingly hand-raised and cared for
      • When they get ill, you nurse them back to health
      • Cattle are given numbers like vm0042.cern.ch
      • They are almost identical to other cattle
      • When they get ill, you get another one
      • Future application architectures should use Cattle, but Pets with strong configuration management are viable and still needed
  • 16. Supporting the Pets with OpenStack
      • Network
          – Interfacing with legacy site DNS and IP management
          – Ensuring Kerberos identity before VM start
      • Puppet
          – Ease use of configuration management tools with our users
          – Exploit mcollective for orchestration/delegation
      • External block storage
          – Looking to use Cinder with a Gluster backing store
      • Live migration to maximise availability
          – KVM live migration using Gluster (sketch below)
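As a sketch of that live-migration workflow, the admin-only call in python-novaclient; the hostnames and credentials are hypothetical, and with shared Gluster storage block migration can stay disabled:

    # Live-migrate a pet VM off a hypervisor that needs an intervention.
    # Hostnames and credentials are placeholders; requires admin rights.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'https://cloud.example.cern.ch:5000/v2.0/')

    server = nova.servers.find(name='pussinboots')
    # block_migration=False because the disk lives on shared (Gluster) storage
    server.live_migrate(host='hypervisor042.cern.ch',
                        block_migration=False,
                        disk_over_commit=False)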
  • 17. Current Status of OpenStack at CERN
      • Working on an Essex code base from the EPEL repository
          – Excellent experience with the Fedora Cloud SIG team
          – cloud-init for contextualisation (sketch below), Oz for images (Linux and Windows)
      • Components
          – Current focus is on Nova with KVM and Hyper-V
          – Tests with Swift are ongoing, but require significant experiment code changes
      • Pre-production facility with around 170 hypervisors and 2,700 VMs, integrated with CERN infrastructure and deployed with Puppet
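A hedged sketch of the cloud-init contextualisation mentioned above: user data is passed at boot time and consumed by cloud-init inside the guest. The hostname and the Puppet run are illustrative only, not CERN's actual user data:

    # Boot a VM with #cloud-config user data for contextualisation.
    # The user-data content and all names below are illustrative only.
    from novaclient.v1_1 import client

    userdata = ('#cloud-config\n'
                'hostname: vm0042\n'
                'runcmd:\n'
                ' - [puppet, agent, --onetime, --no-daemonize]\n')

    nova = client.Client('alice', 'secret', 'myproject',
                         'https://cloud.example.cern.ch:5000/v2.0/')
    nova.servers.create(name='vm0042',
                        image=nova.images.find(name='SLC6'),
                        flavor=nova.flavors.find(name='m1.small'),
                        userdata=userdata)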
  • 18. (Image slide)
  • 19. When communities combine…
      • OpenStack's many components and options make configuration complex out of the box
      • The Puppet Forge module from Puppet Labs does our configuration
      • The Foreman adds OpenStack provisioning: from user kiosk to a configured machine in 15 minutes
  • 20. Foreman to manage a Puppetized VM
  • 21. Active Directory Integration
      • CERN's Active Directory
          – Unified identity management across the site
          – 44,000 users
          – 29,000 groups
          – 200 arrivals/departures per month
      • Full integration with Active Directory via LDAP
          – Uses the OpenLDAP backend with some particular configuration settings (sketch below)
          – Aim for minimal changes to Active Directory
          – 7 patches submitted, around hard-coded values and additional filtering
      • Now in use in our pre-production instance
          – Map project roles (admins, members) to groups
          – Documentation in the OpenStack wiki
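For orientation, a sketch of the kind of keystone.conf settings such an LDAP integration touches, generated here with Python's ConfigParser. The DNs, filter and credentials are hypothetical, and the exact option set varies by OpenStack release:

    # Sketch of a keystone.conf LDAP identity backend configuration.
    # All values are placeholders, not CERN's real settings.
    import sys
    import ConfigParser

    conf = ConfigParser.RawConfigParser()
    conf.add_section('identity')
    conf.set('identity', 'driver', 'keystone.identity.backends.ldap.Identity')
    conf.add_section('ldap')
    conf.set('ldap', 'url', 'ldap://ad.example.cern.ch')
    conf.set('ldap', 'user', 'CN=svc-openstack,OU=Services,DC=example,DC=ch')
    conf.set('ldap', 'password', 'secret')
    conf.set('ldap', 'suffix', 'DC=example,DC=ch')
    conf.set('ldap', 'user_tree_dn', 'OU=Users,DC=example,DC=ch')
    conf.set('ldap', 'user_objectclass', 'person')
    # "additional filtering", as mentioned on the slide
    conf.set('ldap', 'user_filter',
             '(memberOf=CN=cloud-users,OU=Groups,DC=example,DC=ch)')
    conf.write(sys.stdout)   # print the generated configuration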
  • 22. Welcome Back Hyper-V!
      • We currently use Hyper-V/System Centre for our server consolidation activities
          – But need to scale to 100x the current installation size
      • Choice of hypervisors should be tactical
          – Performance
          – Compatibility/support with integration components
          – Image migration from legacy environments
      • CERN is working closely with the Hyper-V OpenStack team
          – Puppet to configure hypervisors on Windows
          – Most functions work well, but further work needed on Console, Ceilometer, …
  • 23. Opportunistic Clouds in online experiment farms
      • The CERN experiments have farms of 1000s of Linux servers close to the detectors, to filter the 1 PByte/s down to the 6 GByte/s to be recorded to tape
      • When the accelerator is not running, these machines are currently idle
          – The accelerator has regular maintenance slots of several days
          – A Long Shutdown is due from March 2013 to November 2014
      • One of the experiments is deploying OpenStack on its farm
          – Simulation (low I/O, high CPU)
          – Analysis (high I/O, high CPU, high network)
  • 24. Federated European Clouds
      • Two significant European projects around federated clouds
          – European Grid Initiative Federated Cloud, a federation of grid sites providing IaaS
          – HELiX Nebula, a European Union funded project to create a scientific cloud based on commercial providers
      • EGI Federated Cloud sites: CESGA, CESNET, INFN, SARA, Cyfronet, FZ Jülich, SZTAKI, IPHC, GRIF, GRNET, KTH, Oxford, GWDG, IGI, TCD, IN2P3, STFC
  • 25. Federated Cloud Commonalities
      • Basic building blocks
          – Each site provides an IaaS endpoint with an API and a common security policy
          – OCCI? CDMI? Libcloud? Jclouds? (Libcloud sketch below)
          – Image stores available across the sites
          – Federated identity management based on X.509 certificates
          – Consolidation of accounting information to validate pledges and usage
      • Multiple cloud technologies
          – OpenStack
          – OpenNebula
          – Proprietary
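Since Libcloud is one of the candidate abstraction layers named above, a minimal Apache Libcloud sketch of talking to an OpenStack site through a provider-neutral API; the endpoint, credentials and tenant are hypothetical:

    # Provider-neutral access to an OpenStack IaaS endpoint via Apache Libcloud.
    # Endpoint URL, credentials and tenant are placeholder values.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.OPENSTACK)
    conn = Driver('alice', 'secret',
                  ex_force_auth_url='https://site.example.org:5000',
                  ex_force_auth_version='2.0_password',
                  ex_tenant_name='atlas')

    # The same list_nodes()/create_node() calls work against other Libcloud
    # providers (e.g. Provider.OPENNEBULA), which is the point of using it
    # in a federation of heterogeneous sites.
    for node in conn.list_nodes():
        print node.name, node.state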
  • 26. Next Steps
      • Deploy into production at the start of 2013 with Folsom, running the Grid software on top of OpenStack IaaS
      • Support multi-site operations with the 2nd data centre in Hungary
      • Exploit new functionality
          – Ceilometer for metering
          – Bare metal for non-virtualised use cases, such as high-I/O servers
          – X.509 user certificate authentication
          – Load balancing as a service
      • Ramping up to 15,000 hypervisors with 100,000 to 300,000 VMs by 2015
  • 27. What we are looking for…
      • Best practice for
          – Monitoring and KPIs as part of core functionality
          – Guest disaster recovery
          – Migration between versions of OpenStack
      • Roles within multi-user projects (role-assignment sketch below)
          – VM owners allowed to manage their own resources (start/stop/delete)
          – Project admins allowed to manage all resources
          – Other members should not have high rights over other members' VMs
      • Global quota management for a non-elastic private cloud
          – Manage resource prioritisation and allocation centrally
          – Capacity management/utilisation for planning
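The project-role wish list above maps onto Keystone's tenant-scoped role assignments. A hedged python-keystoneclient sketch follows; the names are hypothetical, and the per-VM ownership semantics the slide asks for go beyond what these coarse roles provide:

    # Assign a tenant-scoped role with python-keystoneclient (v2.0 API).
    # Credentials and all names are placeholders.
    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin', password='secret',
                             tenant_name='admin',
                             auth_url='https://cloud.example.cern.ch:5000/v2.0/')

    tenant = keystone.tenants.find(name='atlas')
    user = keystone.users.find(name='alice')
    role = keystone.roles.find(name='Member')

    # Grant alice the 'Member' role in the atlas project; project admins
    # would get an admin role the same way.
    keystone.roles.add_user_role(user, role, tenant)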
  • 28. Conclusions
      • Production at CERN in the next few months on Folsom
          – Our emphasis will shift to focus on stability
          – Integrate CERN legacy systems via formal user exits
          – Work together with others on scaling improvements
      • Community is key to shared success
          – Our problems are often resolved before we raise them
          – Packaging teams are producing reliable builds promptly
      • CERN contributes and benefits
          – Thanks to everyone for their efforts and enthusiasm
          – Not just code, but documentation, tests, blogs, …
  • 29. Backup Slides
  • 30. References
        CERN                                      http://public.web.cern.ch/public/
        Scientific Linux                          http://www.scientificlinux.org/
        Worldwide LHC Computing Grid              http://lcg.web.cern.ch/lcg/ and http://rtm.hep.ph.ic.ac.uk/
        Jobs                                      http://cern.ch/jobs
        Detailed Report on Agile Infrastructure   http://cern.ch/go/N8wp
        HELiX Nebula                              http://helix-nebula.eu/
        EGI Cloud Taskforce                       https://wiki.egi.eu/wiki/Fedcloud-tf
  • 31. (Image slide)
  • 32. Community collaboration on an international scale
  • 33. The Large Hadron Collider (LHC) tunnel
  • 34. Accumulating events in 2009–2011
  • 35. Heavy Ion Collisions
  • 36. (Image slide)
  • 37. Our Challenges: data storage
      • >20 years retention
      • 6 GB/s average
      • 25 GB/s peaks
      • 30 PB/year to record
  • 38. 45,000 tapes holding 73 PB of physics data
  • 39. Building Blocks
      (Toolchain diagram: Puppet, Foreman, mcollective, yum, Bamboo, AIMS/PXE, JIRA, OpenStack Nova, git, Koji, Mock, Yum repo / Pulp, Active Directory / LDAP, Lemon, Hardware database, Hadoop, Puppet-DB)
  • 40. Training and Support
      • Buy the book rather than guru mentoring
      • Follow the mailing lists to learn
      • Newcomers are rapidly productive (and often know more than us)
      • Community and Enterprise support means we're not on our own