Scale your Jenkins pipeline
FOSDEM 2013
Anders Nickelsen, PhD
QA engineer @ Tradeshift
@anickelsen, ani@tradeshift.com
As your product grows, the number of tests grows. At Tradeshift, we develop code in a test-driven manner, so every change, be it a new feature or a bug fix, includes a set of functional tests driven by Selenium. At one point it started taking too long to run all of our tests in sequence, even when we scaled up the capacity of the test servers to make individual tests run faster. When tests are slow, developers don't get immediate feedback when they commit a change, and more time is wasted switching between tasks and different parts of the code. To solve this problem, we built parallelization into our Jenkins build pipeline, so we only need to add servers to make our test suites run faster and keep overall test time down. In this talk, I'll cover how we did it and what came out of it in terms of pros and cons.


  1. Scale your Jenkins pipeline
     FOSDEM 2013
     Anders Nickelsen, PhD
     QA engineer @ Tradeshift
     @anickelsen, ani@tradeshift.com
  2. Tradeshift
  3. Tradeshift
     Platform for your business interactions. Core services are free.
     ~3 years old. One product, one production environment.
     ~20 developers in Copenhagen, DK and San Francisco, CA.
     Feb 2, 2013, FOSDEM 2013
  4. Slow test is slow
  5. Why scale? Fast feedback
     Integration tests (IT): Selenium 2 via Geb (Java/Groovy), run by Jenkins on every change.
     Takes 2 hours to run everything in sequence.
     10+ team branches, each triggers IT; verified merges to master also trigger IT.
     The pipeline is a congestion point; feedback is not fast enough.
  6. Release pipeline (diagram)
     Any branch (unit tested) -> 10 team branches -> master -> production.
     Master = 10 projects; integration tests run on changes.
  7. The swarm
  8. (image: © Blizzard 2013)
  9. What?
     12 Jenkins slaves
     – new slaves join the swarm on boot
     – orchestrated by ec2-collective
     – configured by Puppet at boot
     Tests are distributed into sets by longest processing time, using test run-times from the last stable build.
     1 set per online slave (dynamic).
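The set-building step above (longest processing time first, seeded with run-times from the last stable build) can be sketched roughly as below. This is a minimal illustration of the scheduling idea, not Tradeshift's actual code; the class and test names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Longest-processing-time-first (LPT) distribution: sort tests by their
// run-time from the last stable build, then repeatedly assign the next
// longest test to the currently least-loaded set (one set per online slave).
public class TestSetBuilder {

    public static List<List<String>> distribute(Map<String, Long> runTimesMs, int onlineSlaves) {
        List<List<String>> sets = new ArrayList<>();
        long[] load = new long[onlineSlaves];            // accumulated run-time per set
        for (int i = 0; i < onlineSlaves; i++) sets.add(new ArrayList<>());

        List<Map.Entry<String, Long>> tests = new ArrayList<>(runTimesMs.entrySet());
        tests.sort(Map.Entry.<String, Long>comparingByValue().reversed());

        for (Map.Entry<String, Long> test : tests) {
            int least = 0;                               // index of least-loaded set
            for (int i = 1; i < onlineSlaves; i++)
                if (load[i] < load[least]) least = i;
            sets.get(least).add(test.getKey());
            load[least] += test.getValue();
        }
        return sets;
    }
}
```

LPT is a simple greedy heuristic; it keeps the sets roughly balanced as long as last-build run-times are a decent predictor of the next run.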
  10. Fast tests!
      All sets are put into the Jenkins build queue and picked up by any slave
      – throttled to one set per slave
      12 slaves => 20 minutes (tests: 10 min, overhead: 10 min)
      24 slaves => 15 minutes
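The numbers on this slide fit a simple back-of-the-envelope model: the ~2 hours of sequential test time divides evenly across slaves, while roughly 10 minutes of swarm/collection overhead stays constant per run. A sketch under those assumptions:

```java
// Wall-time model matching the slide's figures: parallelizable test time
// shrinks with slave count, fixed overhead does not (an Amdahl's-law shape).
public class WallTime {
    static final int SEQUENTIAL_MIN = 120;  // full suite in sequence (2 hours)
    static final int OVERHEAD_MIN = 10;     // swarm setup, tear-down, result collection

    public static int estimateMinutes(int slaves) {
        return OVERHEAD_MIN + SEQUENTIAL_MIN / slaves;
    }
}
```

With 12 slaves this gives 10 + 10 = 20 minutes, with 24 slaves 10 + 5 = 15 minutes, matching the slide, and it also shows why doubling the swarm only shaved off 5 minutes: the fixed overhead dominates.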
  11. Post processing
      Join when all sets complete
      – triggered builds are blocking
      Collect test results
      – JUnit, HTML, screenshots, videos, console logs = 500 MB/run
      – curl request to the Jenkins API
      Process test results
      – JVM outputs, slice console logs, archive artifacts
      – custom Groovy script
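The "curl request to the Jenkins API" step might look something like the sketch below, which builds the remote-access API URL for one set build's JUnit summary. The host, job, and build names are placeholders, not the setup from the talk.

```shell
# Hedged sketch: construct the Jenkins remote-access API URL for a build's
# JUnit test report. The tree parameter limits the JSON to a few fields.
jenkins_test_report_url() {
  base="$1"; job="$2"; build="$3"
  echo "$base/job/$job/$build/testReport/api/json?tree=passCount,failCount,duration"
}

# Actual collection (needs network access and credentials):
# curl -s -u "$USER:$API_TOKEN" \
#   "$(jenkins_test_report_url http://jenkins.example.com it-set-1 lastCompletedBuild)"
```

The same URL scheme works for archived artifacts (`.../artifact/...`), which is presumably how the 500 MB of screenshots, videos, and logs per run gets pulled off the slaves.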
  12. Optimizations
  13. Optimizations
      Split only at the ITCase file level (file scan).
      Split only at the spec level
      – minimum run time = longest-running spec
      Only the test processing time scales
      – 10 minutes today
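The "min time = longest running spec" bullet is a hard floor on any split: a run can never finish faster than its single longest spec, nor faster than the total work divided evenly across slaves. A small sketch of that bound (illustrative, not from the talk):

```java
// Lower bound on parallel wall time for a spec-level split: whichever is
// larger of (a) the longest single spec and (b) total work spread evenly.
public class LowerBound {
    public static long minWallTimeMs(long[] specMs, int slaves) {
        long total = 0, longest = 0;
        for (long t : specMs) { total += t; longest = Math.max(longest, t); }
        long evenSplit = (total + slaves - 1) / slaves;  // ceil(total / slaves)
        return Math.max(longest, evenSplit);
    }
}
```

This is why splitting below the file level pays off only until one spec dominates; past that point, adding slaves cannot reduce wall time further.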
  14. Lessons learned
  15. Parallelization overhead
      Swarm setup and tear-down
      – node initialization and test result collection
      Tests break a lot when run in parallel; fixing tests and code hurts.
  16. ‘Random failures’
      Failures appear probabilistic / random
      – order dependencies
      – timing issues
      – a rebuild ‘fixes’ the tests
      Slave failures
      – the weakest link breaks the pipeline
      – 24 slaves => 1 filled up due to a slow mount => pipeline broken
  17. Cost optimization
      AWS EC2 instances:
      – 1/2 the price gave 3/4 the performance
      – 4/3 the swarm size gave a 1/4 price reduction at equivalent performance
      The swarm is offline when not in use
      – kept online for 1 hour after use
  18. Credits
      puppetlabs.com
      github.com/andersdyekjaerhansen/ec2_collective
      Jenkins and plugins
      – Swarm plugin, parameterized trigger, envinject, rebuild, join, throttle concurrent builds, conditional build step
      More details of our setup: tradeshift.com/blog/just-add-servers/
      We're hiring! tradeshift.com/jobs
