DESIGN FOR RELIABILITY (DFR) SEMINAR

Mike Silverman // (408) 654-0499 // mikes@opsalacarte.com
Ops A La Carte LLC // www.opsalacarte.com
The following presentation materials are copyright-protected property of Ops A La Carte LLC. These materials may not be distributed outside of your company.
Presenter’s Biographical Sketch – Mike Silverman

◈ Mike Silverman is founder and managing partner of Ops A La Carte, a professional consulting company with an intense focus on helping customers with end-to-end reliability. Through Ops A La Carte, Mike has had extensive experience as a consultant to high-tech companies, consulting for over 300 companies including Cisco, Ciena, Siemens, Abbott Labs, and Applied Materials. He has consulted in a variety of industries including power electronics, telecommunications, networking, medical, semiconductor, semiconductor equipment, consumer electronics, and defense.
◈ Mike has 20 years of reliability and quality experience. He is also an expert in accelerated reliability techniques, including HALT and HASS (and recently purchased a HALT lab), and has tested over 500 products for 100 companies in 40 different industries.
◈ Mike just completed his first book on reliability, “50 Ways to Improve Your Product Reliability”. This course is largely based on the book material.
◈ Mike has authored and published 8 papers on reliability techniques and has presented them around the world, including in China, Germany, Canada, Taiwan, Singapore, and Korea. Ops has also developed and currently teaches 31 courses on reliability techniques.
◈ Mike has a BS in Electrical and Computer Engineering from the University of Colorado at Boulder, and is both a Certified Reliability Engineer and a course instructor through the American Society for Quality (ASQ), IEEE, Effective Training Associates, and Hobbs Engineering. Mike is a member of ASQ, IEEE, SME, ASME, PATCA, and the IEEE Consulting Society, and is the current chapter president of the IEEE Reliability Society for Silicon Valley.
Seminar Overview
Monday, May 9, 2011 – SEMINAR DAY 1

8:30-9:00am     Introduction
9:00-9:30am     DFR Overview
9:30-10:30am    Planning for Reliability – Assessments, Goals, Plans (Ch 5-12)
10:30-11:00am   Allocation/Goal Setting Workshop
11:00-12:00pm   Modeling and Prediction (Ch 21)
12:00-1:00pm    Lunch
1:00-1:30pm     Prediction Workshop
1:30-2:00pm     Thermal/Derating Analysis (Ch 22/23)
2:00-3:00pm     Failure Modes and Effects Analysis (FMEA) (Ch 16)
3:00-3:30pm     Design of Experiments (Ch 25)
3:30-4:00pm     Human Factors Engineering
4:00-4:30pm     Wrap-Up Day 1 / Questions
Seminar Overview
Tuesday, May 10, 2011 – SEMINAR DAY 2

8:30-10:00am    Highly Accelerated Life Test (HALT) (Ch 34)
10:00-11:00am   Accelerated Life Test (ALT) (Ch 36)
11:00-12:00pm   When to Use HALT vs. ALT (Ch 37)
12:00-1:00pm    Lunch
1:00-1:30pm     Reliability Demonstration Test (RDT) (Ch 35)
1:30-2:00pm     When to Use HALT vs. RDT – the HALT Calculator (Ch 38)
2:00-2:30pm     Highly Accelerated Stress Screen (HASS) (Ch 43)
2:30-3:00pm     On-Going Reliability Test (ORT) (Ch 44)
3:00-3:30pm     Root Cause Analysis (RCA) (Ch 40)
3:30-4:00pm     Field Data Analysis (Ch 48)
4:00-4:30pm     Conclusion/Wrap-Up
COMPANY OVERVIEW

Confidence in Reliability
Our Company

Ops A La Carte is a privately held professional reliability engineering firm founded in 2001 and headquartered in Santa Clara, California, with offices in China, India, and Singapore.

Ops A La Carte was named one of the top 10 fastest-growing privately held companies in Silicon Valley in 2006 and 2009 by the San Jose Business Journal.

Ops A La Carte is a solid company that has been profitable every quarter since its inception due to its outstanding reputation, customer value, and scalable business model.
Our Team

Ops A La Carte is made up of a group of highly accomplished reliability consultants. Each of our consultants has 15+ years of reliability engineering and reliability management experience in various industries.

We tap a large network of labs, test facilities, and talented engineering professionals to quickly assemble resources to supplement your organization.
• Ops Solutions – Ops provides end-to-end solutions that target the corporate product reliability objectives
• Ops Individual “A La Carte” Consulting – Ops identifies and solves the missing key ingredients needed for a fully integrated, reliable product
• Ops Training – Ops’ highly specialized leaders and industry experts train others in both standard and customized training seminars
• Ops Testing – Ops’ state-of-the-art lab provides comprehensive testing services

Ops A La Carte assists clients in developing and executing any and all elements of reliability through the product life cycle.

Ops A La Carte has the unique ability to assess a product and understand the key reliability elements necessary to measure and improve product performance and customer satisfaction.

Ops A La Carte pioneered “Reliability Integration” – using multiple tools in conjunction throughout each client’s organization to greatly increase the power and value of any reliability program.
Testing Services
• Our own lab facility is located in Northern California in the heart of Silicon Valley. We provide HALT/HASS services on a worldwide basis, using partner labs for tests outside California.
• Second-oldest HALT facility in the world, established in 1995 (originally owned by QualMark)
• HALT equipment has all the latest technology – the only such lab in the region
• Highly experienced staff with over 100 years of combined experience in HALT and HASS
• Tested over 500 products in over 30 different industries
• Our HALT/HASS services are fully integrated with our other consulting services.
Ops’ New Reliability Book
How Reliable Is Your Product?
50 Ways to Improve Product Reliability
A new book by Ops A La Carte LLC® Founder/Managing Partner Mike Silverman

The book focuses on Mike’s experiences working with over 500 companies in his 25-year career as an engineer, manager, and consultant. It is a practical guide to reliability written for everyone in your organization. In the book we give tips and case studies rather than a textbook full of formulas.

Available January 2011 in hardback for $44.95 or as an ebook for $19.95 at amazon.com or http://www.happyabout.com/productreliability.php

For more info, go to www.opsalacarte.com
FREE Webinars for 2011
• Feb 17 – Medical Device Seminar/Webinar, San Jose
• Mar 2 – Warranty Webinar (coincides with Warranty
  Chain Management Workshop on March 17)
• Mar 3 – Implantable Medical Seminar, Santa Clara
• Mar 22 – Book signing for “How Reliable Is Your
  Product”, Santa Clara
• Apr 6 – Solar Reliability Challenges
• May 3 – DfSS vs. DfR Webinar (tied with WQC)
• Jun 7 – How to Use HALT with Prognostics (tied with
  PHM)

Details for all are on our site at www.opsalacarte.com
Upcoming Events
• May 25 – SEMA (Solar) Event, San Jose. We will be
  giving a reliability presentation
• May 25 – ASQ Medical, Sunnyvale. We will be giving
  a reliability presentation
• June 6 – MD&M East, New York. We will be giving a
  one day seminar on medical reliability testing
• June 7-9 – ARS, San Diego. We will be exhibiting and
  giving two presentations on reliability.
• June 20-23 – PHM Conference, Denver. We will be
  exhibiting and giving a presentation on reliability.

Details for all are on our site at www.opsalacarte.com
Contact Information
                        Ops A La Carte, LLC
                            Mike Silverman
                          Managing Partner
                            (408) 654-0499
                       Skype: mikesilverman
                Email: mikes@opsalacarte.com
             URL: http://www.opsalacarte.com
      Blog: http://www.opsalacarte.com/reliability-blog
Linked-In: http://www.linkedin.com/pub/mike-silverman/0/3a7/24b
Twitter: http://twitter.com/opsalacarte
Facebook: http://www.facebook.com/pages/Santa-Clara-CA/Ops-A-La-Carte-LLC/155189552669
Bio: http://www.mike-silverman.com
Ops Public Calendar: http://www.google.com/calendar/embed?src=opsalacarte%40gmail.com&ctz=America/Los_Angeles
What Is DFR and What Is Not DFR

A High-Level Overview
DfR Is Not

• Making a list of all possible reliability activities and then trying to cover as many as possible within the timeframe of the product development process.
• Using only certain selected tools from the “DfR toolbox”.
• Assuming that product reliability is the sole responsibility of a reliability engineer (the reliability engineer is the guide and mentor but not the owner – the designer should be the owner).
• Completing the analytical work but delaying test and verification until the system testing stage.
DfR Is Not (cont’d)
 • Getting the product into test as fast as possible to test reliability 
   into the product (a.k.a. Test‐Analyze‐and‐Fix)

 • Only working on the in‐house design items and not worrying 
   about vendor items

 • Working in silos between EE, Mech E, Software, etc. (even if they 
   apply some or most of the DfR tools) – all competencies must 
   work together to reach common goals.

 • Not looking at interactions between groups and not taking a 
   system level viewpoint.

DfR Is

• Setting goals at the beginning of the program and then developing a plan to meet the goals.
• Having the reliability goals driven by the design team, with the reliability team acting as mentors. Having everyone working to a common set of goals. The reliability engineer doesn’t own the goal but is a key influencer.
• Providing metrics so that you have checkpoints on where you are against your goals.
• Writing a Reliability Plan (not only a test plan) to drive your program.
DfR Is (cont’d)

• DfR is the process of building reliability into the design.
• DfR begins from the very early stages of the design (concept phase) and should be integrated into every stage of this process.
• As a result of this process, reliability must be designed into products and processes using the best available science-based methods.
• Before moving from one phase of the product life cycle to the next, there must be a gate to measure reliability and assure you are on target.
DfR Flow

• Initiate a Reliability Program
• Determine next best steps
• Reduce customer complaints
• Select right tools
• Improve reliability

[Flow diagram: Assessment Interviews → Statistical Data Analysis → Benchmarking → Gap Analysis → Program Plan, moving from the current state (field failures, complaints, the cost of unreliability, unknown reliability) toward the goal ($ profits, market share, customer satisfaction).]

A detailed evaluation of an organization’s approach and processes involved in creating reliable products. The assessment captures the current state and leads to an actionable reliability program plan.
From the Toolbox Approach to the Structured Approach to DfR

[Figure: evolution from the toolbox approach to the structured approach to DfR – A. Mettas, IJPE, 2010]
DfR Key Activities Flow

1. Identify
2. Design
3. Analyze
4. Verify
5. Validate
6. Monitor and Control
1. Identify

• Goal: quantitatively define the reliability requirements for a product as well as the end-user environmental/usage conditions.
• Customer expectations and how to translate them into engineering metrics (e.g., survive a 15-year life)
• Develop specific environmental test requirements (e.g., converting the requirement of B5 life at 280K miles for a heavy-duty truck into a test flow and test sample size – see the sketch after this list)
• Identify technology limitations (e.g., battery, optics, specific components, etc.) and the relevant validation strategies
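As a rough illustration of that last conversion step, here is a minimal sketch (not from the seminar materials) using the standard zero-failure, success-run relation n = ln(1 − C) / ln(R). The 90% confidence level is an illustrative assumption.

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Smallest zero-failure sample size n such that 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# B5 life at 280K miles means R = 0.95 at 280K miles (at most 5% failed).
# Demonstrating that at 90% confidence with zero failures allowed:
n = success_run_sample_size(reliability=0.95, confidence=0.90)
print(n)  # 45 units, each run to the 280K-mile equivalent, with no failures
```

Each of those 45 samples would need to accumulate the 280K-mile equivalent, which is where the accelerated test flow mentioned in the bullet comes in.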
1. Identify:  Activities/Tools
   • Goal Setting

   • Metrics

   • Gap Analysis

   • Benchmarking

   • Reliability Program Plan

   • QFD (Quality Function Deployment)
1. Identify:  Goals
    • Reliability Goals & Metrics tie together all stages of the 
      product life cycle. Well crafted goals provide the target for 
      the business to achieve, they set the direction.

    • Reliability Goals can be derived from:
           – Customer‐specified or implied requirements
           – Internally‐specified or self‐imposed requirements (usually based 
             on trying to be better than previous products)
           – Benchmarking against competition
           – Industry standards
           – Engineering common sense




1. Identify: Metrics
      • Metrics provide:
        – the milestones,
        – the “are we there, yet”, and
        – the feedback
           that all elements of the organization require 
           to stay on track toward the goals.




1. Identify: Reliability Program Plan

• A Reliability Program and Integration Plan is crucial at the beginning of the product life cycle because in this plan, we define:
   – What are the overall goals of the product and of each assembly that makes up the product?
   – What has been the past performance of the product?
   – What is the size of the gap?
   – What are the constraints?
   – What reliability elements/tools will be used?
   – How will each tool be implemented and integrated to achieve the goals?
   – What is our schedule for meeting these goals?
2. Design

• This is the stage where specific design activities begin, such as circuit layout, mechanical drawing, component/supplier selection, etc. Therefore, a better design picture begins emerging.
• In this stage, a clearer picture of what the product is supposed to do starts developing.
   – More specific reliability requirements are defined
   – The more the design/application changes, the more reliability risks are introduced
• Program risk can be assessed
2. Design: Activities/Tools

• Reliability Prediction (compare design alternatives, identify preferred components and suppliers – see the sketch below)
• Cost trade-offs
• Tolerance evaluation
• Better understanding of customer specifications
• FMEA
• FTA
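As a hedged illustration of how a prediction can compare design alternatives, here is a minimal parts-count-style sketch for a series system. The component names and FIT values are invented for the example, not drawn from the seminar or any handbook.

```python
import math

# Illustrative per-component failure rates in FIT (failures per 1e9 hours)
fit = {"power_regulator": 120, "microcontroller": 80, "connector": 25, "capacitor_bank": 40}

lam = sum(fit.values()) * 1e-9          # series-system failure rate, failures/hour
mtbf = 1.0 / lam                        # ~3.8 million hours for this toy example
r_5yr = math.exp(-lam * 5 * 8760)       # reliability over 5 years of continuous use

print(f"MTBF = {mtbf:,.0f} h, R(5 yr) = {r_5yr:.4f}")  # R(5 yr) ~ 0.9885
```

Swapping component FIT values in and out is one way such a prediction supports the “compare design alternatives” use in the bullet above.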
3. Analyze
  • Estimating the product's reliability, often with a 
    rough first cut estimate, early in the design phase. It 
    is important at this phase to address all the 
    potential sources of product failure.  

  • Close cooperation between reliability engineer and 
    the design team can be very beneficial at this phase.




3. Analyze: Activities/Tools

• Finite Element Analysis, Physics of Failure
• Reliability Prediction (reliability block diagrams)
• Engineering judgment, expert opinions, existing data
• Warranty Analysis of the existing products
• DRBFM or Change Point Analysis (if needed)
• Stress-Strength Analysis (see the sketch below)
• FMEA (updated)

[Figure: overlapping stress and strength distributions illustrating stress-strength interference.]
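For the stress-strength bullet above, a minimal sketch of the classic normal-normal interference calculation. It assumes independent, normally distributed stress and strength; all numbers are illustrative.

```python
from scipy.stats import norm

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent, normally distributed stress and strength."""
    z = (mu_strength - mu_stress) / (sd_strength**2 + sd_stress**2) ** 0.5
    return norm.cdf(z)

# Illustrative units only (e.g., MPa)
print(interference_reliability(50, 4, 38, 3))  # z = 2.4 -> reliability ~ 0.9918
```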
4. Verify

• Prototype hardware build. Quantify all of the previous work based on test results. By this stage, prototypes should be ready for testing and more detailed analysis.
• Iterative process where different types of tests are performed, product weaknesses are uncovered, the results are analyzed, and design changes are made.
4. Verify: Activities/Tools

• HALT
• ALT
• Test to failure (life data analysis – see the sketch after this list)
• Degradation analysis
• Reliability Growth Process (if enough data is available)
• DRBTR (Design Review Based on Test Results)
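For the test-to-failure bullet above, a minimal life data analysis sketch fitting a 2-parameter Weibull. The failure times are hypothetical and, unlike most real test data, contain no censored units; a real analysis usually has to handle suspensions.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical hours-to-failure from a small test-to-failure sample (no censoring)
hours = np.array([410.0, 720.0, 980.0, 1150.0, 1420.0, 1680.0, 2100.0, 2600.0])

beta, _, eta = weibull_min.fit(hours, floc=0)      # 2-parameter Weibull MLE
b10 = eta * (-np.log(0.90)) ** (1.0 / beta)        # time by which 10% have failed

print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h, B10 = {b10:.0f} h")
```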
5. Validate (assure production readiness)

• Validation usually involves functional and environmental testing on a system level, with the purpose of becoming production-ready.
• Making sure that the product is ready for high-volume production.
• Design modifications might be necessary to improve robustness.
5. Validate: Activities/Tools

• Design Validation (including Accelerated Life Testing and Reliability Demonstration)
• Process Validation

Note: often the program schedule leaves no time for test to failure at this stage; most of it should be done in the previous stages. The validation phase is often done via ‘test to success’ (see the sketch below).
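A minimal sketch of what a ‘test to success’ plan can and cannot claim: with n units each run for one life and zero failures, the demonstrated reliability at confidence C is R = (1 − C)^(1/n). The sample size here is an illustrative assumption.

```python
# Given the units the schedule allows, what reliability does a zero-failure
# 'test to success' actually demonstrate?
n = 22  # illustrative sample size
for confidence in (0.60, 0.80, 0.90):
    r_demo = (1 - confidence) ** (1 / n)
    print(f"C = {confidence:.0%}: demonstrated R = {r_demo:.3f}")
# 22 units with no failures demonstrates only ~0.90 reliability at 90% confidence
```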
6. Control

 • Assuring that the process remains unchanged and 
   the variations remain within the tolerances.




6. Control:  Activities/Tools

 • Control Charts and Process Capability Studies (Cpk, 
   Ppk, etc.)
 • Human Reliability
 • Continuous Compliance
 • Field return analysis (warranty) and forecasting
 • ORT (ongoing reliability testing) 
 • Audits
 • Lessons Learned for the next generation of 
   products (important to close the cycle on DfR)

DfR Key Activities

1. Identify – QFD, requirements definitions, benchmarking, product usage analysis, understanding of customer requirements and specifications
2. Design – DFMEA, cost trade-off analysis, probabilistic design, tolerance analysis, lessons learned
3. Analyze – FEA, warranty data analysis, DRBFM, reliability prediction, reliability block diagrams, lessons learned
4. Verify – HALT, evaluation testing, DRBTR, reliability growth modeling, change point analysis
5. Validate – design and process validation, accelerated test, reliability demonstration
6. Monitor and Control – HASS, control charts, re-validation, audits, look-across, lessons learned, ORT
Key points for implementing DfR activities

• Start DfR activities early in the process
• The reliability engineer’s job is to lead/coach the design team
• Integrate Reliability and Quality Engineers with the design teams
• Warranty/field data analysis (both statistical and root cause analysis) needs to be fed back to both design and reliability teams
• Reduce the number of tools in the toolbox, but use the remaining ones well. Not all steps or tools are necessary for every program.
Program Risk Assessment: Key to Resource Management

Ask the following questions at the beginning of the program:
• Will this product contain any new technology with an unproven reliability record?
• Will this design be significantly different from the old one (e.g., more than 30% of content is new)?
• Will this product be used in different geographic regions or be exposed to more extreme environments?
• Does this product have new requirements (e.g., 15 years of life instead of 10 years)?
Program Risk Assessment: Key to Resource Management (cont’d)

• Will this product have a new application (e.g., underhood vs. passenger compartment, or military vs. automotive)?
• Are any new materials used in the design?
• Will this product have new suppliers?
• Will the product be made at a different manufacturing location?
• Are there any other changes that can affect reliability?

The more “yes” answers, the higher the risk – and the more DfR tools to use (see the sketch below).
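One informal way to act on the tally is sketched below; the answer set and the risk-tier thresholds are illustrative assumptions, not part of the assessment itself.

```python
# Hypothetical tally of the risk questions above; thresholds are made up
answers = {
    "new technology": True,
    "design > 30% new": True,
    "new region/environment": False,
    "new requirements": False,
    "new application": False,
    "new materials": False,
    "new suppliers": True,
    "new manufacturing location": False,
    "other reliability-affecting changes": False,
}
yes_count = sum(answers.values())
risk = "low" if yes_count <= 2 else "medium" if yes_count <= 5 else "high"
print(f"{yes_count} 'yes' answers -> {risk} program risk")  # 3 -> medium
```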
AIAG Reliability Maturity Assessment Categories

• A. Reliability planning – 9 questions (benchmarking, reliability planning, etc.)
• B. Design for Reliability – 21 questions (FMEA, design optimization, sneak circuits, etc.)
• C. Reliability prediction and modeling – 7 questions (FTA, reliability prediction, etc.)
• D. Reliability of mechanical components and systems – FEA, derating, and degradation analysis

Note: category B is called “Design for Reliability”, although it contains only a subset of tools from the ‘traditional’ DfR process.
AIAG Reliability Maturity Assessment Categories (cont’d)

• E. Statistical concepts – 4 questions (DOE, statistical tolerancing, etc.)
• F. Failure reporting and analysis – 11 questions (problem solving, warranty databases, etc.)
• G. Analyzing reliability data – 4 questions (Weibull, reliability growth)
• H. Reliability testing – 7 questions (test planning, HALT, HASS)
• I. Reliability in manufacturing – 8 questions (ESS, MSA, PPAP)
Example of the AIAG Reliability Maturity Assessment

[Figure: RMI radar plot by category, comparing the organization’s score and average score against the minimum B-level and minimum A-level across the nine categories: A. Reliability Planning; B. Design for Reliability; C. Reliability Prediction and Modeling; D. Reliability of Mechanical Components and Systems; E. Statistical Concepts; F. Failure Reporting and Analysis; G. Analyzing Reliability Data; H. Reliability Testing; I. Reliability in Manufacturing.]
Challenges with Implementing DfR

• Getting in early enough – time-to-market pressure and the rush to demonstrate lead teams to skip steps.
• Reliability engineers are tied up on current projects, and new projects start without them.
• Getting the designers to understand DfR so that they can drive the program.
• Culture – will it accept DfR? How do you get management buy-in? Requires patience and addressing the concerns of management.
• “We are already good enough. Why do we need it?”
Overcoming Challenges
  • Cost justification

  • Management buy‐in

  • Education to designers

  • Voice of the customer

  • Case study/Successful demonstration

  • Ability to measure success (metrics)

Conclusion

• DfR is a process that begins in the very early stages of the design and should be integrated into every stage of the design process.
• As a result of this process, reliability must be designed into products and processes using the best available science-based methods.
What Is DESIGN for RELIABILITY?
First we must ask: What is Reliability?

Reliability is often considered quality over time.

Reliability is…
“The ability of a system or component to perform its required functions under stated conditions for a specified period of time”
                                                      – IEEE 610.12-1990

We shall revisit this when we discuss Reliability Goal Setting.
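To make “a specified period of time” concrete: assuming a constant failure rate (a common simplification, not part of the IEEE definition), R(t) = exp(−t/MTBF), so even a large MTBF implies some fraction of failures within the period of interest. A minimal sketch with illustrative numbers:

```python
import math

# With a constant failure rate, R(t) = exp(-t / MTBF)
mtbf = 100_000          # hours (illustrative)
t = 8_760               # one year of continuous operation
print(math.exp(-t / mtbf))   # ~0.916: roughly 8.4% of units fail in the first year
```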
Different Views of Reliability

 Product development teams view reliability as the domain to address mechanical, electrical, and manufacturing issues.

 Customers view reliability as a system-level issue, with minimal concern placed on the distinction into sub-domains.

 Since the primary measure of reliability is made by the customer, engineering teams must maintain a balance of both views (system and sub-domain) in order to develop a reliable product.

[Diagram: Mechanical Reliability + Electrical Reliability + SW Reliability = System]
Reliability vs. Cost

 Intuitively, emphasizing reliability to reduce warranty and in-service costs results in some minimal increase in development and manufacturing costs.

 Use of the proper tools during the proper life cycle phase will help to minimize total Life Cycle Cost (LCC).
Reliability vs. Cost, continued
   To minimize total Life Cycle Costs (LCC), an
   organization must do two things:
1. Choose the best tools from all of the tools
   available and apply these tools at the proper
   phases of the product life cycle.
2. Properly integrate these tools together to assure
   that the proper information is fed forwards and
   backwards at the proper times.




Reliability Integration

“the process of seamlessly, cohesively integrating reliability tools together to maximize reliability at the lowest possible cost”
Reliability vs. Cost, continued

HW RELIABILITY & COSTS
[Figure: cost vs. reliability. Rising reliability program costs and falling warranty costs sum to a total cost curve with an optimum cost point – see the sketch below.]

Does this apply to SW Reliability? Not really.
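A toy numerical sketch of the optimum cost point; both cost functions are made up purely to show that a rising program cost plus a falling warranty cost yields a minimum somewhere in between.

```python
import numpy as np

r = np.linspace(0.80, 0.999, 1000)        # achieved reliability
program = 50.0 * (r / (1.0 - r)) ** 0.5   # rising reliability-program cost (made up)
warranty = 4000.0 * (1.0 - r)             # falling warranty cost (made up)
total = program + warranty                # total cost curve

i = int(np.argmin(total))
print(f"optimum near R = {r[i]:.3f}")     # the optimum cost point on the curve
```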
Reliability vs. Cost, continued

SYSTEM RELIABILITY & COSTS
[Figure: the same total cost curve at the system level, with the warranty curve now labeled HW warranty costs. The SW impact on HW warranty costs is minimal at best.]
Reliability vs. Cost, continued

 SW has no associated manufacturing costs, so warranty costs and savings are almost entirely allocated to HW.
 If there are no cost savings associated with improving software reliability, why not leave it as is and focus on improving HW reliability to save money?
    One study found that the root causes of typical embedded system failures were SW, not HW, by a ratio of 10:1.
    Customers buy systems, not just HW.
 The benefits of a SW Reliability Program are not in direct cost savings, but rather in:
    Increased SW/FW staff availability and reduced operational schedules, resulting from less corrective-maintenance content.
    Increased customer goodwill based on improved customer satisfaction.

This will be explored in more detail during the S/W DFR Seminar.
Design for Reliability (DfR) Tools by Phase
System DfR Tools by Phase

Concept
  Activities: Define project reliability requirements (Reliability Program and Integration Plan)
  Tools: Benchmarking; Internal Goal Setting; Gap Analysis
Design (Architecture and High Level Design)
  Activities: Modeling & Predictions
  Tools: Reliability Modeling; System Failure Predictive Analysis (FMECA & FTA); Human Factors Analysis
Initial System Testing
  Activities: Defect detection at system level
  Tools: HALT; DVT
Final System Testing
  Activities: Verify reliability metrics
  Tools: RDT; V&V
Operations and Maintenance
  Activities: Continuous assessment of product reliability
  Tools: FRACAS; RCA
Hardware DfR Tools by Phase

Concept
  Activities: Define HW reliability requirements
  Tools: Benchmarking; Internal Goal Setting; Gap Analysis
Design – Architecture & High Level Design
  Activities: Modeling & Predictions
  Tools: Reliability Modeling; HW Failure Predictive Analysis (FMECA & FTA); HW Fault Tolerance; Human Factors Analysis
Design – Low Level Design
  Activities: Reliability Analysis
  Tools: Human Factors Analysis; Derating Analysis; Worst Case Analysis
Prototype (first time product is tested)
  Activities: Detect design defects
  Tools: HALT; ALT; DOE; Multi-variant Testing; RDT
Manufacturing
  Activities: Identify and correct manufacturing process issues
  Tools: HASS; HASA
Operations and Maintenance
  Activities: Continuous assessment of HW reliability
  Tools: ORT
Software DfR Tools by Phase

Concept
  Activities: Define SW reliability requirements
  Tools: Benchmarking; Internal Goal Setting; Gap Analysis
Design – Architecture & High Level Design
  Activities: Modeling & Predictions
  Tools: SW Failure Analysis; SW Fault Tolerance; Human Factors Analysis
Design – Low Level Design
  Activities: Identify core, critical, and vulnerable sections of the design; static detection of design defects
  Tools: Human Factors Analysis; Derating Analysis; Worst Case Analysis
Coding
  Activities: Static detection of coding defects
  Tools: FRACAS; RCA
Unit Testing
  Activities: Dynamic detection of design and coding defects
  Tools: FRACAS; RCA
Integration and System Testing
  Activities: SW Statistical Testing
  Tools: FRACAS; RCA; SW Reliability Testing
Operations and Maintenance
  Activities: Continuous assessment of product reliability
  Tools: FRACAS; RCA
ELEMENTS OF A RELIABILITY PROGRAM
Where to Start a DfR Program?


A reliability assessment is our recommended first
step in establishing a reliability program. This
mechanism is the appropriate forum for selecting
the best tools for each product life cycle phase.




RELIABILITY ASSESSMENT
Reliability Program Assessment

• Initiate a Reliability Program
• Determine next best steps
• Reduce customer complaints
• Select right tools
• Improve reliability

[Flow diagram: Assessment Interviews → Statistical Data Analysis → Benchmarking → Gap Analysis → Program Plan, moving from the current state (field failures, complaints, the cost of unreliability, unknown reliability) toward the goal ($ profits, market share, customer satisfaction).]

A detailed evaluation of an organization’s approach and processes involved in creating reliable products. The assessment captures the current state and leads to an actionable reliability program plan.
Steps within an Assessment

                       • motivation
                       • approach
                       • results
                       • findings
                       • observations
                       • next steps
                       • close

Assessment Motivation

• Identify systemic changes that impact
 reliability
   – Tie into culture and product
   – Both enjoy benefits

• Provides roadmap for activities that
 achieve results
   – Matching of capabilities and expectations
   – Cooperative approach


Assessment Approach
 Preparation

 Checklist

 Who to interview in organization

 Analysis, average scores and summary of
 comments




Steps Involved

 selecting people to survey
 selecting survey topics
 developing a scoring system
 analyzing the data
 summarizing feedback results
 reviewing the results
 recommending actions
Select People to Survey
Hardware:
 Hardware manager
 Electrical engineering lead
 Mechanical engineering lead
 System engineering lead
 Reliability manager/engineer
 Procurement
 Manufacturing

Software:
 SW R&D manager
 SW R&D engineer
 SW test manager
 SW test engineer
Select Survey Topics
                    DFR Methods Survey
           Scoring: 4 = 100%, top priority, always done
                    3 = >75%, use normally, expected
                    2 = 25% - 75%, variable use
                    1 = <25%, only occasional use
                    0 = not done or discontinued
                    - = not visible, no comment

 Management:
 □ Goal setting for division
 □ Priority of quality & reliability improvement
 □ Management attention & follow up (goal ownership)


 Design:
 □ Documented hardware design cycle
 □ Goal setting by product or module
Example

To what extent is FMEA used?

  Design Engineer
    Score = 1: Used only as a troubleshooting tool
  Manufacturing Engineer
    Score = 3: Commonly used on critical design elements
  Reliability Engineer
    Score = 4: Always used on all products

Results: Score 2.6
Comments: Clearly a disconnect between reliability and design engineering – indicative of a problem with the tool.
Reliability Maturity Grid
• 5 levels of maturity

• Loosely based on IEEE 1624: “Reliability Program
  for the Development and Production of Electronic
  Products”
• Similar to Crosby’s Quality Maturity
• On the following page is a matrix based on
  Crosby’s as an example.
• Read across each row and find the statement that
  seems most true for your organization.
• The center of mass of the levels is the
  organization’s overall level.

Reliability Maturity Matrix

Management Understanding and Attitude
  Stage I (Uncertainty): No comprehension of reliability as a management tool. Tend to blame reliability engineering for ‘reliability problems’.
  Stage II (Awakening): Recognizing that reliability management may be of value but not willing to provide money or time to make it happen.
  Stage III (Enlightenment): Still learning more about reliability management. Becoming supportive and helpful.
  Stage IV (Wisdom): Participating. Understand absolutes of reliability management. Recognize their personal role in continuing emphasis.
  Stage V (Certainty): Consider reliability management an essential part of the company system.

Reliability status
  Stage I: Reliability is hidden in manufacturing or engineering departments. Reliability testing probably not part of the organization. Emphasis on initial product functionality.
  Stage II: A stronger reliability leader appointed, yet main emphasis is still on an audit of initial product functionality. Reliability testing still not performed.
  Stage III: Reliability manager reports to top management, with a role in management of the division.
  Stage IV: Reliability manager is an officer of the company; effective status reporting and preventive action. Involved with consumer affairs.
  Stage V: Reliability manager is on the board of directors. Prevention is the main concern. Reliability is a thought leader.

Problem handling
  Stage I: Fire fighting; no root cause analysis or resolution; lots of yelling and accusations.
  Stage II: Teams are set up to solve major problems. Long-range solutions are not identified or implemented.
  Stage III: Corrective action process in place. Problems are recognized and solved in an orderly way.
  Stage IV: Problems are identified early in their development. All functions are open to suggestion and improvement.
  Stage V: Except in the most unusual cases, problems are prevented.

Cost of Reliability as % of net revenue
  Stage I: Warranty: unknown; Reported: unknown; Actual: 20%
  Stage II: Warranty: 3%; Reported: unknown; Actual: 18%
  Stage III: Warranty: 4%; Reported: 8%; Actual: 12%
  Stage IV: Warranty: 3%; Reported: 6.5%; Actual: 8%
  Stage V: Warranty: 1.5%; Reported: 3%; Actual: 3%

Feedback process
  Stage I: None. No reliability testing. No field failure reporting other than customer complaints and returns.
  Stage II: Some understanding of field failures and complaints. Designers and manufacturing do not get meaningful information.
  Stage III: Accelerated testing of critical systems during design. System-level modeling and testing. Field failures analyzed and root causes reported.
  Stage IV: Refinement of testing systems – only testing critical or uncertain areas. Increased understanding of causes of failure allows deterministic failure rate prediction models.
  Stage V: The few field failures are fully analyzed and product designs or procurement specifications altered. Reliability testing done to augment reliability models.

DFR program status
  Stage I: No organized activities. No understanding of such activities.
  Stage II: Organization told reliability is important. DFR tools and processes inconsistently applied and only ‘when time permits’.
  Stage III: Implementation of a DFR program with thorough understanding and establishment of each tool.
  Stage IV: DFR program active in all areas of the division – not just design and manufacturing. DFR a normal part of R&D and manufacturing.
  Stage V: Reliability improvement is a normal and continued activity.

Summation of reliability posture
  Stage I: “We don’t know why we have problems with reliability.”
  Stage II: “Is it absolutely necessary to always have problems with reliability?”
  Stage III: “Through commitment and reliability improvement we are identifying and resolving our problems.”
  Stage IV: “Failure prevention is a routine part of our operation.”
  Stage V: “We know why we do not have problems with reliability.”
Reliability Maturity Matrix

Let’s look at one row to get a better understanding.

Problem handling
  Stage I (Uncertainty): Fire fighting; no root cause analysis or resolution; lots of yelling and accusations.
  Stage II (Awakening): Teams are set up to solve major problems. Long-range solutions are not identified or implemented.
  Stage III (Enlightenment): Corrective action process in place. Problems are recognized and solved in an orderly way.
  Stage IV (Wisdom): Problems are identified early in their development. All functions are open to suggestion and improvement.
  Stage V (Certainty): Except in the most unusual cases, problems are prevented.
Results & Meaning

• Looking for trends, gaps in process, skill mismatches, over-analysis, under-analysis, etc.
• Looking for differences across the organization, pockets of excellence, areas with good results
• Process provides a snapshot of the current system
• No one tool makes an entire reliability program. The tools need to match the needs of the products and the culture.
• A check step is critical before moving to recommendations around an improvement plan
HW Observations

What Companies Are Doing Best:
 Prediction
 HALT
 Golden nuggets
 Fast reaction to fix problems

What Companies Are Weak at:
 Goal setting/Planning
 Repair & warranty invisible
 Lessons learned capture
 Single owner of product reliability
 Multiple defect tracking systems
 Reliability Integration
 Statistics
SW Observations

What Companies Are Doing Best:
 Unit testing
 Bug tracking database

What Companies Are Weak at:
 Synergy with the hardware team
 Reliability goal setting for SW
 Sufficient development best practices
 Lessons learned capture
 Explicit SW reliability measurements and metrics
 Effective system testing
Typical Recommended Tools Based on
Assessments
• Goal Setting
• Writing Solid Plans and Executing (with check steps)
• Predictions (not just to get an MTBF number)
• FMEAs
• ALT
• HALT
• Lessons Learned
• Field Data Review




Next Steps
• Determine current state of your organization
  (Summary of Assessment)
    – Identify strong and weak areas

• Goal Setting
   – Market Analysis to gather requirements
   – Benchmarking


• Gap Analysis


• Develop plan and implement

Reliability Philosophies



  Two fundamental methods to
achieving high product reliability:

      Build, Test, Fix
      Analytical Approach



Build, Test, Fix

 In any design there are a finite number of flaws.
 If we find them, we can remove them.

 Rapid prototyping
 HALT
 Large field trials or ‘beta’ testing
 Reliability growth modeling
Analytical Approach

 Develop goals
 Model expected failure mechanisms
 Conduct accelerated life tests
 Conduct reliability demonstration tests
 Routinely update the system-level model

 Balance simulation and testing to increase the ability of the reliability model to predict field performance.
Issues with each approach

Build, Test, Fix:
 Uncertain if the design is good enough
 Limited prototypes means limited flaws discovered
 Unable to plan for warranty or field service

Analytical:
 Fixes mostly known flaws
 ALTs take too long
 RDTs take even longer
 Models have large uncertainty with new technology and environments
Balanced approach

              Goal
              Plan

   FMEA            Prediction
   HALT            RDT/ALT

       Verification
          Review

      © 2008 Ops A La Carte     37
RELIABILITY
GOAL-SETTING

Establish the target in a manner meaningful to engineering

                © 2008 Ops A La Carte     41
Reliability Definition (revisited)
Reliability is often considered quality over time.

Reliability is…
◈ “The ability of a system or component to perform its required
   functions under stated conditions for a specified period of time”
                                                       IEEE 610.12-1990

                              © 2008 Ops A La Carte                      42
Reliability Goals & Metrics Summary

◈ Reliability Goals & Metrics tie together all stages
  of the product life cycle. Well-crafted goals
  provide the target for the business to achieve;
  they set the direction.

◈ Metrics provide:
   – the milestones,
   – the “are we there, yet”, and
   – the feedback
  that all elements of the organization require to
  stay on track toward the goals.

                       © 2008 Ops A La Carte           43
Reliability Goal-Setting

◈ Reliability Goals can be derived from:
   – Customer-specified or implied requirements
   – Internally-specified or self-imposed requirements
     (usually based on trying to be better than previous products)
   – Benchmarking against competition

                      © 2008 Ops A La Carte               44
Reliability Goal-Setting

◈ Customer-specified or implied requirements
   – Many times the customer will specify the reliability
     requirements for the product
      • MTBF, MTTR, Availability, DOA Rate, and Return
        Rate are the most common, but there are many others
   – Sometimes, the customer will not specify the exact
     requirements, but there will be implied requirements
      • “Product must be ‘highly reliable’ over its life”
      • “The product should not fail in a way that requires
        a drilling session to be aborted.”
      • “A partial loss of collection data is allowed.”

                       © 2008 Ops A La Carte                45
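Since MTBF, MTTR, and Availability are named together above, a minimal sketch of how the standard steady-state availability formula, A = MTBF / (MTBF + MTTR), ties the first two metrics to the third. The example numbers are assumptions for illustration, not seminar data:

```python
# Steady-state availability from MTBF and MTTR (standard formula).
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. an assumed 10,000-hour MTBF with a 4-hour mean time to repair:
print(round(availability(10_000.0, 4.0), 5))  # 0.9996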
Reliability Goal-Setting

◈ Internally-specified or self-imposed requirements
   – These are usually based on trying to be better than
     previous products
   – The process involves interviewing key members of
     various departments and at contract manufacturing
     partners to find out what they have set as internal goals
   – These goals may need to be adjusted as information
     is gathered, but this represents a good starting point

                      © 2008 Ops A La Carte               46
Reliability Goal-Setting

◈ Internally-specified Goals
  (Based on Trying to be Better than Previous Products)
   – Often, companies will set an internal goal to improve
     reliability by X% from one generation to the next.
   – It is not uncommon for this factor to be as high as 2x.
   – For SW, internal improvement goals require changes
     to development processes:
      • Goals less than 2x can generally be achieved by
        adjustments to existing processes
      • Goals of 2x or higher usually require significant
        changes to existing processes or the adoption of
        new development practices

                       © 2008 Ops A La Carte                 47
Reliability Goal-Setting

◈ Internally-specified Goals
  (Based on Interviewing Key Members of Various Departments)
   – Key individuals from various departments within the
     company, such as:
      • marketing and sales
      • hardware and software engineering
      • customer service and field support
      • manufacturing and test
      • quality and reliability
   – Key individuals at Contract Manufacturing partners

                      © 2008 Ops A La Carte              48
Reliability Goal-Setting

◈ Internally-specified Goals
  (May Need to be Adjusted as Information is Gathered,
  but This Represents a Good Starting Point)
   – New goals from customers may supersede any internal goals
   – Information from the Gap Analysis may cause us to
     change our goals
      • If the Gap is unrealistically high, it may make sense
        to reduce goals so that they are obtainable

                      © 2008 Ops A La Carte                 49
Reliability Goal-Setting

◈ Benchmarking Against Competition
   – Benchmarking is the process of comparing the current
     project, methods, or processes with the best practices
     in the industry
   – Benchmarking is crucial both to a start-up and to an
     established company that is coming out with a new
     product, to assure that the new product is competitive
     based on reliability and cost.
   – Benchmarking is often useful even if your customer has
     specified the reliability requirements, so that we get a
     “sanity check” against the rest of the industry.

                       © 2008 Ops A La Carte                 50
Reliability Goal-Setting

◈ Benchmarking Key
   – Work with Marketing
      • Marketing knows who competitors are
      • Marketing knows what customers are asking for

  Work with Marketing to Marry Up Requirements!

                     © 2008 Ops A La Carte             51
Reliability Goal-Setting

◈ Product vs. Process Benchmarking
   – Product Benchmarking: Comparing product requirements
     such as failure rate, MTBF, DOA rate, Annualized Failure
     Rate, Availability, Maintainability, and more.
   – Process Benchmarking: Comparing process methodologies
     such as in-house vs. outsourced builds, quality philosophy,
     and screening methods.

                      © 2008 Ops A La Carte              52
Reliability Goal-Setting

◈ Reliability Goals – Which Should We Use?
   – Customer-specified or implied requirements?
   – Internally-specified or self-imposed requirements?
   – Benchmarking?

◈ For Best Results, Use All Three!

                       © 2008 Ops A La Carte               53
Reliability Goals & Metrics Summary

◈ A reliability goal includes each of the five elements of
  the reliability definition:
   – Probability of product performance
   – Intended function
   – Specified life
   – Specified operating conditions
   – Customer expectations

                        © 2008 Ops A La Carte           54
Reliability Goals & Metrics Summary

◈ A reliability metric is often something that an
  organization can measure on a relatively short,
  periodic basis:
   – Predicted failure rate (during design phase)
   – Field failure rate
   – Warranty
   – Actual field return rate
   – Dead on Arrival rate

                         © 2008 Ops A La Carte     55
Fully-Stated Reliability Goals

◈ System goal at multiple points
   – Supporting metrics during development and field
   – Apportionment to appropriate level

◈ Provide connections to overall business plan,
  contracts, customer expectations, and include any
  assumptions concerning financials

◈ Benefit: clear target for development, vendor and
  production teams.

                      © 2008 Ops A La Carte           56
Reliability Goal

◈ Let’s say we expect a few failures in one year.
◈ Less than 2%
◈ Laboratory environment
◈ XYZ function
◈ Assuming constant failure rate

     R(t) = e^(−t/θ)
     ln(0.98) = −8760/θ   ⇒   θ ≈ 433,605 hours

◈ XYZ function for one year with 98% reliability in
  the lab (MTBF is 433,605 hrs).

                             © 2008 Ops A La Carte                              57
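As a quick check of the arithmetic on this slide, a minimal sketch of the same calculation; the 8,760 hours and the 0.98 goal come from the slide, and everything else assumes the constant-failure-rate model shown above:

```python
import math

# Constant failure rate: R(t) = exp(-t/theta), where theta is the MTBF.
# Solving R(8760 hrs) = 0.98 for theta gives the required MTBF.
def mtbf_for_goal(reliability: float, hours: float) -> float:
    return -hours / math.log(reliability)

print(round(mtbf_for_goal(0.98, 8760.0)))  # ~433,605 hours, matching the slide
```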
Other Points in Time

◈ Also consider the bathtub curve

◈ Infant mortality, out-of-box type failures
   – Shipping damage
   – Component defects, manufacturing defects

◈ Wear-out related failures
   – Bearings, connectors, solder joints, e-caps

                      © 2008 Ops A La Carte       58
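The bathtub regions above are often expressed with a Weibull hazard rate; a minimal sketch, where the characteristic life (eta = 1,000 hours) and sample times are arbitrary assumptions for illustration:

```python
# Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1).
# beta < 1: decreasing rate (infant mortality); beta = 1: constant rate
# (useful life); beta > 1: increasing rate (wear-out).
def weibull_hazard(t: float, beta: float, eta: float = 1000.0) -> float:
    return (beta / eta) * (t / eta) ** (beta - 1.0)

for beta, region in [(0.5, "infant mortality"), (1.0, "useful life"), (3.0, "wear-out")]:
    print(region, [round(weibull_hazard(t, beta), 6) for t in (100.0, 500.0, 2000.0)])
```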
Apportionment of Goals

◈ Let’s look at an example

◈ A computer with a one-year warranty, where the
  business model requires less than 5% failures
  within the first year.
   – A desktop business computer in an office environment
     with 95% reliability at one year.

                     © 2008 Ops A La Carte              59
Apportionment of Goals

◈ For simplicity, consider five major elements of the computer:
   – Motherboard
   – Hard Disk Drive
   – Power Supply
   – Monitor
   – Keyboard

◈ For starters, let’s give each sub-system the same goal

                      © 2008 Ops A La Carte         60
Apportionment of Goals

                            Computer
                            R = 0.95

  Motherboard      HDD         P/S         Monitor     Keyboard
   R = 0.99      R = 0.99    R = 0.99      R = 0.99    R = 0.99

Assuming failures within each sub-system are independent, the simple
multiplication of the reliabilities should result in meeting the system goal:

0.99 * 0.99 * 0.99 * 0.99 * 0.99 ≈ 0.95

Given no history or vendor data – this is just a starting point.

                             © 2008 Ops A La Carte                           61
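A minimal sketch of the equal-apportionment arithmetic above, using the five series subsystems and the 0.95 system goal from the slide:

```python
# Equal apportionment for n independent subsystems in series:
# each gets the n-th root of the system reliability goal.
n, system_goal = 5, 0.95
subsystem_goal = system_goal ** (1.0 / n)
print(round(subsystem_goal, 4))  # ~0.9898, i.e. roughly 0.99 each
print(round(0.99 ** n, 4))       # 0.951 -- five 0.99s meet the 0.95 goal
```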
Estimate Reliability

◈ The next step is to determine the sub-system reliability:
   – Historical data from similar products
   – Reliability estimates/test data from vendors
   – In-house reliability testing

◈ At first, estimates are crude; refine as needed to
  make good decisions.

                      © 2008 Ops A La Carte          62
Apportionment of Goals

                            Computer
                            R = 0.95

Goals:
  Motherboard      HDD         P/S         Monitor     Keyboard
   R = 0.99      R = 0.99    R = 0.99      R = 0.99    R = 0.99

Estimates:
  Motherboard      HDD         P/S         Monitor     Keyboard
   R = 0.96      R = 0.98    R = 0.999     R = 0.99    R = 0.999

First-pass estimates do not meet the system goal. Now what?

                            © 2008 Ops A La Carte                          63
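A minimal sketch checking the first-pass estimates above against the 0.95 goal; the values come from the slide (math.prod requires Python 3.8+):

```python
from math import prod

estimates = {"Motherboard": 0.96, "HDD": 0.98, "P/S": 0.999,
             "Monitor": 0.99, "Keyboard": 0.999}
system_estimate = prod(estimates.values())
print(round(system_estimate, 3))  # ~0.93 -- short of the 0.95 system goal
```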
Resolving the Gap

◈ CPU goal 99%, est. 96%
   – Largest gap, lowest estimate

◈ First, will the known issues bridge the difference?

◈ If not enough, then use FMEA and HALT to populate
  a Pareto of what to fix

◈ Third, validate improvements

Use the simple reliability model to determine if reliability
improvements will impact the system reliability; e.g., changing
the BIOS reliability from 99.9% to 99.99% will not significantly
alter the system reliability result.

Invest in improvements that will impact the system reliability.

                                © 2008 Ops A La Carte                                 64
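To illustrate the BIOS point in the note above, a small sensitivity check; treating the BIOS as one more series element alongside the other example estimates is my simplification, and the numbers are purely illustrative:

```python
from math import prod

others = [0.96, 0.98, 0.99, 0.999]  # remaining subsystem estimates (example)
for bios in (0.999, 0.9999):
    print(bios, round(prod(others) * bios, 4))
# 0.999  -> 0.9295 ; 0.9999 -> 0.9304 -- a tenfold BIOS improvement
# moves the system estimate by less than a tenth of a point.
```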
Resolving the Gap (continued)

◈ HDD goal 0.99, est. 0.98
   – Small gap, clear path to resolve

◈ HDD reliability and operating temperature are
  related: lowering the internal temperature will
  improve the HDD reliability.

When the relationship between the failure mode and either
design or environmental conditions is known, we do not need
FMEA or HALT – go straight to design improvements.

Use ALT to validate the model and/or design improvements.

                             © 2008 Ops A La Carte                              65
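Since this slide leans on the temperature-reliability relationship, here is a minimal sketch of the common Arrhenius thermal-life model; the 0.7 eV activation energy and the 45 °C → 35 °C temperatures are assumptions for illustration, not HDD data:

```python
import math

# Arrhenius acceleration factor: failure acceleration at the hotter
# temperature relative to the cooler one. Ea (eV) is an assumed value.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_cool_c: float, t_hot_c: float, ea_ev: float = 0.7) -> float:
    t_cool = t_cool_c + 273.15
    t_hot = t_hot_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_cool - 1.0 / t_hot))

# Dropping a drive's internal temperature from 45 C to 35 C:
print(round(arrhenius_af(35.0, 45.0), 2))  # ~2.3x life under these assumptions
```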
Resolving the Gap (continued)

◈ P/S goal 0.99, est. 0.999
   – Estimate is over the goal

◈ Further improvement is not cost effective, given the
  minimal impact to system reliability.

◈ It is possible to reduce reliability (select a less
  expensive model) and use the savings to improve the
  CPU/motherboard.

For any subsystem that exceeds the reliability goal, explore
potential cost savings by reducing the reliability performance.
This is only done when there are accurate reliability estimates
and significant cost savings.

                            © 2008 Ops A La Carte                                66
Progression of Estimates

Initial Engineering Guess or Estimate → Vendor Data → Test Data → Actual Field Data

                                © 2008 Ops A La Carte                              67
Microsoft Model

◈ Classic Model:
  Get feedback to the design and manufacturing team
  that permits visibility of the reliability gap. Permit
  comparison to goal.

◈ Microsoft Model:
  Not estimating or measuring the reliability during
  design is something I call the Microsoft model. Just
  ship it; the customers will tell you what needs improvement.

              Don’t try the Microsoft Model!
It works for them (on the software side) but probably won’t work for you
              (note that it did not work for them on the Xbox)

                             © 2008 Ops A La Carte                       68
RELIABILITY
  PROGRAM
    AND
INTEGRATION
    PLAN
     © 2008 Ops A La Carte   69
Planning Introduction

“The purpose of this task is to develop a reliability
 program which identifies, and ties together, all
 program management tasks required to accomplish
 program requirements.”

                      – MIL-STD-785, Task 101

                      © 2008 Ops A La Carte             70
Motivation for a Reliability Integration Plan

◈ Customer requirements
   – Meet terms of contract
   – Meet customer expectations

◈ Business opportunity
   – Reduce expenses
   – Improve brand perception

◈ Employee opportunity
   – Provide direction
   – Excite empowerment

                        © 2008 Ops A La Carte   71
Reliability Program and Integration Plan

◈ A Reliability Program and Integration Plan is crucial
  at the beginning of the product life cycle because in
  this plan, we define:
   – What are the overall goals of the product and of each
     assembly that makes up the product?
   – What has been the past performance of the product?
   – What is the size of the gap?
   – What reliability elements/tools will be used?
   – How will each tool be implemented and integrated to
     achieve the goals?
   – What is our schedule for meeting these goals?

                       © 2008 Ops A La Carte                72
Reliability Program and Integration Plan

◈ The overall goals of a Reliability Program Plan
   – The goals are typically in the form of MTBF or
     Availability but can be about any measurable activity.
     At a minimum, the goals that are generally measured are:
      • Out-of-box failure rate
      • Reliability within warranty period
      • Reliability throughout life of product
      • Preventive maintenance / End-of-life goal

                       © 2008 Ops A La Carte                 73
Reliability Program and Integration Plan

◈ What has been the past performance of the product?
   – For past performance, we can use data from:
      • Field analysis
      • HALT
      • Any other reliability studies
      • Predictions
      • If this is the first product, we can benchmark the
        product against competitors in the industry and
        use their data

                       © 2008 Ops A La Carte               74
Reliability Program and Integration Plan

◈ What is the size of the Gap?
   – The gap analysis is a key part of the plan because it:
      • sets the expectation on how much improvement
        is necessary from the previous generation
      • helps dictate the tools that will be needed to
        reach the new reliability goals
      • helps dictate the schedule / how long it will take
        to achieve these goals

                       © 2008 Ops A La Carte                75
Reliability Program and Integration Plan

◈ What is the size of the Gap?
  [Breakdown by Assembly]
   – To make this task more manageable, we must break
     down by Assembly:
      • What are the results for the current product by Assembly?
      • What are the goals for the new product by Assembly?
      • What is the Gap by Assembly?

                       © 2008 Ops A La Carte                76
Reliability Program and Integration Plan

◈ What is the size of the Gap? (continued)
  [Breakdown by Assembly]
   – Now that we understand the size of our Gap by
     Assembly, we must understand what is driving this Gap:
      • Was it a particular design issue on the previous product?
      • Were the returns largely DOAs?
   – Once we understand this, we are in a better position
     to choose the appropriate reliability tool to overcome
     this gap

                      © 2008 Ops A La Carte                 77
Reliability Program and Integration Plan

◈ State Constraints or Limiting Factors
   – Time constraints
   – Money or budget constraints
   – Resources have not been allocated
   – Engineering approaches related to reliability,
     including predetermined vendor selections

                        © 2008 Ops A La Carte        78
Reliability Program and Integration Plan

◈ Time Constraints
   – We don’t have enough time to properly execute the
     program. Perhaps we need to increase the sample
     size in our testing to accelerate the test results, or
     push some of the testing back onto the suppliers.

                      © 2008 Ops A La Carte              79
Reliability Program and Integration Plan

◈ Money or Budget Constraint
   – Here we face the opposite problem from a Time
     Constraint. Now we have a constraint on money, so
     we may need to stretch out the testing and get more
     test information with fewer samples. Or we may
     elect to spend more time in the design before
     jumping into prototype testing, using less expensive
     design analysis tools rather than the prototype tools.

                       © 2008 Ops A La Carte              80
Reliability Program and Integration Plan

◈ Resource Constraint
   – This may require that we go outside and look for
     help from consultants or contractors. There are
     always resources that can help, even if we don’t
     have them within the company.

                        © 2008 Ops A La Carte            81
Reliability Program and Integration Plan

◈ Pre-Determined Methods
   – Engineering approaches related to reliability,
     including predetermined vendor selections. This
     may require us to justify why we have apportioned
     the reliability to the assemblies the way we did.

                       © 2008 Ops A La Carte             82
Reliability Program and Integration Plan

◈ What reliability elements/tools will be used?
   – Based on the size of the gap AND what is driving this gap, we
     will choose which reliability tools to implement
   – If the gap is large, we will need to invest a lot of resources in
     the design tools prior to prototyping and testing the new
     product, such as:
      • Design of experiments
      • FMECAs
      • Tolerance analyses
   – If the gap is small, we may decide to invest more resources in
     the prototype tools, such as:
      • HALT
      • Reliability Demonstration / Life Tests
   – If the gap is largely a result of DOAs and production escapes,
     we may want to invest more effort into developing good
     manufacturing reliability tools such as HASS and HASA.

                            © 2008 Ops A La Carte                      83
Reliability Program and Integration Plan

◈ What reliability elements/tools will be used?
   – As with most programs, the gap will likely fall
     somewhere in between. So, we must develop a well-
     balanced program that has selected tools from each
     of the phases:
      • Design tools
      • Prototype tools
      • Manufacturing tools

                       © 2008 Ops A La Carte             84
Reliability Program and Integration Plan

◈ How will each tool be implemented and integrated
  to achieve the goals?
   – The implementation and integration of each tool is
     perhaps the most difficult to plan. Here we must
     estimate the effects each tool will have on the
     overall reliability to understand how we are closing
     the gap
   – For this, we must look at specific issues that
     occurred on previous products and understand how
     a specific tool will help mitigate each issue on the
     next generation
   – If we can quantify the effect an issue had, and we
     can quantify the reduction as a result, then we have
     evaluated how we are going to close the gap

                       © 2008 Ops A La Carte               85
Reliability Program and Integration Plan

◈ How will each tool be implemented and integrated to
  achieve the goals? AN EXAMPLE
   – Our current product is running at a 0.25% DOA rate
     per month, and our goal is to reduce this by 50%.
   – The DOAs tend to focus around solder issues.
   – For this next generation, we decide to choose HASS
     as our tool to solve this.
   – Through research, we determine that HASS is 90%
     effective in finding and preventing solder defects from
     escaping into the field.
   – We write in our plan that we expect to meet and
     exceed our 50% reduction (see the sketch below).

                      © 2008 Ops A La Carte               86
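A quick sanity check of the example’s arithmetic: the 0.25% rate, the 50% target, and the 90% effectiveness come from the slide, while the assumption that essentially all DOAs are solder-related is mine and is labeled below:

```python
doa_rate = 0.0025          # 0.25% DOAs per month (from the example)
solder_fraction = 1.0      # ASSUMPTION: essentially all DOAs are solder-related
hass_effectiveness = 0.90  # HASS catches ~90% of solder defects (from the example)

new_doa = doa_rate * (1.0 - solder_fraction * hass_effectiveness)
print(f"{new_doa:.4%}")                 # 0.0250% per month
print(f"{1 - new_doa / doa_rate:.0%}")  # 90% reduction, well above the 50% target
```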
Reliability Program and Integration Plan

◈ How will each tool be implemented and integrated to
  achieve the goals? AN EXAMPLE (continued)
   – But we are not done there. What did we forget?

                       © 2008 Ops A La Carte            87
Reliability Program and Integration Plan

◈ How will each tool be implemented and integrated
  to achieve the goals? AN EXAMPLE (continued)
   – How will we implement and integrate?
   – To say that we will use HASS and that it is possible
     is one thing, but how will we do it? In our Reliability
     Program Plan, we need to:
      • determine at what level HASS will be performed
        (assembly or system)
      • outline functional and environmental equipment needed
      • determine production needs and throughput
      • understand manpower needs
   – Are we done there? Not quite...

                        © 2008 Ops A La Carte                88
Reliability Program and Integration Plan

◈ How will each tool be implemented and integrated
  to achieve the goals? AN EXAMPLE (continued)
   – Next we must explain the integration.
   – What tools will feed into HASS in order to make it
     successful? And how will they be used?
      • Predictions – explain the first-year multiplier
      • FMECA – understand technology-limiting devices
      • HALT – develop margins
   – What tools will HASS feed into?
      • Field Failure Tracking System – monitor DOAs
      • Repair Depot – how to reduce NTFs

                       © 2008 Ops A La Carte              89
Reliability Program and Integration Plan

◈ What is our schedule for meeting these goals?
   – The last piece of our Reliability Program Plan is the
     schedule.
   – With an infinite amount of time (and money) we can
     achieve any reliability. But we do not have that luxury!
   – We must schedule our reliability activities and assure
     that they are aligned with the schedule for the overall
     program.

                       © 2008 Ops A La Carte                 90
Reliability Program and Integration Plan

◈ What is our schedule for meeting these goals?
   – First, we determine the order of occurrence of the
     tools. If we did a good job describing the tools and
     the integration of each, then this should be
     straightforward.
   – Next, we estimate a length of time for each tool.
   – Then, we put together an integration timeline along
     with dependencies.
   – Finally, we must compare with the master project
     schedule and make adjustments as necessary.

                       © 2008 Ops A La Carte                 91
Reliability Schedule as Part of the Plan

ID | Task Name                                      | Duration | Start        | Finish       | % Complete | Deliverable | Predecessors
 1 | Reliability                                    | 504 days | Mon 1/6/03   | Tue 12/7/04  | 70%        |             |
 2 |   Reliability During Concept                   | 45 days  | Mon 1/6/03   | Fri 3/7/03   | 100%       |             |
 3 |     Reliability Benchmarking                   | 10 days  | Mon 1/6/03   | Fri 1/17/03  | 100%       | Rel Plan    |
 4 |     Establishing Reliability Targets           | 5 days   | Mon 3/3/03   | Fri 3/7/03   | 100%       | Rel Plan    |
 5 |   Predictive Modeling                          | 70 days  | Mon 6/9/03   | Fri 9/12/03  | 100%       |             |
 6 |     Power Supply                               | 60 days  | Mon 6/9/03   | Fri 8/29/03  | 100%       |             |
 7 |       Initial Draft                            | 15 days  | Mon 6/9/03   | Fri 6/27/03  | 100%       | Report      |
 8 |       Provide Stress Values for Each Component | 25 days  | Mon 7/28/03  | Fri 8/29/03  | 100%       | Study       | 7
 9 |     Electronics Prediction                     | 30 days  | Mon 8/4/03   | Fri 9/12/03  | 100%       |             |
10 |       LCD/Touch Screen                         | 30 days  | Mon 8/4/03   | Fri 9/12/03  | 100%       | Report      |
11 |       Control System Software/Electronics      | 25 days  | Mon 8/4/03   | Fri 9/5/03   | 100%       | Report      |
12 |   Total System Prediction                      | 163 days | Fri 8/1/03   | Mon 3/15/04  | 100%       | Report      |
13 |     Perform prediction                         | 41 days  | Fri 8/1/03   | Fri 9/26/03  | 100%       |             |
14 |     Signatures on report/into DHF              | 123 days | Fri 9/26/03  | Mon 3/15/04  | 100%       |             |
15 |   HALT Testing                                 | 202 days | Mon 6/9/03   | Mon 3/15/04  | 100%       |             |
16 |     Power Supply HALT                          | 202 days | Mon 6/9/03   | Mon 3/15/04  | 100%       |             |
17 |       Power Supply HALT Protocol               | 1 day    | Mon 6/9/03   | Mon 6/9/03   | 100%       | Protocol    |
18 |       Power Supply HALT                        | 194 days | Thu 6/19/03  | Mon 3/15/04  | 100%       |             |
19 |       Power Supply HALT/Report                 | 194 days | Thu 6/19/03  | Mon 3/15/04  | 100%       | Report      |
20 |     System Level HALT                          | 110 days | Wed 10/15/03 | Mon 3/15/04  | 100%       |             |
21 |       System HALT Protocol                     | 44 days  | Wed 10/15/03 | Fri 12/12/03 | 100%       | Protocol    |
22 |       System HALT                              | 5 days   | Mon 12/15/03 | Fri 12/19/03 | 100%       |             | 21
23 |       System HALT Report                       | 58 days  | Thu 12/25/03 | Mon 3/15/04  | 100%       | Report      | 22
24 |   Accelerated Life Testing                     | 95 days  | Mon 6/9/03   | Fri 10/17/03 | 0%         |             |
25 |     Power Supply                               | 95 days  | Mon 6/9/03   | Fri 10/17/03 | 0%         |             |
26 |       Life Test Protocol                       | 95 days  | Mon 6/9/03   | Fri 10/17/03 | 0%         | Protocol    |
27 |       Final Report                             | 14 days  | Mon 6/9/03   | Thu 6/26/03  | 0%         | Report      |
28 |   Manufacturing Reliability                    | 318 days | Tue 9/23/03  | Tue 12/7/04  | 30%        |             |
29 |     Review ESS Process/Make Recommendations    | 51 days  | Tue 9/23/03  | Mon 12/1/03  | 100%       | Study/Memo  |
30 |     HASA Feasibility Study                     | 120 days | Tue 12/2/03  | Sat 5/15/04  | 50%        | Study/Memo  | 29
31 |     HASS Development Protocol                  | 5 days   | Wed 3/3/04   | Tue 3/9/04   | 0%         | Protocol    |
32 |     Setup HASS Process                         | 60 days  | Mon 9/6/04   | Fri 11/26/04 | 0%         | Study/Memo  |
33 |     Field Monitoring Protocol                  | 5 days   | Sat 5/1/04   | Thu 5/6/04   | 0%         | Protocol    |
34 |     Field Monitoring/Reporting                 | 130 days | Wed 6/9/04   | Tue 12/7/04  | 0%         | Report      |

                                © 2008 Ops A La Carte                                92
 
"ML Services - How do you begin and when do you start scaling?" - Madhura Dud...
"ML Services - How do you begin and when do you start scaling?" - Madhura Dud..."ML Services - How do you begin and when do you start scaling?" - Madhura Dud...
"ML Services - How do you begin and when do you start scaling?" - Madhura Dud...
 
Apple quality policy
Apple quality policyApple quality policy
Apple quality policy
 
Dev ops.enterprise.2014 (1)
Dev ops.enterprise.2014 (1)Dev ops.enterprise.2014 (1)
Dev ops.enterprise.2014 (1)
 
The Journey to Continuous Testing
The Journey to Continuous TestingThe Journey to Continuous Testing
The Journey to Continuous Testing
 
Social sourcing ppt
Social sourcing pptSocial sourcing ppt
Social sourcing ppt
 

Plus de Accendo Reliability

Should RCM be applied to all assets.pdf
Should RCM be applied to all assets.pdfShould RCM be applied to all assets.pdf
Should RCM be applied to all assets.pdfAccendo Reliability
 
T or F Must have failure data.pdf
T or F Must have failure data.pdfT or F Must have failure data.pdf
T or F Must have failure data.pdfAccendo Reliability
 
Should RCM Templates be used.pdf
Should RCM Templates be used.pdfShould RCM Templates be used.pdf
Should RCM Templates be used.pdfAccendo Reliability
 
12-RCM NOT a Maintenance Program.pdf
12-RCM NOT a Maintenance Program.pdf12-RCM NOT a Maintenance Program.pdf
12-RCM NOT a Maintenance Program.pdfAccendo Reliability
 
09-Myth RCM only product is maintenance.pdf
09-Myth RCM only product is maintenance.pdf09-Myth RCM only product is maintenance.pdf
09-Myth RCM only product is maintenance.pdfAccendo Reliability
 
10-RCM has serious weaknesses industrial environment.pdf
10-RCM has serious weaknesses industrial environment.pdf10-RCM has serious weaknesses industrial environment.pdf
10-RCM has serious weaknesses industrial environment.pdfAccendo Reliability
 
08-Master the basics carousel.pdf
08-Master the basics carousel.pdf08-Master the basics carousel.pdf
08-Master the basics carousel.pdfAccendo Reliability
 
07-Manufacturer Recommended Maintenance.pdf
07-Manufacturer Recommended Maintenance.pdf07-Manufacturer Recommended Maintenance.pdf
07-Manufacturer Recommended Maintenance.pdfAccendo Reliability
 
06-Is a Criticality Analysis Required.pdf
06-Is a Criticality Analysis Required.pdf06-Is a Criticality Analysis Required.pdf
06-Is a Criticality Analysis Required.pdfAccendo Reliability
 
05-Failure Modes Right Detail.pdf
05-Failure Modes Right Detail.pdf05-Failure Modes Right Detail.pdf
05-Failure Modes Right Detail.pdfAccendo Reliability
 
04-Equipment Experts Couldn't believe response.pdf
04-Equipment Experts Couldn't believe response.pdf04-Equipment Experts Couldn't believe response.pdf
04-Equipment Experts Couldn't believe response.pdfAccendo Reliability
 
Reliability Engineering Management course flyer
Reliability Engineering Management course flyerReliability Engineering Management course flyer
Reliability Engineering Management course flyerAccendo Reliability
 
How to Create an Accelerated Life Test
How to Create an Accelerated Life TestHow to Create an Accelerated Life Test
How to Create an Accelerated Life TestAccendo Reliability
 

Plus de Accendo Reliability (20)

Should RCM be applied to all assets.pdf
Should RCM be applied to all assets.pdfShould RCM be applied to all assets.pdf
Should RCM be applied to all assets.pdf
 
T or F Must have failure data.pdf
T or F Must have failure data.pdfT or F Must have failure data.pdf
T or F Must have failure data.pdf
 
Should RCM Templates be used.pdf
Should RCM Templates be used.pdfShould RCM Templates be used.pdf
Should RCM Templates be used.pdf
 
12-RCM NOT a Maintenance Program.pdf
12-RCM NOT a Maintenance Program.pdf12-RCM NOT a Maintenance Program.pdf
12-RCM NOT a Maintenance Program.pdf
 
13-RCM Reduces Maintenance.pdf
13-RCM Reduces Maintenance.pdf13-RCM Reduces Maintenance.pdf
13-RCM Reduces Maintenance.pdf
 
11-RCM is like a diet.pdf
11-RCM is like a diet.pdf11-RCM is like a diet.pdf
11-RCM is like a diet.pdf
 
09-Myth RCM only product is maintenance.pdf
09-Myth RCM only product is maintenance.pdf09-Myth RCM only product is maintenance.pdf
09-Myth RCM only product is maintenance.pdf
 
10-RCM has serious weaknesses industrial environment.pdf
10-RCM has serious weaknesses industrial environment.pdf10-RCM has serious weaknesses industrial environment.pdf
10-RCM has serious weaknesses industrial environment.pdf
 
08-Master the basics carousel.pdf
08-Master the basics carousel.pdf08-Master the basics carousel.pdf
08-Master the basics carousel.pdf
 
07-Manufacturer Recommended Maintenance.pdf
07-Manufacturer Recommended Maintenance.pdf07-Manufacturer Recommended Maintenance.pdf
07-Manufacturer Recommended Maintenance.pdf
 
06-Is a Criticality Analysis Required.pdf
06-Is a Criticality Analysis Required.pdf06-Is a Criticality Analysis Required.pdf
06-Is a Criticality Analysis Required.pdf
 
05-Failure Modes Right Detail.pdf
05-Failure Modes Right Detail.pdf05-Failure Modes Right Detail.pdf
05-Failure Modes Right Detail.pdf
 
03-3 Ways to Do RCM.pdf
03-3 Ways to Do RCM.pdf03-3 Ways to Do RCM.pdf
03-3 Ways to Do RCM.pdf
 
04-Equipment Experts Couldn't believe response.pdf
04-Equipment Experts Couldn't believe response.pdf04-Equipment Experts Couldn't believe response.pdf
04-Equipment Experts Couldn't believe response.pdf
 
02-5 RCM Myths Carousel.pdf
02-5 RCM Myths Carousel.pdf02-5 RCM Myths Carousel.pdf
02-5 RCM Myths Carousel.pdf
 
01-5 CBM Facts.pdf
01-5 CBM Facts.pdf01-5 CBM Facts.pdf
01-5 CBM Facts.pdf
 
Lean Manufacturing
Lean ManufacturingLean Manufacturing
Lean Manufacturing
 
Reliability Engineering Management course flyer
Reliability Engineering Management course flyerReliability Engineering Management course flyer
Reliability Engineering Management course flyer
 
How to Create an Accelerated Life Test
How to Create an Accelerated Life TestHow to Create an Accelerated Life Test
How to Create an Accelerated Life Test
 
Reliability Programs
Reliability ProgramsReliability Programs
Reliability Programs
 

Dernier

Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
Spring24-Release Overview - Wellingtion User Group-1.pdf
Spring24-Release Overview - Wellingtion User Group-1.pdfSpring24-Release Overview - Wellingtion User Group-1.pdf
Spring24-Release Overview - Wellingtion User Group-1.pdfAnna Loughnan Colquhoun
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopBachir Benyammi
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintMahmoud Rabie
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UbiTrack UK
 
RAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AIRAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AIUdaiappa Ramachandran
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Will Schroeder
 
Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Adtran
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Commit University
 
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdfJamie (Taka) Wang
 
PicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer ServicePicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer ServiceRenan Moreira de Oliveira
 
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfDaniel Santiago Silva Capera
 
Babel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptxBabel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptxYounusS2
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding TeamAdam Moalla
 
Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.YounusS2
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024SkyPlanner
 
Cloud Revolution: Exploring the New Wave of Serverless Spatial Data
Cloud Revolution: Exploring the New Wave of Serverless Spatial DataCloud Revolution: Exploring the New Wave of Serverless Spatial Data
Cloud Revolution: Exploring the New Wave of Serverless Spatial DataSafe Software
 
Things you didn't know you can use in your Salesforce
Things you didn't know you can use in your SalesforceThings you didn't know you can use in your Salesforce
Things you didn't know you can use in your SalesforceMartin Humpolec
 
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...Aggregage
 

Dernier (20)

Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
 
Spring24-Release Overview - Wellingtion User Group-1.pdf
Spring24-Release Overview - Wellingtion User Group-1.pdfSpring24-Release Overview - Wellingtion User Group-1.pdf
Spring24-Release Overview - Wellingtion User Group-1.pdf
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 Workshop
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership Blueprint
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
 
RAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AIRAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AI
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
 
Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)
 
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
 
PicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer ServicePicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer Service
 
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
 
Babel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptxBabel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptx
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team
 
Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024
 
Cloud Revolution: Exploring the New Wave of Serverless Spatial Data
Cloud Revolution: Exploring the New Wave of Serverless Spatial DataCloud Revolution: Exploring the New Wave of Serverless Spatial Data
Cloud Revolution: Exploring the New Wave of Serverless Spatial Data
 
Things you didn't know you can use in your Salesforce
Things you didn't know you can use in your SalesforceThings you didn't know you can use in your Salesforce
Things you didn't know you can use in your Salesforce
 
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
 

Design for Reliability (DfR) Seminar

  • 1. DESIGN FOR RELIABILITY (DFR) SEMINAR Mike Silverman // (408) 654-0499 // mikes@opsalacarte.com Ops A La Carte LLC // www.opsalacarte.com 1 © 2009 Ops A La Carte
  • 2. The following presentation materials are copyright protected property of Ops A La Carte LLC. These materials may not be distributed outside of your company. © 2011 Ops A La Carte 2
• 3. Presenter’s Biographical Sketch – Mike Silverman ◈ Mike Silverman is founder and managing partner at Ops A La Carte, a Professional Consulting Company that has an intense focus on helping customers with end-to-end reliability. Through Ops A La Carte, Mike has had extensive experience as a consultant to high-tech companies, and has consulted for over 300 companies including Cisco, Ciena, Siemens, Abbott Labs, and Applied Materials. He has consulted in a variety of different industries including power electronics, telecommunications, networking, medical, semiconductor, semiconductor equipment, consumer electronics, and defense. ◈ Mike has 20 years of reliability and quality experience. He is also an expert in accelerated reliability techniques, including HALT & HASS (and recently purchased a HALT Lab), testing over 500 products for 100 companies in 40 different industries. ◈ Mike just completed his first book on Reliability called “50 Ways to Improve Your Product Reliability”. This course is largely based on the book material. ◈ Mike has authored and published 8 papers on reliability techniques and has presented these around the world including China, Germany, Canada, Taiwan, Singapore, and Korea. Ops has also developed and currently teaches 31 courses on reliability techniques. ◈ Mike has a BS degree in Electrical and Computer Engineering from the University of Colorado at Boulder, and is both a Certified Reliability Engineer and a course instructor through the American Society for Quality (ASQ), IEEE, Effective Training Associates, and Hobbs Engineering. Mike is a member of ASQ, IEEE, SME, ASME, PATCA, and the IEEE Consulting Society and is the current chapter president of the IEEE Reliability Society for Silicon Valley. 3 © 2009 Ops A La Carte
• 4. Seminar Overview Monday, May 9, 2011 - SEMINAR DAY 1 -  8:30-9:00am Introduction  9:00-9:30am DFR Overview  9:30-10:30am Planning for Reliability – Assessments, Goals, Plans (Ch 5-12)  10:30-11:00am Allocation/Goal Setting Workshop  11:00-12:00pm Modeling and Prediction (Ch 21)  12:00-1:00pm Lunch  1:00-1:30pm Prediction Workshop  1:30-2:00pm Thermal/Derating Analysis (Ch 22/23)  2:00-3:00pm Failure Modes and Effects Analysis (FMEA) (Ch 16)  3:00-3:30pm Design of Experiments (Ch 25)  3:30-4:00pm Human Factors Engineering  4:00-4:30pm Wrap-Up Day 1 / Questions © 2009 Ops A La Carte 4
• 5. Seminar Overview Tuesday, May 10, 2011 - SEMINAR DAY 2 -  8:30-10:00am Highly Accelerated Life Test (HALT) (Ch 34)  10:00-11:00am Accelerated Life Test (ALT) (Ch 36)  11:00-12:00pm When to Use HALT vs. ALT (Ch 37)  12:00-1:00pm Lunch  1:00-1:30pm Reliability Demonstration Test (RDT) (Ch 35)  1:30-2:00pm When to Use HALT vs. RDT – the HALT Calculator (Ch 38)  2:00-2:30pm Highly Accelerated Stress Screen (HASS) (Ch 43)  2:30-3:00pm On-Going Reliability Test (ORT) (Ch 44)  3:00-3:30pm Root Cause Analysis (RCA) (Ch 40)  3:30-4:00pm Field Data Analysis (Ch 48)  4:00-4:30pm Conclusion/Wrap-Up © 2009 Ops A La Carte 5
  • 6. COMPANY OVERVIEW Confidence in Reliability
• 7. Our Company Ops A La Carte is a privately-held professional reliability engineering firm founded in 2001 and headquartered in Santa Clara, California, with offices in China, India and Singapore. It was named one of the top 10 fastest growing, privately-held companies in the Silicon Valley in 2006 and 2009 by the San Jose Business Journal. It is a solid company that has been profitable every quarter since its inception due to its outstanding reputation, customer value and scalable business model.
  • 8. Our Team is made up of a group of highly accomplished Reliability Consultants. Each of our consultants has 15+ years of Reliability Engineering and Reliability Management experience in various industries. We tap a large network of labs, test facilities, and talented engineering professionals to quickly assemble resources to supplement your organization.
• 9. • Ops Solutions – Ops provides end-to-end solutions that target the corporate product reliability objectives • Ops Individual “A La Carte” Consulting – Ops identifies and solves the missing key ingredients needed for a fully integrated reliable product • Ops Training – Ops’ highly specialized leaders and experts in the industry train others in both standard and customized training seminars • Ops Testing – Ops’ state-of-the-art lab provides comprehensive testing services
• 10. Ops assists clients in developing and executing any and all elements of Reliability through the Product Life Cycle. Ops has the unique ability to assess a product and understand the key reliability elements necessary to measure/improve product performance and customer satisfaction. Ops pioneered “Reliability Integration” – using multiple tools in conjunction throughout each client’s organization to greatly increase the power and value of any Reliability Program.
• 11. Testing Services • Our own lab facility is located in Northern California in the heart of Silicon Valley. We provide HALT/HASS services on a worldwide basis, using partner labs for tests outside California. • Second oldest HALT facility in the world, established in 1995 (originally owned by QualMark) • HALT equipment has all the latest technology – only lab in the region • Highly-experienced staff with over 100 years of combined experience in HALT and HASS • Tested over 500 products in over 30 different industries • Our HALT/HASS services are fully integrated with our other consulting services. Ops A La Carte ©
  • 12. Ops’ New Reliability Book How Reliable Is Your Product? 50 Ways to Improve Product Reliability A new book by Ops A La Carte LLC® Founder/Managing Partner Mike Silverman The book focuses on Mike’s experiences working with over 500 companies in his 25 year career as an engineer, manager, and consultant. It is a practical guide to reliability written for everyone in your organization. In the book we give tips and case studies rather than a textbook full of formulas. Available January 2011 in hardback for $44.95 or ebook for $19.95 @amazon.com or http://www.happyabout.com/productreliability.php For more info, go to www.opsalacarte.com © 2009 Ops A La Carte 12
  • 13. FREE Webinars for 2011 • Feb 17 – Medical Device Seminar/Webinar, San Jose • Mar 2 – Warranty Webinar (coincides with Warranty Chain Management Workshop on March 17) • Mar 3 – Implantable Medical Seminar, Santa Clara • Mar 22 – Book signing for “How Reliable Is Your Product”, Santa Clara • Apr 6 – Solar Reliability Challenges • May 3 – DfSS vs. DfR Webinar (tied with WQC) • Jun 7 – How to Use HALT with Prognostics (tied with PHM) Details for all are on our site at www.opsalacarte.com © 2009 Ops A La Carte 13
  • 14. Upcoming Events • May 25 – SEMA (Solar) Event, San Jose. We will be giving a reliability presentation • May 25 – ASQ Medical, Sunnyvale. We will be giving a reliability presentation • June 6 – MD&M East, New York. We will be giving a one day seminar on medical reliability testing • June 7-9 – ARS, San Diego. We will be exhibiting and giving two presentations on reliability. • June 20-23 – PHM Conference, Denver. We will be exhibiting and giving a presentation on reliability. Details for all are on our site at www.opsalacarte.com © 2009 Ops A La Carte 14
• 15. Contact Information Ops A La Carte, LLC Mike Silverman Managing Partner (408) 654-0499 Skype: mikesilverman Email: mikes@opsalacarte.com URL: http://www.opsalacarte.com Blog: http://www.opsalacarte.com/reliability-blog Linked-In: http://www.linkedin.com/pub/mike-silverman/0/3a7/24b Twitter: http://twitter.com/opsalacarte Facebook: http://www.facebook.com/pages/Santa-Clara-CA/Ops-A-La-Carte-LLC/155189552669 Bio: http://www.mike-silverman.com Ops Public Calendar: http://www.google.com/calendar/embed?src=opsalacarte%40gmail.com&ctz=America/Los_Angeles 15 © 2009 Ops A La Carte
  • 16. What Is DFR and What is NOT DFR A High Level Overview 5/7/2011 Ops A La Carte © 1
• 17. DfR Is Not • Making a list of all possible reliability activities and then trying to cover as many as possible within the timeframe of the product development process. • Using only certain selected tools from the “DfR toolbox”. • Assuming that product reliability is the sole responsibility of a reliability engineer (the reliability engineer is the guide and mentor but not the owner; the designer should be the owner). • Completing the analytical work but delaying test and verification until the system testing stage. 5/7/2011 Ops A La Carte © 2
  • 18. DfR Is Not (cont’d) • Getting the product into test as fast as possible to test reliability  into the product (a.k.a. Test‐Analyze‐and‐Fix) • Only working on the in‐house design items and not worrying  about vendor items • Working in silos between EE, Mech E, Software, etc. (even if they  apply some or most of the DfR tools) – all competencies must  work together to reach common goals. • Not looking at interactions between groups and not taking a  system level viewpoint. 5/7/2011 Ops A La Carte © 3
• 19. DfR Is • Setting goals at the beginning of the program and then developing a plan to meet the goals. • Having the reliability goals driven by the design team with the reliability team acting as mentors. Having everyone working to a common set of goals. The reliability engineer doesn’t own the goal but is a key influencer. • Providing metrics so that you have checkpoints on where you are against your goals. • Writing a Reliability Plan (not only a test plan) to drive your program. 5/7/2011 Ops A La Carte © 4
• 20. DfR Is (cont’d) • DfR is the process of building reliability into the design. • DfR begins from the very early stages of the design (concept phase) and should be integrated into every stage of this process. • As a result of this process, reliability must be designed into products and processes using the best available science-based methods. • Before moving from one phase of the product life cycle to the next, there must be a gate to measure reliability and assure you are on target. 5/7/2011 Ops A La Carte © 5
• 21. DfR Flow (diagram). A reliability assessment is a detailed evaluation of an organization’s approach and the processes involved in creating reliable products. Inputs: gap analysis, benchmarking, statistical data analysis, interviews. The assessment captures the current state (field failures, complaints, the cost of unreliability, unknown reliability) and leads to an actionable reliability program plan: initiate a reliability program, determine next best steps, select the right tools, reduce customer complaints, and improve reliability, driving profits, market share, and satisfaction toward the goal. 5/7/2011 Ops A La Carte © 6
  • 22. From the Toolbox Approach to the  Structured Approach to DfR A. Mettas, IJPE, 2010 5/7/2011 Ops A La Carte © 7
• 23. DfR Key Activities Flow: 1. Identify → 2. Design → 3. Analyze → 4. Verify → 5. Validate → 6. Monitor and Control. 5/7/2011 Ops A La Carte © 8
• 24. 1. Identify • Goal: quantitatively define the reliability requirements for a product as well as the end-user environmental/usage conditions. • Customer expectations and how to translate them into engineering metrics (e.g., survive a 15-year life) • Develop specific environmental test requirements (e.g., converting the requirement of B5 life at 280K miles for a heavy-duty truck into a test flow and test sample size) • Identify technology limitations (e.g., battery, optics, specific components, etc.) and the relevant validation strategies 5/7/2011 Ops A La Carte © 9
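The B5-at-280K-miles example above implies a concrete sample-size calculation. Below is a minimal Python sketch of the classic zero-failure “success-run” relation, n ≥ ln(1−C)/ln(R); the 90% confidence target and the plan of running every unit to exactly one equivalent life with zero failures are illustrative assumptions, not from the slide.

    import math

    def success_run_sample_size(reliability, confidence):
        # Smallest n such that 1 - reliability**n >= confidence,
        # i.e. n >= ln(1 - confidence) / ln(reliability).
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    # B5 life means 95% reliability at the stated life (here 280K miles).
    # Assumption: demonstrate R = 0.95 with 90% confidence, each unit run
    # to one equivalent life (a 280K-mile damage equivalent) with no failures.
    print(success_run_sample_size(0.95, 0.90))  # -> 45 units

Running each unit past one equivalent life reduces the required sample size via the parametric binomial (Weibull) extension of the same relation.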
  • 25. 1. Identify:  Activities/Tools • Goal Setting • Metrics • Gap Analysis • Benchmarking • Reliability Program Plan • QFD (Quality Function Deployment) 5/7/2011 Ops A La Carte © 10
  • 26. 1. Identify:  Goals • Reliability Goals & Metrics tie together all stages of the  product life cycle. Well crafted goals provide the target for  the business to achieve, they set the direction. • Reliability Goals can be derived from: – Customer‐specified or implied requirements – Internally‐specified or self‐imposed requirements (usually based  on trying to be better than previous products) – Benchmarking against competition – Industry standards – Engineering common sense 5/7/2011 Ops A La Carte © 11
  • 27. 1. Identify: Metrics • Metrics provide: – the milestones, – the “are we there, yet”, and – the feedback that all elements of the organization require  to stay on track toward the goals. 5/7/2011 Ops A La Carte © 12
• 28. 1. Identify: Reliability Program Plan • A Reliability Program and Integration Plan is crucial at the beginning of the product life cycle because in this plan, we define: – What are the overall goals of the product and of each assembly that makes up the product? – What has been the past performance of the product? – What is the size of the gap? – What are the constraints? – What reliability elements/tools will be used? – How will each tool be implemented and integrated to achieve the goals? – What is our schedule for meeting these goals? 5/7/2011 Ops A La Carte © 13
• 29. 2. Design • This is the stage where specific design activities begin, such as circuit layout, mechanical drawing, component/supplier selection, etc. Therefore, a better design picture begins emerging. • In this stage, a clearer picture about what the product is supposed to do starts developing. – More specific reliability requirements are defined – The more the design/application changes, the more reliability risk is introduced. • A program risk can be assessed 5/7/2011 Ops A La Carte © 14
  • 30. 2. Design:  Activities/Tools • Reliability Prediction (compare design alternatives,  identify preferred components and suppliers) • Cost Trade‐offs • Tolerance evaluation • Better understanding of customer specifications • FMEA • FTA 5/7/2011 Ops A La Carte © 15
  • 31. 3. Analyze • Estimating the product's reliability, often with a  rough first cut estimate, early in the design phase. It  is important at this phase to address all the  potential sources of product failure.   • Close cooperation between reliability engineer and  the design team can be very beneficial at this phase. 5/7/2011 Ops A La Carte © 16
• 32. 3. Analyze: Activities/Tools • Finite Element Analysis, Physics of Failure • Reliability Prediction (reliability block diagrams) • Engineering judgment, expert opinions, existing data • Warranty Analysis of the existing products • DRBFM or Change Point Analysis (if needed) • Stress-Strength Analysis (chart: overlapping stress and strength distributions) • FMEA (updated) 5/7/2011 Ops A La Carte © 17
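To make the stress-strength item above concrete, here is a small self-contained Python sketch, assuming both stress and strength are normally distributed (the numbers are made up for illustration); failure is the event that the strength-minus-stress margin falls below zero.

    from statistics import NormalDist

    # Illustrative stress-strength interference under normal assumptions.
    stress = NormalDist(mu=300.0, sigma=30.0)    # load seen in service (e.g. MPa)
    strength = NormalDist(mu=400.0, sigma=40.0)  # load the part can withstand

    # Margin D = strength - stress is normal with mean 100 and
    # sigma = sqrt(30**2 + 40**2) = 50; failure is D < 0.
    margin = NormalDist(mu=strength.mean - stress.mean,
                        sigma=(stress.stdev ** 2 + strength.stdev ** 2) ** 0.5)
    print(f"P(stress > strength) = {margin.cdf(0.0):.4f}")  # ~0.0228

Derating widens this margin by moving the stress distribution away from the strength distribution.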
  • 33. 4. Verify • Prototype hardware build. Quantify all of the previous  work based on test results. By this stage, prototypes  should be ready for testing and more detailed  analysis.  • Iterative process where different types of tests are  performed, product weaknesses are uncovered, the  results are analyzed, design changes are made. 5/7/2011 Ops A La Carte © 18
  • 34. 4. Verify: Activities/Tools • HALT • ALT • Test to failure (Life data analysis) • Degradation analysis • Reliability Growth Process (if enough data is available) • DRBTR (Design Review Based on Test Results)  5/7/2011 Ops A La Carte © 19
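For the ‘test to failure (life data analysis)’ item, a common first cut is a two-parameter Weibull fit by median-rank regression. A minimal self-contained Python sketch (the failure times are hypothetical):

    import math

    def weibull_mrr(failure_times):
        # Median-rank regression: regress ln(-ln(1 - F_i)) on ln(t_i),
        # with Bernard's approximation F_i = (i - 0.3) / (n + 0.4).
        t = sorted(failure_times)
        n = len(t)
        xs = [math.log(ti) for ti in t]
        ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
              for i in range(1, n + 1)]
        xbar, ybar = sum(xs) / n, sum(ys) / n
        beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
                / sum((x - xbar) ** 2 for x in xs))
        eta = math.exp(xbar - ybar / beta)   # scale, from the intercept
        return beta, eta

    beta, eta = weibull_mrr([410, 730, 990, 1300, 1750])  # hours to failure
    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")

The shape parameter steers the corrective action: beta below 1 suggests infant mortality, near 1 random failures, and above 1 wear-out.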
• 35. 5. Validate (assure production readiness) • Validation usually involves functional and environmental testing on a system level with the purpose of becoming production-ready. • Making sure that the product is ready for high volume production. • Design modifications might be necessary to improve robustness. 5/7/2011 Ops A La Carte © 20
• 36. 5. Validate: Activities/Tools • Design Validation (Including Accelerated Life Testing and Reliability Demonstration) • Process Validation Note: the program schedule often leaves no time for test to failure at this stage; most of it should be done in the previous stages. The validation phase is often done via ‘test to success’. 5/7/2011 Ops A La Carte © 21
  • 37. 6. Control • Assuring that the process remains unchanged and  the variations remain within the tolerances. 5/7/2011 Ops A La Carte © 22
  • 38. 6. Control:  Activities/Tools • Control Charts and Process Capability Studies (Cpk,  Ppk, etc.) • Human Reliability • Continuous Compliance • Field return analysis (warranty) and forecasting • ORT (ongoing reliability testing)  • Audits • Lessons Learned for the next generation of  products (important to close the cycle on DfR) 5/7/2011 Ops A La Carte © 23
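As an illustration of the process-capability item above, the sketch below computes a capability index from sample data (the readings and limits are made up). Strictly, using the overall sample sigma as done here gives Ppk; Cpk uses the within-subgroup sigma.

    from statistics import mean, stdev

    def capability(samples, lsl, usl):
        # min(USL - mean, mean - LSL) / (3 * sigma)
        mu, sigma = mean(samples), stdev(samples)
        return min(usl - mu, mu - lsl) / (3.0 * sigma)

    # Hypothetical pull-strength readings against spec limits 9.0..11.0:
    readings = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]
    print(f"Ppk = {capability(readings, lsl=9.0, usl=11.0):.2f}")  # ~1.67

A value of 1.33 or better is a common acceptance threshold, though the target is program-specific.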
• 39. DfR Key Activities (diagram; tools mapped to each stage): 1. Identify: QFD, requirements definitions, benchmarking, product usage analysis, understanding of customer requirements and specifications. 2. Design: DFMEA, cost trade-off analysis, probabilistic design, tolerance analysis, lessons learned. 3. Analyze: FEA, warranty data analysis/modeling, change point analysis, DRBFM, reliability prediction, reliability block diagrams, lessons learned. 4. Verify: HALT, evaluation testing, DRBTR, reliability growth. 5. Validate: design and process validation, accelerated testing, reliability demonstration. 6. Monitor and Control: HASS, control charts, re-validation, audits, look across, lessons learned, ORT. 5/7/2011 Ops A La Carte © 24
• 40. Key points for implementing DfR activities • Start DfR activities early in the process • The reliability engineer’s job is to lead/coach the design team • Integration of Reliability and Quality Engineers with design teams. • Warranty/field data analysis (both statistical and root cause analysis) needs to be fed back to both design and reliability teams. • Reduce the number of tools in the toolbox, but use the remaining ones well. Not all steps or tools are necessary for every program. 5/7/2011 Ops A La Carte © 25
• 41. Program Risk Assessment. Key to the Resource Management Ask the following questions at the beginning of the program: • Will this product contain any new technology with an unproven reliability record? • Will this design be significantly different from the old one (e.g. more than 30% of content is new)? • Will this product be used in different geographic regions or be exposed to more extreme environments? • Does this product have new requirements (e.g., 15 years of life instead of 10 years)? 5/7/2011 Ops A La Carte © 26
• 42. Program Risk Assessment. (Cont’d) Key to the Resource Management • Will this product have a new application (e.g., underhood vs. passenger compartment or military vs. automotive)? • Any new materials used in the design? • Will this product have new suppliers? • Will the product be made at a different manufacturing location? • Are there any other changes which can affect reliability? The more “yes” answers, the higher the risk, and the more DfR tools to use. 5/7/2011 Ops A La Carte © 27
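A minimal sketch of how the screen above can be tallied. The questions paraphrase the two slides; the three risk tiers and their cut-offs are assumptions for illustration, since the slides only say that more “yes” answers mean more risk and more DfR tools.

    RISK_QUESTIONS = [
        "New technology with unproven reliability record?",
        "More than ~30% of design content new?",
        "New geographic regions or more extreme environments?",
        "New requirements (e.g., 15 years of life instead of 10)?",
        "New application (e.g., underhood vs. passenger compartment)?",
        "New materials?",
        "New suppliers?",
        "New manufacturing location?",
        "Other changes that can affect reliability?",
    ]

    def program_risk(yes_answers):
        # Count the "yes" answers and map them to an assumed risk tier.
        score = sum(q in yes_answers for q in RISK_QUESTIONS)
        if score >= 5:
            return f"{score}/9 yes -> HIGH risk: apply the full DfR toolset"
        if score >= 2:
            return f"{score}/9 yes -> MEDIUM risk: apply selected DfR tools"
        return f"{score}/9 yes -> LOW risk: baseline DfR activities"

    print(program_risk({RISK_QUESTIONS[0], RISK_QUESTIONS[3]}))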
• 43. AIAG Reliability Maturity Assessment Categories • A. Reliability planning – 9 questions (Benchmarking, reliability planning, etc.) • B. Design for Reliability – 21 questions (FMEA, Design optimization, sneak circuits, etc.) • C. Reliability prediction and modeling – 7 questions (FTA, Reliability Prediction, etc.) • D. Reliability of mechanical components and systems – FEA, derating, and degradation analysis Note: category B is called “Design for Reliability”, although it contains only a subset of tools from the ‘traditional’ DfR process 5/7/2011 Ops A La Carte © 28
• 44. AIAG Reliability Maturity Assessment Categories • E. Statistical concepts – 4 questions (DOE, statistical tolerancing, etc.) • F. Failure reporting and analysis – 11 questions (problem solving, warranty databases, etc.) • G. Analyzing reliability data – 4 questions (Weibull, Reliability Growth) • H. Reliability testing – 7 questions (Test planning, HALT, HASS) • I. Reliability in manufacturing – 8 questions (ESS, MSA, PPAP) 5/7/2011 Ops A La Carte © 29
• 45. Example of the AIAG Reliability Maturity Assessment (radar plot of RMI scores by category, A. Reliability Planning through I. Reliability in Manufacturing, on a 0–100% scale, comparing the minimum B-level score, the minimum A-level score, and the organization’s average score). 5/7/2011 Ops A La Carte © 30
• 46. Challenges with Implementing DfR • Being early enough: time-to-market pressure and the rush to demonstrate lead teams to skip steps. • Reliability engineers are tied up on current projects and new projects are starting without them. • Getting the designers to understand DfR so that they can drive the program. • Culture – will it accept DfR? How do you get management buy-in? Requires patience. Requires addressing concerns of management. • “We are already good enough. Why do we need it?” 5/7/2011 Ops A La Carte © 31
  • 47. Overcoming Challenges • Cost justification • Management buy‐in • Education to designers • Voice of the customer • Case study/Successful demonstration • Ability to measure success (metrics) 5/7/2011 Ops A La Carte © 32
  • 48. Conclusion • DfR is the process which begins from the very  early stages of the design and should be  integrated into every stage of this process. • As a result of this process, reliability must be  designed into products and processes using the  best available science‐based methods. 5/7/2011 Ops A La Carte © 33
  • 49. What is DESIGN for RELIABILITY? © 2008 Ops A La Carte 1
  • 50. First we must ask: What is Reliability? Reliability is often considered quality over time. Reliability is… “The ability of a system or component to perform its required functions under stated conditions for a specified period of time” - IEEE 610.12-1990  We shall revisit this when we discuss Reliability Goal Setting. © 2008 Ops A La Carte 2
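A worked example of “quality over time”: under the simplest constant-failure-rate (exponential) model, the definition above becomes R(t) = exp(-t/MTBF). The numbers below are illustrative only.

    import math

    mtbf_hours = 200_000.0
    mission_hours = 8_760.0          # one year of continuous operation
    r = math.exp(-mission_hours / mtbf_hours)
    print(f"R(1 year) = {r:.3f}")    # ~0.957, i.e. ~95.7% survive the year

Note that the same MTBF gives a very different reliability for a different “specified period of time”, which is why the goal must state both.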
• 51. Different Views of Reliability  Product development teams view reliability as the domain to address mechanical, electrical, and manufacturing issues.  Customers view reliability as a system-level issue, with minimal concern placed on the distinction into sub-domains.  Since the primary measure of reliability is made by the customer, engineering teams must maintain a balance of both views (system and sub-domain) in order to develop a reliable product. (Diagram: Mechanical Reliability + Electrical Reliability + SW Reliability = System Reliability.) © 2008 Ops A La Carte 3
• 52. Reliability vs. Cost  Intuitively, the emphasis on reliability to achieve a reduction in warranty and in-service costs results in some minimal increase in development and manufacturing costs.  Use of the proper tools during the proper life cycle phase will help to minimize total Life Cycle Cost (LCC). © 2008 Ops A La Carte 4
  • 53. Reliability vs. Cost, continued To minimize total Life Cycle Costs (LCC), an organization must do two things: 1. Choose the best tools from all of the tools available and apply these tools at the proper phases of the product life cycle. 2. Properly integrate these tools together to assure that the proper information is fed forwards and backwards at the proper times. © 2008 Ops A La Carte 5
  • 54. Reliability Integration “the process of seamlessly, cohesively integrating reliability tools together to maximize reliability and at the lowest possible cost” © 2008 Ops A La Carte 6
• 55. Reliability vs. Cost, continued (Chart: cost vs. HW reliability. Reliability program costs rise as reliability increases while warranty costs fall; their sum, the total cost curve, has an optimum cost point.) Does this apply to SW Reliability? Not really. © 2008 Ops A La Carte 7
• 56. Reliability vs. Cost, continued (Chart: the same total cost curve drawn against system reliability; the warranty cost component is almost entirely HW.) The SW impact on HW warranty costs is minimal at best. © 2008 Ops A La Carte 8
• 57. Reliability vs. Cost, continued  SW has no associated manufacturing costs, so warranty costs and savings are almost entirely allocated to HW  If there are no cost savings associated with improving software reliability, why not leave it as is and focus on improving HW reliability to save money?  One study found that the root causes of typical embedded system failures were SW, not HW, by a ratio of 10:1.  Customers buy systems, not just HW.  The benefits of a SW Reliability Program are not in direct cost savings, but rather in:  Increased SW/FW staff availability and reduced operational schedules resulting from less corrective maintenance content.  Increased customer goodwill based on improved customer satisfaction. This will be explored in more detail during the S/W DFR Seminar © 2008 Ops A La Carte 9
  • 58. Design for Reliability (DfR) Tools by Phase © 2008 Ops A La Carte 10
• 59. System DfR Tools by Phase (Phase / Activities / Tools): Concept / Define project reliability requirements (Reliability Program and Integration Plan) / Benchmarking, Internal Goal Setting, Gap Analysis. Architecture and High Level Design / Modeling & Predictions / Reliability Modeling, System Failure Predictive Analysis (FMECA & FTA), Human Factors Analysis. Initial System Testing / Defect Detection at System Level / HALT, DVT. Final System Testing / Verify Reliability Metrics / RDT, V&V. Operations and Maintenance / Continuous assessment of product reliability / FRACAS, RCA. © 2008 Ops A La Carte 11
• 60. Hardware DfR Tools by Phase (Phase / Activities / Tools): Concept / Define HW reliability requirements / Benchmarking, Internal Goal Setting, Gap Analysis. Architecture & High Level Design / Modeling & Predictions, HW Fault Tolerance Design, Human Factors Analysis / Reliability Modeling, HW Failure Predictive Analysis (FMECA & FTA), Human Factors Analysis. Low Level Design / Reliability Analysis / Derating Analysis, Worst Case Analysis. Prototype (first time product is tested) / Detect design defects / HALT, ALT, DOE, Multi-variant Testing, RDT. Manufacturing / Identify and correct manufacturing process issues / HASS, HASA. Operations and Maintenance / Continuous assessment of HW reliability / ORT. © 2008 Ops A La Carte 12
• 61. Software DfR Tools by Phase (Phase / Activities / Tools): Concept / Define SW reliability requirements / Benchmarking, Internal Goal Setting, Gap Analysis. Architecture & High Level Design / Modeling & Predictions, SW Fault Tolerance Design / SW Failure Analysis, Human Factors Analysis. Low Level Design / Identify core, critical and vulnerable sections of the design; static detection of design defects / Human Factors Analysis, Derating Analysis, Worst Case Analysis. Coding / Static detection of coding defects / FRACAS, RCA. Unit Testing / Dynamic detection of design and coding defects / FRACAS, RCA. Integration and System Testing / SW Statistical Testing, SW Reliability Testing / FRACAS, RCA. Operations and Maintenance / Continuous assessment of product reliability / FRACAS, RCA. © 2008 Ops A La Carte 13
  • 62. ELEMENTS OF A RELIABILITY PROGRAM © 2008 Ops A La Carte 14
  • 63. Where to Start a DfR Program? A reliability assessment is our recommended first step in establishing a reliability program. This mechanism is the appropriate forum for selecting the best tools for each product life cycle phase. © 2008 Ops A La Carte 15
  • 64. RELIABILITY ASSESSMENT © 2008 Ops A La Carte 16
• 65. Reliability Program Assessment (diagram). A detailed evaluation of an organization’s approach and the processes involved in creating reliable products. Inputs: gap analysis, benchmarking, statistical data analysis, interviews. The assessment captures the current state (field failures, complaints, the cost of unreliability, unknown reliability) and leads to an actionable reliability program plan: initiate a reliability program, determine next best steps, select the right tools, reduce customer complaints, and improve reliability, driving profits, market share, and satisfaction toward the goal. © 2008 Ops A La Carte 17
  • 66. Steps within an Assessment • motivation • approach • results • findings • observations • next steps • close © 2008 Ops A La Carte 18
  • 67. Assessment Motivation • Identify systemic changes that impact reliability – Tie into culture and product – Both enjoy benefits • Provides roadmap for activities that achieve results – Matching of capabilities and expectations – Cooperative approach © 2008 Ops A La Carte 19
  • 68. Assessment Approach  Preparation  Checklist  Who to interview in organization  Analysis, average scores and summary of comments © 2008 Ops A La Carte 20
  • 69. Steps Involved  selecting people to survey  selecting survey topics  develop scoring system  data analysis  summary feedback results  review of results  recommended actions © 2008 Ops A La Carte 21
  • 70. Select People to Survey Hardware:  Hardware manager  Electrical engineering lead  Mechanical engineering lead  System engineering lead  Reliability manager/engineer  Procurement  Manufacturing Software:  sw r&d manager  sw r&d engineer  sw test manager  sw test engineer © 2008 Ops A La Carte 22
  • 71. Select Survey Topics DFR Methods Survey Scoring: 4 = 100%, top priority, always done 3 = >75%, use normally, expected 2 = 25% - 75%, variable use 1 = <25%, only occasional use 0 = not done or discontinued - = not visible, no comment Management: □ Goal setting for division □ Priority of quality & reliability improvement □ Management attention & follow up (goal ownership) Design: □ Documented hardware design cycle □ Goal setting by product or module © 2008 Ops A La Carte 23
• 72. Example  To what extent is FMEA used?  Design Engineer Score = 1: Used only as a troubleshooting tool  Manufacturing Engineer Score = 3: Commonly used on critical design elements  Reliability Engineer Score = 4: Always used on all products Results: average score 2.7 Comments: Clearly a disconnect between reliability and design engineering – indicative of a problem with the tool. © 2008 Ops A La Carte 24
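A sketch of how such interview scores can be rolled up across respondents. The spread-based “disconnect” flag and its threshold are illustrative assumptions; the point of the example above is that a large spread between roles signals a problem with the tool’s deployment that the average alone would hide.

    from statistics import mean, pstdev

    # Scores per topic and role, as gathered in the interviews (0..4 scale).
    scores = {"FMEA usage": {"Design Eng": 1, "Mfg Eng": 3, "Reliability Eng": 4}}

    for topic, by_role in scores.items():
        vals = list(by_role.values())
        avg, spread = mean(vals), pstdev(vals)
        flag = "  <-- disconnect across roles" if spread > 1.0 else ""
        print(f"{topic}: avg {avg:.1f}, spread {spread:.1f}{flag}")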
  • 73. Reliability Maturity Grid • 5 levels of maturity • Loosely based on IEEE 1624: “Reliability Program for the Development and Production of Electronic Products” • Similar to Crosby’s Quality Maturity • On the following page is a matrix based on Crosby’s as an example. • Read across each row and find the statement that seems most true for your organization. • The center of mass of the levels is the organization’s overall level. © 2008 Ops A La Carte 25
• 74. Reliability Maturity Matrix (five stages: I Uncertainty, II Awakening, III Enlightenment, IV Wisdom, V Certainty), by measurement category: Management attitude. I: No comprehension of reliability as a management tool; tend to blame reliability engineering for ‘reliability problems’. II: Recognizing that reliability management may be of value but not willing to provide money or time to make it happen. III: Still learning more about reliability management; becoming supportive and helpful. IV: Participating; understand the absolutes of reliability management; recognize their personal role in continuing emphasis. V: Understanding; consider reliability management an essential part of the company system. Reliability status. I: Reliability is hidden in manufacturing or engineering departments; reliability testing probably not part of the organization; emphasis on initial product functionality. II: A stronger reliability leader appointed, yet main emphasis is still on an audit of initial product functionality; reliability testing still not performed. III: Reliability manager reports to top management, with a role in management of the division. IV: Reliability manager is an officer of the company; effective status reporting and preventive action; involved with consumer affairs. V: Reliability manager is on the board of directors; prevention is the main concern; reliability is a thought leader. Problem handling. I: Fire fighting; no root cause analysis or resolution; lots of yelling and accusations. II: Teams are set up to solve major problems; long-range solutions are not identified or implemented. III: Corrective action process in place; problems are recognized and solved in an orderly way. IV: Problems are identified early in their development; all functions are open to suggestion and improvement. V: Except in the most unusual cases, problems are prevented. Cost of reliability as % of net revenue. I: Warranty unknown, Reported unknown, Actual 20%. II: Warranty 3%, Reported unknown, Actual 18%. III: Warranty 4%, Reported 8%, Actual 12%. IV: Warranty 3%, Reported 6.5%, Actual 8%. V: Warranty 1.5%, Reported 3%, Actual 3%. Feedback process. I: None; no reliability testing; no field failure reporting other than customer complaints and returns; designers and manufacturing do not get meaningful information. II: Some understanding of field failures and complaints; field failures analyzed and root causes reported; reliability testing done to augment reliability models. III: Accelerated testing of critical systems during design; system-level modeling and testing. IV: Refinement of testing, only testing critical or uncertain areas; increased understanding of causes of failure allows deterministic failure-rate prediction models. V: The few field failures are fully analyzed, and product designs or procurement specifications are altered. DFR program status. I: No organized activities; no understanding of such activities. II: Organization told reliability is important; DFR tools and processes inconsistently applied and only ‘when time permits’. III: Implementation of a DFR program with thorough understanding and establishment of each DFR tool. IV: DFR program active in all areas of the division, not just design and manufacturing; a normal part of R&D and manufacturing. V: Reliability improvement is a normal and continued activity. Summation of reliability posture. I: “We don’t know why we have problems with reliability.” II: “Is it absolutely necessary to always have problems with reliability?” III: “Through commitment and reliability improvement we are identifying and resolving our problems.” IV: “Failure prevention is a routine part of our operation.” V: “We know why we do not have problems with reliability.” © 2008 Ops A La Carte 26
• 75. Reliability Maturity Matrix. Let’s look at one row to get a better understanding. Measurement category: Problem handling. Stage I (Uncertainty): fire fighting; no root cause analysis or resolution; lots of yelling and accusations. Stage II (Awakening): teams are set up to solve major problems; long-range solutions are not identified or implemented. Stage III (Enlightenment): corrective action process in place; problems are recognized and solved in an orderly way. Stage IV (Wisdom): problems are identified early in their development; all functions are open to suggestion and improvement. Stage V (Certainty): except in the most unusual cases, problems are prevented. © 2008 Ops A La Carte 27
• 76. Results & Meaning • Looking for trends, gaps in process, skill mismatches, over-analysis, under-analysis, etc. • Looking for differences across the organization, pockets of excellence, areas with good results • The process provides a snapshot of the current system • No one tool makes an entire reliability program. The tools need to match the needs of the products and the culture. • A check step is critical before moving to recommendations around an improvement plan © 2008 Ops A La Carte 28
• 77. HW Observations. What companies are doing best:  Prediction  HALT  Golden nuggets  Fast reaction to fix problems. What companies are weak at:  Goal setting/Planning  Repair & warranty invisible  Lessons learned capture  Single owner of product reliability  Multiple defect tracking systems  Reliability Integration  Statistics © 2008 Ops A La Carte 29
• 78. SW Observations. What companies are doing best:  Unit Testing  Bug tracking database. What companies are weak at:  Synergy with the Hardware Team  Reliability goal setting for SW  Sufficient development best-practices  Lessons learned capture  Explicit SW reliability measurements and metrics  Effective system testing © 2008 Ops A La Carte 30
  • 79. Typical Recommended Tools Based on Assessments • Goal Setting • Writing Solid Plans and Executing (with check steps) • Predictions (not just to get an MTBF number) • FMEAs • ALT • HALT • Lessons Learned • Field Data Review © 2008 Ops A La Carte 31
  • 80. Next Steps • Determine current state of your organization (Summary of Assessment) – Identify strong and weak areas • Goal Setting – Market Analysis to gather requirements – Benchmarking • Gap Analysis • Develop plan and implement © 2008 Ops A La Carte 32
• 81. Reliability Philosophies Two fundamental methods of achieving high product reliability: Build, Test, Fix Analytical Approach © 2008 Ops A La Carte 33
• 82. Build, Test, Fix  In any design there are a finite number of flaws.  If we find them, we can remove them.  Rapid prototyping  HALT  Large field trials or ‘beta’ testing  Reliability growth modeling © 2008 Ops A La Carte 34
  • 83. Analytical Approach  Develop goals  Model expected failure mechanisms  Conduct accelerated life tests  Conduct reliability demonstration tests  Routinely update system level model  Balance of simulation/testing to increase ability of reliability model to predict field performance. © 2008 Ops A La Carte 35
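For ‘routinely update system level model’, the simplest analytical model is a series reliability block diagram with constant failure rates, where subsystem rates add. A minimal Python sketch; the subsystem names and rates are illustrative assumptions.

    import math

    # Failures per hour for each series subsystem (made-up values):
    lambdas = {"power supply": 2e-6, "controller": 1e-6, "motor drive": 4e-6}

    lam_sys = sum(lambdas.values())      # series system: failure rates add
    mtbf_sys = 1.0 / lam_sys
    r_one_year = math.exp(-lam_sys * 8_760.0)
    print(f"system MTBF = {mtbf_sys:,.0f} h, R(1 yr) = {r_one_year:.3f}")

The same structure runs in reverse for allocation: a system-level goal is apportioned into subsystem failure-rate budgets.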
• 84. Issues with each approach • Build, Test, Fix:  Uncertain if design is good enough  Limited prototypes mean limited flaws discovered  Unable to plan for warranty or field service • Analytical:  Fixes mostly known flaws  ALTs take too long  RDTs take even longer  Models have large uncertainty with new technology and environments © 2008 Ops A La Carte 36
• 85. Balanced approach Goal → Plan → FMEA → Prediction → HALT → RDT/ALT → Verification → Review © 2008 Ops A La Carte 37
• 89. RELIABILITY GOAL-SETTING Establish the target in a manner that is meaningful to engineering © 2008 Ops A La Carte 41
  • 90. Reliability Definition (revisited) Reliability is often considered quality over time. Reliability is… “The ability of a system or component to perform its required functions under stated conditions for a specified period of time” IEEE 610.12-1990 © 2008 Ops A La Carte 42
• 91. Reliability Goals & Metrics Summary  Reliability Goals & Metrics tie together all stages of the product life cycle. Well-crafted goals provide the target for the business to achieve; they set the direction.  Metrics provide:  the milestones,  the “are we there, yet”, and  the feedback that all elements of the organization require to stay on track toward the goals. © 2008 Ops A La Carte 43
  • 92. Reliability Goal-Setting  Reliability Goals can be derived from:  Customer-specified or implied requirements  Internally-specified or self-imposed requirements (usually based on trying to be better than previous products)  Benchmarking against competition © 2008 Ops A La Carte 44
  • 93. Reliability Goal-Setting  Customer-specified or implied requirements  Many times the customer will specify the reliability requirements for the product • MTBF, MTTR, Availability, DOA Rate, and Return Rate are the most common, but there are many others  Sometimes, the customer will not specify the exact requirements, but there will be implied requirements • “Product must be ‘highly reliable’ over its life” • “The product should not fail in a way that requires a drilling session to be aborted.” • “A partial loss of collection data is allowed.” © 2008 Ops A La Carte 45
  • 94. Reliability Goal-Setting  Internally-specified or self-imposed requirements  These are usually based on trying to be better than previous products  The process involves interviewing key members of various departments and at contract manufacturing partners to find out what they have set as internal goals  These goals may need to be adjusted as information is gathered, but this represents a good starting point © 2008 Ops A La Carte 46
  • 95. Reliability Goal-Setting  Internally-specified Goals (Based on Trying to be Better than Previous Products)  Often, companies will set an internal goal to improve reliability by X% from one generation to the next.  It is not uncommon for this factor to be as high as 2x.  For SW, internal improvement goals require changes to development processes: • Goals less than 2x can generally be achieved by adjustments to existing processes • Goals of 2x or higher usually require significant changes to existing processes or the adoption of new development practices © 2008 Ops A La Carte 47
• 96. Reliability Goal-Setting  Internally-specified Goals (Based on Interviewing Key Members of Various Departments)  Key individuals from various departments within the company, such as – marketing and sales – hardware and software engineering – customer service and field support – manufacturing and test – quality and reliability  Key individuals at Contract Manufacturing partners © 2008 Ops A La Carte 48
  • 97. Reliability Goal-Setting  Internally-specified Goals (May Need to be Adjusted as Information is Gathered, but This Represents a Good Starting Point)  New goals from customers may supersede any internal goals  Information from Gap Analysis may cause us to change our goals – If Gap is unrealistically high, it may make sense to reduce goals so that they are obtainable © 2008 Ops A La Carte 49
• 98. Reliability Goal-Setting  Benchmarking Against Competition  Benchmarking is the process of comparing the current project, methods, or processes with the best practices in the industry  Benchmarking is crucial to both a start-up and an established company that is coming out with a new product, to assure that the new product is competitive based on reliability and cost.  Benchmarking is often useful even if your customer has specified the reliability requirements, so that we get a “sanity check” against the rest of the industry. © 2008 Ops A La Carte 50
  • 99. Reliability Goal-Setting  Benchmarking Key  Work with Marketing – Marketing knows who competitors are – Marketing knows what customers are asking for Work with Marketing to Marry Up Requirements! © 2008 Ops A La Carte 51
• 100. Reliability Goal-Setting  Product vs. Process Benchmarking  Product Benchmarking: Comparing product requirements such as failure rate, MTBF, DOA rate, Annualized Failure Rate, Availability, Maintainability, and more.  Process Benchmarking: Comparing process methodologies such as in-house vs. outsource builds, quality philosophy, and screening methods. © 2008 Ops A La Carte 52
  • 101. Reliability Goal-Setting  Reliability Goals – Which Should We Use ?  Customer-specified or implied requirements ?  Internally-specified or self-imposed requirements ?  Benchmarking ?  For Best Results, Use All Three ! © 2008 Ops A La Carte 53
  • 102. Reliability Goals & Metrics Summary  A reliability goal includes each of the five elements of the reliability definition.  Probability of product performance  Intended function  Specified life  Specified operating conditions  Customer expectations © 2008 Ops A La Carte 54
• 103. Reliability Goals & Metrics Summary  A reliability metric is often something that the organization can measure on a relatively short, periodic basis:  Predicted failure rate (during design phase)  Field failure rate  Warranty  Actual field return rate  Dead on Arrival rate © 2008 Ops A La Carte 55
  • 104. Fully-Stated Reliability Goals  System goal at multiple points  Supporting metrics during development and field  Apportionment to appropriate level  Provide connections to overall business plan, contracts, customer expectations, and include any assumptions concerning financials  Benefit: clear target for development, vendor and production teams. © 2008 Ops A La Carte 56
• 105. Reliability Goal  Let's say we expect a few failures in one year: less than 2%, in a laboratory environment, for the XYZ function.  Goal: XYZ function for one year with 98% reliability in the lab.  Assuming a constant failure rate, R(t) = e^(-t/θ), so θ = -t / ln R(t) = -8760 / ln(0.98), and the MTBF is 433,605 hrs. © 2008 Ops A La Carte 57
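To make the slide's arithmetic concrete, here is a minimal Python sketch (our own illustration, not seminar code) that derives the required MTBF from a one-year, 98% reliability goal under the constant-failure-rate assumption:

    import math

    # Exponential (constant failure rate) model: R(t) = exp(-t / theta)
    t = 8760.0       # one year of operation, in hours
    R_goal = 0.98    # at most 2% failures in the first year

    theta = -t / math.log(R_goal)   # required MTBF, in hours
    lam = 1.0 / theta               # equivalent failure rate, failures per hour

    print(f"Required MTBF: {theta:,.0f} hours")        # about 433,600 hours, as on the slide
    print(f"Failure rate:  {lam:.2e} failures/hour")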
  • 106. Other Points in Time  Also consider the bathtub curve  Infant mortality, out of box type failures  Shipping damage  Component defects, manufacturing defects  Wear out related failures  Bearings, connectors, solder joints, e-caps © 2008 Ops A La Carte 58
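The bathtub regions can be made quantitative with the Weibull hazard function: a shape parameter beta < 1 gives the decreasing infant-mortality rate, beta = 1 the constant useful-life rate, and beta > 1 the increasing wear-out rate. A minimal sketch with illustrative parameter values (not from the seminar):

    # Weibull hazard: h(t) = (beta / eta) * (t / eta) ** (beta - 1)
    def weibull_hazard(t, beta, eta=8760.0):
        return (beta / eta) * (t / eta) ** (beta - 1)

    for beta, region in [(0.5, "infant mortality"), (1.0, "useful life"), (3.0, "wear-out")]:
        early, late = weibull_hazard(100.0, beta), weibull_hazard(8760.0, beta)
        trend = "decreasing" if early > late else "constant" if early == late else "increasing"
        print(f"beta = {beta}: {region} region, hazard is {trend}")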
• 107. Apportionment of Goals  Let's look at an example  A computer with a one-year warranty, where the business model requires less than 5% failures within the first year.  A desktop business computer in an office environment with 95% reliability at one year. © 2008 Ops A La Carte 59
  • 108. Apportionment of Goals  For simplicity consider five major elements of the computer  Motherboard  Hard Disk Drive  Power Supply  Monitor  Keyboard  For starters, let’s give each sub-system the same goal © 2008 Ops A La Carte 60
• 109. Apportionment of Goals Computer R = 0.95 Motherboard HDD P/S Monitor Keyboard R = 0.99 R = 0.99 R = 0.99 R = 0.99 R = 0.99 Assuming failures within each sub-system are independent, the simple multiplication of the reliabilities should result in meeting the system goal: 0.99 * 0.99 * 0.99 * 0.99 * 0.99 ≈ 0.95 Given no history or vendor data, this is just a starting point. © 2008 Ops A La Carte 61
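Equal apportionment is just the n-th root of the system goal: with five independent sub-systems, each needs R = 0.95^(1/5) ≈ 0.99. A quick check in Python (a sketch, not seminar material):

    n = 5
    R_system_goal = 0.95
    R_sub = R_system_goal ** (1.0 / n)          # equal apportionment
    print(f"Per-sub-system goal: {R_sub:.4f}")      # 0.9899, i.e. roughly 0.99
    print(f"Round-trip check:    {R_sub ** n:.4f}")  # 0.9500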
  • 110. Estimate Reliability  The next step is to determine the sub-system reliability.  Historical data from similar products  Reliability estimates/test data by vendors  In house reliability testing  At first estimates are crude, refine as needed to make good decisions. © 2008 Ops A La Carte 62
  • 111. Apportionment of Goals Computer R = 0.95 Goals Motherboard HDD P/S Monitor Keyboard R = 0.99 R = 0.99 R = 0.99 R = 0.99 R = 0.99 Estimates Motherboard HDD P/S Monitor Keyboard R = 0.96 R = 0.98 R = 0.999 R = 0.99 R = 0.999 First pass estimates do not meet system goal. Now what? © 2008 Ops A La Carte 63
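Rolling up the first-pass estimates shows the shortfall numerically; a sketch using the slide's figures (the dictionary layout is our own):

    from math import prod  # Python 3.8+

    goals = {"Motherboard": 0.99, "HDD": 0.99, "P/S": 0.99,
             "Monitor": 0.99, "Keyboard": 0.99}
    estimates = {"Motherboard": 0.96, "HDD": 0.98, "P/S": 0.999,
                 "Monitor": 0.99, "Keyboard": 0.999}

    # Independent sub-systems: the system reliability is the product
    print(f"System goal:     {prod(goals.values()):.3f}")      # ~0.951
    print(f"System estimate: {prod(estimates.values()):.3f}")  # ~0.930, misses the 0.95 goal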
• 112. Resolving the Gap  CPU/motherboard: goal 99%, estimate 96% – the largest gap and the lowest estimate.  Use the simple reliability model to determine whether a given reliability improvement will impact the system reliability; e.g., changing the BIOS reliability from 99.9% to 99.99% will not significantly alter the system result.  First, will fixing the known issues bridge the difference?  Second, if not enough, use FMEA and HALT to populate a Pareto of what to fix, and invest in improvements that will impact the system reliability.  Third, validate the improvements. © 2008 Ops A La Carte 64
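The same simple model answers the question of which improvement is worth buying. A sketch, repeating the hypothetical sub-system estimates from above:

    from math import prod

    estimates = {"Motherboard": 0.96, "HDD": 0.98, "P/S": 0.999,
                 "Monitor": 0.99, "Keyboard": 0.999}

    def system_with(name, value):
        trial = dict(estimates)
        trial[name] = value   # swap in a candidate improvement
        return prod(trial.values())

    print(f"Baseline:                 {prod(estimates.values()):.3f}")          # ~0.930
    print(f"Motherboard 0.96 -> 0.99: {system_with('Motherboard', 0.99):.3f}")  # ~0.959, closes most of the gap
    print(f"P/S 0.999 -> 0.9999:      {system_with('P/S', 0.9999):.3f}")        # ~0.930, negligible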
• 113. Resolving the Gap (continued)  HDD: goal 0.99, estimate 0.98 – a small gap with a clear path to resolve.  HDD reliability and operating temperature are related: lowering the internal temperature will improve HDD reliability.  When the relationship between the failure mode and the design or environmental conditions is known, we do not need FMEA or HALT – go straight to design improvements.  Use ALT to validate the model and/or design improvements. © 2008 Ops A La Carte 65
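This kind of temperature dependence is commonly modeled with an Arrhenius acceleration factor. A hedged sketch: the 0.5 eV activation energy and the 40 C / 50 C internal temperatures are assumed values for illustration, not seminar data:

    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_factor(t_low_c, t_high_c, ea_ev=0.5):
        # Failure-rate multiplier when running at t_high_c instead of t_low_c (deg C)
        t_low, t_high = t_low_c + 273.15, t_high_c + 273.15
        return math.exp((ea_ev / K_B) * (1.0 / t_low - 1.0 / t_high))

    # Running the drive 10 C hotter under these assumptions:
    print(f"Failure-rate multiplier: {arrhenius_factor(40.0, 50.0):.2f}x")  # ~1.8x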
• 114. Resolving the Gap (continued)  P/S: goal 0.99, estimate 0.999 – the estimate exceeds the goal.  Further improvement is not cost effective given the minimal impact on system reliability.  For any subsystem that exceeds its reliability goal, explore potential cost savings by reducing the reliability performance. This is only done when there are accurate reliability estimates and significant cost savings.  Possible to reduce reliability (select a less expensive model) and use the savings to improve the CPU/motherboard. © 2008 Ops A La Carte 66
• 115. Progression of Estimates Initial Engineering Guess or Estimate → Test Data → Vendor Data → Actual Field Data © 2008 Ops A La Carte 67
  • 116. Microsoft Model  Classic Model: Get feedback to the design and manufacturing team that permits visibility of the reliability gap. Permit comparison to goal.  Microsoft Model: Not estimating or measuring the reliability during design is something I call the Microsoft model. Just ship it, the customers will tell you what needs improvement. Don’t try the Microsoft Model! It works for them (on the software side) but probably won’t work for you (note that it did not work for them on the Xbox) © 2008 Ops A La Carte 68
  • 117. RELIABILITY PROGRAM AND INTEGRATION PLAN © 2008 Ops A La Carte 69
• 118. Planning Introduction “The purpose of this task is to develop a reliability program which identifies, and ties together, all program management tasks required to accomplish program requirements.” - MIL-STD-785, Task 101 (Reliability Program Plan) © 2008 Ops A La Carte 70
  • 119. Motivation for a Reliability Integration Plan  Customer requirements  Meet terms of contract  Meet customer expectations  Business opportunity  Reduce expenses  Improve brand perception  Employee opportunity  Provide direction  Excite empowerment © 2008 Ops A La Carte 71
  • 120. Reliability Program and Integration Plan  A Reliability Program and Integration Plan is crucial at the beginning of the product life cycle because in this plan, we define:  What are the overall goals of the product and of each assembly that makes up the product ?  What has been the past performance of the product ?  What is the size of the gap ?  What reliability elements/tools will be used ?  How will each tool be implemented and integrated to achieve the goals ?  What is our schedule for meeting these goals ? © 2008 Ops A La Carte 72
  • 121. Reliability Program and Integration Plan  The overall goals of a Reliability Program Plan  The goals are typically in the form of MTBF or Availability but can be about any measurable activity. At a minimum, the goals that are generally measured are: • Out of box failure rate • Reliability within warranty period • Reliability throughout life of product • Preventive maintenance / End-of-life goal © 2008 Ops A La Carte 73
  • 122. Reliability Program and Integration Plan  What has been the past performance of the product ?  For past performance, we can use data from – Field analysis – HALT – Any other reliability studies – Predictions – If this is the first product, we can benchmark the product against competitors in the industry and use their data © 2008 Ops A La Carte 74
• 123. Reliability Program and Integration Plan  What is the size of the Gap ?  The gap analysis is a key part of the plan because it – sets the expectation on how much improvement is necessary from the previous generation – helps dictate the tools that will be needed to reach the new reliability goals – helps dictate the schedule and how long it will take to achieve these goals © 2008 Ops A La Carte 75
  • 124. Reliability Program and Integration Plan  What is the size of the Gap ? [Breakdown by Assembly]  To make this task more manageable, we must break down by Assembly – What are the results for the current product by Assembly ? – What are the goals for the new product by Assembly ? – What is the Gap by Assembly ? © 2008 Ops A La Carte 76
  • 125. Reliability Program and Integration Plan  What is the size of the Gap ? (continued) [Breakdown by Assembly]  Now that we understand the size of our Gap by Assembly, we must understand what is driving this Gap – Was it a particular design issue on the previous product ? – Were the returns largely DOA’s ?  Once we understand this, we are in a better position to choose the appropriate reliability tool to overcome this gap © 2008 Ops A La Carte 77
  • 126. Reliability Program and Integration Plan  State Constraints or Limiting Factors  Time constraints  Money or budget constraints  Resources have not been allocated  Engineering approaches related to reliability, including predetermined vendor selections © 2008 Ops A La Carte 78
• 127. Reliability Program and Integration Plan  Time Constraints  We don’t have enough time to properly execute the program. Perhaps we need to increase the sample size in our testing to accelerate the test results. Or perhaps we push some of the testing back onto the suppliers. © 2008 Ops A La Carte 79
• 128. Reliability Program and Integration Plan  Money or Budget Constraint  Here we face the opposite problem as with a Time Constraint. Now we have a constraint on money, so we may need to stretch out the testing and get more test information with fewer samples. Or we may elect to spend more time in the design before jumping into prototype testing, using less expensive design analysis tools rather than the prototype tools. © 2008 Ops A La Carte 80
• 129. Reliability Program and Integration Plan  Resource Constraint  This may require that we go outside and look for help from consultants or contractors. There are always resources that can help, even if we don’t have them within the company. © 2008 Ops A La Carte 81
  • 130. Reliability Program and Integration Plan  Pre-Determined Methods  Engineering approaches related to reliability, including predetermined vendor selections. This may require us to justify why we have apportioned the reliability to the assemblies the way we did. © 2008 Ops A La Carte 82
• 131. Reliability Program and Integration Plan  What reliability elements/tools will be used ?  Based on the size of the gap AND what is driving this gap, we will choose which reliability tools to implement  If the gap is large, we will need to invest a lot of resources in the design tools prior to prototyping and testing the new product, such as: – Design of experiments – FMECA’s – Tolerance analyses  If the gap is small, we may decide to invest more resources in the prototype tools such as: – HALT – Reliability Demonstration / Life Tests  If the gap is largely a result of DOA’s and production escapes, we may want to invest more effort into developing good manufacturing reliability tools such as HASS and HASA. © 2008 Ops A La Carte 83
• 132. Reliability Program and Integration Plan  What reliability elements/tools will be used ?  As with most programs, the gap will likely fall somewhere in between. So, we must develop a well-balanced program that has selected tools from each of the phases – Design tools – Prototype tools – Manufacturing tools © 2008 Ops A La Carte 84
• 133. Reliability Program and Integration Plan  How will each tool be implemented and integrated to achieve the goals ?  The implementation and integration of each tool is perhaps the most difficult to plan. Here we must estimate the effects each tool will have on the overall reliability to understand how we are closing the gap  For this, we must look at specific issues that occurred on previous products and understand how a specific tool will help mitigate this issue on the next generation  If we can quantify the effect an issue had and we can quantify the reduction as a result, then we have evaluated how we are going to close the gap © 2008 Ops A La Carte 85
  • 134. Reliability Program and Integration Plan  How will each tool be implemented and integrated to achieve the goals ? AN EXAMPLE  Our current product is running at a 0.25% DOA rate per month and our goal is to reduce this by 50%.  The DOAs tend to focus around solder issues.  For this next generation, we decide to choose HASS as our tool to solve this.  Through research, we determine that HASS is 90% effective in finding and preventing solder defects from escaping into the field.  We write in our plan that we expect to meet and exceed our 50% reduction. © 2008 Ops A La Carte 86
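A quick sanity check of that claim; the 70% solder share of DOAs is an assumed figure for illustration, while the 0.25% rate, the 50% goal, and the 90% effectiveness come from the example:

    doa_rate = 0.25            # % DOA per month, current product
    target = doa_rate * 0.5    # 50% reduction goal: 0.125%
    solder_share = 0.70        # assumed fraction of DOAs that are solder-related
    hass_capture = 0.90        # fraction of solder escapes HASS prevents (per the example)

    projected = doa_rate * (1.0 - solder_share * hass_capture)
    print(f"Projected DOA rate: {projected:.4f}% vs target {target:.4f}%")  # 0.0925% beats 0.1250%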
  • 135. Reliability Program and Integration Plan  How will each tool be implemented and integrated to achieve the goals ? AN EXAMPLE (continued)  But we are not done there. What did we forget ? © 2008 Ops A La Carte 87
• 136. Reliability Program and Integration Plan  How will each tool be implemented and integrated to achieve the goals ? AN EXAMPLE (continued)  How will we implement and integrate ?  To say that we will use HASS and that it is possible is one thing, but how will we do it? In our Reliability Program Plan, we need to: – determine at what level HASS will be performed (assembly or system) – outline functional and environmental equipment needed – determine production needs and throughput – understand manpower needs  Are we done there ? Not quite... © 2008 Ops A La Carte 88
  • 137. Reliability Program and Integration Plan  How will each tool be implemented and integrated to achieve the goals ? AN EXAMPLE (continued)  Next we must explain the integration.  What tools will feed into HASS in order to make it successful ? And how will they be used ? – Predictions – explain the first year multiplier – FMECA – understand technology limiting devices – HALT – develop margins  What tools will HASS feed into ? – Field Failure Tracking System – monitor DOA’s – Repair Depot – how to reduce NTF’s © 2008 Ops A La Carte 89
  • 138. Reliability Program and Integration Plan  What is our schedule for meeting these goals ?  The last piece of our Reliability Program Plan is the schedule.  With an infinite amount of time (and money) we can achieve any reliability. But we do not have the luxury !  We must schedule our reliability activities and assure that they are aligned with the schedule for the overall program. © 2008 Ops A La Carte 90
• 139. Reliability Program and Integration Plan  What is our schedule for meeting these goals ?  First, we determine the order of occurrence of the tools. If we did a good job describing the tools and the integration of each, then this should be straightforward.  Next we estimate a length of time for each tool.  Then, we put on an integration timeline along with dependencies.  Finally, we must compare with the master project schedule and make adjustments as necessary. © 2008 Ops A La Carte 91
• 140. Reliability Schedule as Part of the Plan

ID | Task Name | Duration | Start | Finish | % Complete | Deliverable | Predecessors
1 | Reliability | 504 days | Mon 1/6/03 | Tue 12/7/04 | 70% | |
2 | Reliability During Concept | 45 days | Mon 1/6/03 | Fri 3/7/03 | 100% | |
3 | Reliability Benchmarking | 10 days | Mon 1/6/03 | Fri 1/17/03 | 100% | Rel Plan |
4 | Establishing Reliability Targets | 5 days | Mon 3/3/03 | Fri 3/7/03 | 100% | Rel Plan |
5 | Predictive Modeling | 70 days | Mon 6/9/03 | Fri 9/12/03 | 100% | |
6 | Power Supply | 60 days | Mon 6/9/03 | Fri 8/29/03 | 100% | |
7 | Initial Draft | 15 days | Mon 6/9/03 | Fri 6/27/03 | 100% | Report |
8 | Provide Stress Values for Each Component | 25 days | Mon 7/28/03 | Fri 8/29/03 | 100% | Study | 7
9 | Electronics Prediction | 30 days | Mon 8/4/03 | Fri 9/12/03 | 100% | |
10 | LCD/Touch Screen | 30 days | Mon 8/4/03 | Fri 9/12/03 | 100% | Report |
11 | Control System Software/Electronics | 25 days | Mon 8/4/03 | Fri 9/5/03 | 100% | Report |
12 | Total System Prediction | 163 days | Fri 8/1/03 | Mon 3/15/04 | 100% | Report |
13 | Perform prediction | 41 days | Fri 8/1/03 | Fri 9/26/03 | 100% | |
14 | Signatures on report/into DHF | 123 days | Fri 9/26/03 | Mon 3/15/04 | 100% | |
15 | HALT Testing | 202 days | Mon 6/9/03 | Mon 3/15/04 | 100% | |
16 | Power Supply HALT | 202 days | Mon 6/9/03 | Mon 3/15/04 | 100% | |
17 | Power Supply HALT Protocol | 1 day | Mon 6/9/03 | Mon 6/9/03 | 100% | Protocol |
18 | Power Supply HALT | 194 days | Thu 6/19/03 | Mon 3/15/04 | 100% | |
19 | Power Supply HALT/Report | 194 days | Thu 6/19/03 | Mon 3/15/04 | 100% | Report |
20 | System Level HALT | 110 days | Wed 10/15/03 | Mon 3/15/04 | 100% | |
21 | System HALT Protocol | 44 days | Wed 10/15/03 | Fri 12/12/03 | 100% | Protocol |
22 | System HALT | 5 days | Mon 12/15/03 | Fri 12/19/03 | 100% | | 21
23 | System HALT Report | 58 days | Thu 12/25/03 | Mon 3/15/04 | 100% | Report | 22
24 | Accelerated Life Testing | 95 days | Mon 6/9/03 | Fri 10/17/03 | 0% | |
25 | Power Supply | 95 days | Mon 6/9/03 | Fri 10/17/03 | 0% | |
26 | Life Test Protocol | 95 days | Mon 6/9/03 | Fri 10/17/03 | 0% | Protocol |
27 | Final Report | 14 days | Mon 6/9/03 | Thu 6/26/03 | 0% | Report |
28 | Manufacturing Reliability | 318 days | Tue 9/23/03 | Tue 12/7/04 | 30% | |
29 | Review ESS Process/Make Recommendations | 51 days | Tue 9/23/03 | Mon 12/1/03 | 100% | Study/Memo |
30 | HASA Feasibility Study | 120 days | Tue 12/2/03 | Sat 5/15/04 | 50% | Study/Memo | 29
31 | HASS Development Protocol | 5 days | Wed 3/3/04 | Tue 3/9/04 | 0% | Protocol |
32 | Setup HASS Process | 60 days | Mon 9/6/04 | Fri 11/26/04 | 0% | Study/Memo |
33 | Field Monitoring Protocol | 5 days | Sat 5/1/04 | Thu 5/6/04 | 0% | Protocol |
34 | Field Monitoring/Reporting | 130 days | Wed 6/9/04 | Tue 12/7/04 | 0% | Report |

© 2008 Ops A La Carte 92