This document discusses industrial engineering. It defines industrial engineering as the discipline concerned with designing integrated systems of people, materials, equipment, and processes to increase productivity, traces the history and evolution of the field, and describes key industrial engineering activities such as work measurement, facilities design, quality control, and the use of operations research to improve processes. Productivity and its relationship to economic growth and living standards are also examined.
INDUSTRIAL ENGINEERING
12/1/2010
ASSIGNMENT NO-01
METHOD & TIME STUDY OF OPERATIONS WITH MOTION
ECONOMICS
OPERATIONS:
1. HEMMING
2. LAP FELT SEAM
3. BOUND SEAM
SUBMITTED BY:
DILIP SINGH (ROLL NO-11)
KUMAR SARVESH (ROLL NO-13)
RAJEEV SHARAN (ROLL NO-23)
TIME STUDY & MOTION ECONOMICS
DILIP SINGH KUMAR SARVESH RAJEEV SHARAN
INDUSTRIAL ENGINEERING
December 1, 2010
What is Industrial Engineering?
Definition of Industrial Engineering - The Work of an Industrial
Engineer
The field of engineering is subdivided into several major disciplines, such as
mechanical engineering, electrical engineering, civil engineering,
electronics engineering, chemical engineering, metallurgical engineering,
and industrial engineering. Each of these disciplines can be
subdivided further. Industrial engineering integrates knowledge and skills
from several fields of science: the technical sciences, the economic
sciences, and the human sciences, all of which can be supported by
skills in the information sciences. The industrial engineer applies
knowledge from these sciences in order to increase the productivity of
processes, achieve quality products, and ensure labour safety.
BFTECH-05 / 2008-12/ NIFT BANGALORE |
Hence, we can define Industrial Engineering as given below:
“Industrial Engineering is concerned with the design, improvement, and
installation of integrated systems of people, materials, information,
equipment and energy. It draws upon specialized knowledge and skill in the
mathematical, physical, and social sciences together with the principles and
methods of engineering analysis and design to specify, predict, and evaluate
the results to be obtained from such systems.”
(This formal definition of industrial engineering has been
adopted by the Institute of Industrial Engineers (IIE).)
Role of Industrial Engineering (IE):
To understand the role of industrial engineering (IE), it is helpful to
review the historical developments that shaped the field.
Principles of early engineering were first taught in military academies
and were concerned primarily with road and bridge construction and
with defenses.
Interrelated advancements in the fields of physics and mathematics
laid the groundwork for practical applications of mechanical
principles.
The first significant application of electrical science was the
development of the telegraph by Samuel Morse (approximately 1840).
Thomas Edison's invention of the carbon lamp (approximately 1880)
led to widespread use of electricity for lighting purposes.
The science of chemistry is concerned with understanding the nature
of matter and learning how to produce desirable changes in materials.
Fuels were needed for the new internal combustion engines.
Lubricants were needed for the rapidly growing collection of
mechanical devices. Protective coatings were needed for houses,
metal products, ships, and so forth.
Five major engineering disciplines (civil, chemical, electrical,
industrial, and mechanical) had emerged as branches of engineering
prior to the First World War.
Developments following the Second World War led to other engineering
disciplines, such as nuclear engineering, electronic engineering,
aeronautical engineering, and even computer engineering.
Chronology of Industrial Engineering
Charles Babbage visited factories in England and the United States in
the early 1800s and began a systematic recording of the details
involved in many factory operations.
He carefully measured the cost of performing each operation as
well as the time per operation required to manufacture a pound
of pins.
Babbage presented this information in a table, and thus
demonstrated that money could be saved by using women and
children to perform the lower-skilled operations.
The higher-skilled, higher-paid men need only perform those
operations requiring the higher skill levels.
Frederick W. Taylor is credited with recognizing the potential
improvements to be gained through analyzing the work content of a
job and designing the job for maximum efficiency.
Frank B. Gilbreth extended Taylor's work considerably. Gilbreth's
primary contribution was the identification, analysis and measurement
of fundamental motions involved in performing work.
Another early pioneer in industrial engineering was Henry L. Gantt,
who developed the so-called Gantt chart. The Gantt chart was a
significant contribution in that it provided a systematic graphical
procedure for pre-planning and scheduling work activities, reviewing
progress, and updating the schedule. Gantt charts are still in
widespread use today.
During the 1920s and 1930s much fundamental work was done on
economic aspects of managerial decisions, inventory problems,
incentive plans, factory layout problems, material handling problems,
and principles of organization.
Scope of Industrial Engineering (IE)
The scope of industrial engineering is evidenced by the wide range of
such activities as research in biotechnology, development of new
concepts of information processing, design of automated factories,
and operation of incentive wage plans.
Diversity of Industrial Engineering (IE)
Industrial engineering is a diverse discipline concerned with the
design, improvement, installation, and management of integrated
systems of people, materials, and equipment for all kinds of
manufacturing and service operations.
IE is concerned with performance measures and standards, research on
new products and product applications, ways to improve the use of scarce
resources, and many other problem-solving activities.
IE draws upon specialized knowledge and skill in the mathematical,
physical, and social sciences, together with a strong background in
engineering analysis and design and the management sciences to
specify, predict, and evaluate the performance from such systems.
What do Industrial Engineers do?
So what do industrial engineers do to increase productivity and assure
quality?
An industrial engineer can perform several activities to fulfil these tasks:
The processes and procedures of manufacturing or service activities can be
examined through process analysis.
He can use work study, which comprises method study and time study.
Method study is the study of how a job is performed, examining and
recording the activities, operators, equipment and materials involved in the
process. Time study records and rates the times of jobs being performed.
These activities are also referred to as operations management.
Furthermore, industrial engineering can involve inventory management to
make a manufacturing process more feasible and efficient. Industrial
engineers are also involved in design activities for products, equipment,
plants and workstations, where ergonomics and motion economy play a role.
Last but not least, the industrial engineer plays an important role in
developing quality management systems (which should, for example, comply
with the ISO 9000 standards). Here they often hold job titles such as Quality
Engineer or Quality Manager.
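The time study described above boils down to a simple calculation: the observed time is adjusted by the observer's performance rating to give a basic (normal) time, and allowances are then added to give a standard time. A minimal Python sketch, where the function names, the 100-rating convention, and the example figures are illustrative assumptions rather than anything specified in this assignment:

```python
def basic_time(observed_time, rating, standard_rating=100):
    """Basic (normal) time: observed time adjusted by the observer's
    performance rating against the standard pace (rated 100)."""
    return observed_time * rating / standard_rating

def standard_time(observed_time, rating, allowance_fraction):
    """Standard time = basic time plus relaxation/contingency allowances,
    expressed here as a fraction of basic time."""
    bt = basic_time(observed_time, rating)
    return bt * (1 + allowance_fraction)

# An operator observed at 0.50 min per cycle, rated 110 (faster than
# standard pace), with a 15% total allowance:
bt = basic_time(0.50, 110)            # 0.55 min
st = standard_time(0.50, 110, 0.15)   # 0.6325 min
```

Standard times computed this way feed directly into work measurement and incentive wage plans mentioned later in the document.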
Employment
An Industrial Engineer may be employed in almost any type of
industry, business or institution, from retail establishments to
manufacturing plants to government offices to hospitals.
Because their skills can be used in almost any type of organization,
industrial engineers are more widely distributed among industries than
other engineers.
For example, industrial engineers work in insurance companies,
banks, hospitals, retail organizations, airlines, government agencies,
consulting firms, transportation, construction, public utilities, social
service, electronics, personnel, sales, facilities design, manufacturing,
processing, and warehousing.
What activities?
Develop applications of new processing, automation, and control
technology.
Install data processing, management information, wage incentive
systems.
Develop performance standards, job evaluation, and wage and salary
programs.
Research new products and product applications.
Improve productivity through application of technology and human
factors.
Select operating processes and methods to do a task with proper tools
and equipment.
Design facilities, management systems, operating procedures.
Improve planning and allocation of scarce resources.
Enhance plant environment and quality of people's working life
Evaluate reliability and quality performance
Develop management control systems to aid in financial planning and
cost analysis
Implement office systems, procedures, and policies
Analyze complex business problems by operations research
Conduct organization studies, plant location surveys, and system
effectiveness studies
Study potential markets for goods and services, raw material sources,
labor supply, energy resources, financing, and taxes.
The evolution of the industrial and systems engineering profession has been
affected significantly by a number of related developments:
1. Impact of Operations Research
The development of industrial engineering has been greatly influenced
by the impact of an analysis approach called operations research.
This approach originated in England and the United States during the
Second World War and was aimed at solving difficult war-related problems
through the use of science, mathematics, behavioral science,
probability theory, and statistics.
2. Impact of Digital Computers
Digital computers permit the rapid and accurate handling of vast
quantities of data, thereby permitting the IE to design systems for
effectively managing and controlling large, complex operations.
The digital computer also permits the IE to construct computer
simulation models of manufacturing facilities and the like in order to
evaluate the effectiveness of alternative facility configurations,
different management policies, and other management considerations.
Computer simulation is emerging as the most widely used IE
technique. The development and widespread utilization of personal
computers is having an exciting impact on the practice of industrial
engineering.
3. Emergence of Service Industries
In the early days of the industrial engineering profession, IE practice
was applied almost exclusively in manufacturing organizations. After the
Second World War there was a growing awareness that the principles and
techniques of IE were also applicable in non-manufacturing
environments.
Thousands of Industrial Engineers are employed by government
organizations to increase efficiency, reduce paperwork, design
computerized management control systems, implement project
management techniques, monitor the quality and reliability of
vendor-supplied purchases, and perform many other functions.
Productivity
Productivity is a measure of output from a production process, per unit of
input. For example, labor productivity is typically measured as a ratio of
output per labor-hour, an input. Productivity may be conceived of as a
metric of the technical or engineering efficiency of production. As such, the
emphasis is on quantitative metrics of input, and sometimes output.
Productivity is distinct from metrics of allocative efficiency, which take into
account both the monetary value (price) of what is produced and the cost of
inputs used, and also distinct from metrics of profitability, which address the
difference between the revenues obtained from output and the expense
associated with consumption of inputs.
Productivity is the ratio of output to input and is normally represented in
the following way:
PRODUCTIVITY = OUTPUT / INPUT
OUTPUT refers to goods or services produced
INPUT refers to all resources used in producing the Output
This includes one, or all, of the following:
Land and Buildings
Materials
Machines
People
The use which is made of all of these resources combined, determines the
productivity of the enterprise.
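The ratio defined above can be computed directly. A minimal Python sketch, using hypothetical figures for illustration:

```python
def productivity(output, input_):
    """Productivity = output / input, e.g. units produced per labour-hour."""
    return output / input_

# Hypothetical example: 1,200 garments produced using 400 labour-hours.
labour_productivity = productivity(1200, 400)  # 3.0 garments per labour-hour
```

The same function applies whichever input (land, materials, machines, people) is being examined, as long as output and input are in consistent units.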
The Task of Management
Management is responsible for making sure that the best use is made of all
resources, i.e. land and buildings, materials, machines and men. This can be
achieved by coordinating the efforts of everyone in the organisation to
achieve the best results and to use the resources as effectively as possible.
Economic growth and productivity
Production is a process of combining various material inputs (stuff) and
immaterial inputs (plans, know-how) in order to make something for
consumption (the output). The methods of combining the inputs of
production in the process of making output are called technology.
Technology can be depicted mathematically by the production function
which describes the relation between input and output. The production
function can be used as a measure of relative performance when comparing
technologies.
The production function is a simple description of the mechanism of
economic growth. Economic growth is defined as any production increase of
a business or nation (whatever you are measuring). It is usually expressed as
an annual growth percentage depicting growth of the company output (per
entity) or the national product (per nation). Real economic growth (as
opposed to inflation) consists of two components. These components are an
increase in production input and an increase in productivity.
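The two components of real growth can be separated with a short calculation, using the identity that output growth is the product of input growth and productivity growth. The sketch below uses invented figures purely for illustration:

```python
def growth_components(output1, output2, input1, input2):
    """Decompose output growth into input growth and productivity growth,
    using the multiplicative identity:
        output2/output1 = (input2/input1) * (productivity2/productivity1)."""
    output_growth = output2 / output1
    input_growth = input2 / input1
    productivity_growth = (output2 / input2) / (output1 / input1)
    return output_growth, input_growth, productivity_growth

# Hypothetical: inputs grow 5% and productivity 2%, so output grows ~7.1%.
g_out, g_in, g_prod = growth_components(100.0, 107.1, 50.0, 52.5)
```

The identity holds by construction, so the two growth components always multiply back to the observed output growth.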
The figure illustrates an economic growth process (exaggerated for clarity).
The Value T2 (value at time 2) represents the growth in output from Value
T1 (value at time 1). Each time of measurement has its own graph of the
production function for that time (the straight lines). The output measured at
time 2 is greater than the output measured at time one for both of the
components of growth: an increase of inputs and an increase of productivity.
The portion of growth caused by the increase in inputs is shown on line 1
and does not change the relation between inputs and outputs. The portion of
growth caused by an increase in productivity is shown on line 2 with a
steeper slope. So increased productivity represents greater output per unit of
input.
Accordingly, an increase in productivity is characterised by a shift of the
production function (steepening slope) and a consequent change to the
output/input relation. The formula of total productivity is normally written as
follows:
Total productivity = Output quantity / Input quantity
According to this formula, changes in input and output have to be measured
inclusive of both quantitative and qualitative changes. In practice,
quantitative and qualitative changes take place when relative quantities and
relative prices of different input and output factors alter. In order to
accentuate qualitative changes in output and input, the formula of total
productivity can be written as follows:
Total productivity = Output quality and quantity / Input quality and
quantity
Relationship between higher productivity and higher living standards
Higher productivity can contribute to a higher standard of living and will
also provide:
• Larger supplies of consumer goods and capital goods at lower costs
and lower prices.
• Higher real earnings.
• Improvements in working and living conditions, including shorter
hours of work.
• A strengthening of the economic foundations on which the well being
of individuals is based.
Main processes of a company
A company can be divided into sub-processes in different ways; yet, the
following five are identified as main processes, each with a logic, objectives,
theory and key figures of its own. It is important to examine each of them
individually, yet, as a part of the whole, in order to be able to measure and
understand them. The main processes of a company are as follows:
real process
income distribution process
production process
monetary process
market value process
Productivity is created in the real process, productivity gains are distributed
in the income distribution process and these two processes constitute the
production process. The production process and its sub-processes, the real
process and income distribution process occur simultaneously, and only the
production process is identifiable and measurable by the traditional
accounting practices. The real process and income distribution process can
be identified and measured by extra calculation, and this is why they need to
be analysed separately in order to understand the logic of production
performance.
Real process generates the production output from input, and it can be
described by means of the production function. It refers to a series of events
in production in which production inputs of different quality and quantity are
combined into products of different quality and quantity. Products can be
physical goods, immaterial services and most often combinations of both.
The characteristics that the manufacturer builds into the product represent
surplus value to the consumer, and on the basis of the price this value is
shared by the consumer and the producer in the marketplace. This is the
mechanism through which surplus value accrues to the consumer and the
producer alike. Surplus value to the producer is a result of the real
process, and measured proportionally it means productivity.
Income distribution process of the production refers to a series of events in
which the unit prices of constant-quality products and inputs alter causing a
change in income distribution among those participating in the exchange.
The magnitude of the change in income distribution is directly proportionate
to the change in prices of the output and inputs and to their quantities.
Productivity gains are distributed, for example, to customers as lower
product sales prices or to staff as higher pay. Davis has discussed[4]
the phenomenon of productivity, the measurement of productivity, the
distribution of productivity gains, and how to measure such gains. He refers
to an article[5] suggesting that the measurement of productivity should be
developed so that it "will indicate increases or decreases in the
productivity of the company and also the distribution of the 'fruits of
production' among all parties at interest". According to Davis, the price
system is a mechanism through which productivity gains are distributed, and
besides the business enterprise, the receiving parties may consist of its
customers, staff and the suppliers of production inputs. In this article, the
concept of "distribution of the fruits of production" by Davis is simply
referred to as production income distribution, or, shorter still, as
distribution.
The production process consists of the real process and the income
distribution process. A result and a criterion of success of the production
process is profitability. The profitability of production is the share of the real
process result the producer has been able to keep to himself in the income
distribution process. Factors describing the production process are the
components of profitability, i.e., returns and costs. They differ from the
factors of the real process in that the components of profitability are given at
nominal prices whereas in the real process the factors are at periodically
fixed prices.
Monetary process refers to events related to financing the business. Market
value process refers to a series of events in which investors determine the
market value of the company in the investment markets.
Surplus value as a measure of production profitability
Success for a going concern can be measured in many ways, and there are no
criteria that are universally applicable to success. Nevertheless, there is
one criterion by which we can generalise the rate of success in production.
This criterion is the ability to produce surplus value. As a criterion of
profitability, surplus value refers to the difference between returns and costs,
taking into consideration the costs of equity in addition to the costs included
in the profit and loss statement as usual. Surplus value indicates that the
output has more value than the sacrifice made for it, in other words, the
output value is higher than the value (production costs) of the used inputs. If
the surplus value is positive, the owner's profit expectation has been
surpassed.
The table presents a surplus value calculation. This basic example is a
simplified profitability calculation used for illustration and modelling. Even
as reduced, it comprises all phenomena of a real measuring situation and
most importantly the change in the output-input mix between two periods.
Hence, the basic example works as an illustrative “scale model” of
production without any features of a real measuring situation being lost. In
practice, there may be hundreds of products and inputs but the logic of
measuring does not differ from that presented in the basic example.
Both the absolute and relative surplus value have been calculated in the
example. Absolute value is the difference of the output and input values and
the relative value is their relation, respectively. The surplus value calculation
in the example is at a nominal price, calculated at the market price of each
period.
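The absolute and relative surplus value described above reduce to a subtraction and a division. A minimal Python sketch with hypothetical output and input values at nominal (market) prices:

```python
def surplus_value(output_value, input_value):
    """Absolute surplus value = output value - input value;
    relative surplus value = output value / input value."""
    return output_value - input_value, output_value / input_value

# Hypothetical period: outputs worth 1000, inputs (including cost of
# equity) worth 800.
absolute, relative = surplus_value(1000.0, 800.0)  # 200.0 and 1.25
```

A positive absolute value (equivalently, a relative value above 1) indicates that the owner's profit expectation has been surpassed.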
Productivity model
The next step is to describe a productivity model[6] with the help of which it is
possible to calculate the results of the real process, income distribution
process and production process. The starting point is a profitability
calculation using surplus value as a criterion of profitability. The surplus
value calculation is the only valid measure for understanding the connection
between profitability and productivity or understanding the connection
between real process and production process. A valid measurement of total
productivity necessitates considering all production inputs, and the surplus
value calculation is the only calculation to conform to the requirement.
The process of calculating is best understood by applying the principle of
ceteris paribus, i.e. "all other things being the same": only the
impact of one changing factor at a time is introduced to the phenomenon being
examined. Therefore, the calculation can be presented as a process
advancing step by step. First, the impacts of the income distribution process
are calculated, and then, the impacts of the real process on the profitability
of the production.
The first step of the calculation is to separate the impacts of the real process
and the income distribution process, respectively, from the change in
profitability (285.12 – 266.00 = 19.12). This takes place by simply creating
one auxiliary column (4) in which a surplus value calculation is compiled
using the quantities of Period 1 and the prices of Period 2. In the resulting
profitability calculation, Columns 3 and 4 depict the impact of a change in
income distribution process on the profitability and in Columns 4 and 7 the
impact of a change in real process on the profitability.
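The auxiliary-column technique can be sketched numerically. In the example below, all quantities and prices are invented for illustration (one product, two inputs); the point is that valuing Period 1 quantities at Period 2 prices separates the income distribution (price) effect from the real process (quantity) effect:

```python
def surplus(out_qty, out_price, in_qty, in_price):
    """Surplus value = output value minus the total value of the inputs."""
    return out_qty * out_price - sum(q * p for q, p in zip(in_qty, in_price))

# Hypothetical two-period data.
s1 = surplus(100, 10.0, [50, 20], [8.0, 5.0])   # Period 1 quantities, prices
s2 = surplus(110, 10.5, [52, 21], [8.2, 5.1])   # Period 2 quantities, prices

# Auxiliary column: Period 1 quantities valued at Period 2 prices.
aux = surplus(100, 10.5, [50, 20], [8.2, 5.1])

income_distribution_effect = aux - s1   # price changes only
real_process_effect = s2 - aux          # quantity (real process) changes only
total_change = s2 - s1                  # the two effects sum to the total
```

By construction the two effects add up to the total change in profitability, mirroring the step-by-step separation described in the text.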
Illustration of the real and income distribution processes
Measurement results can be illustrated by models and graphic presentations.
The following figure illustrates the connections between the processes by
means of indexes describing the change. A presentation by means of an
index is illustrative because the magnitudes of the changes are
commensurate. Figures are from the above calculation example of the
production model. (van Loggerenberg et al. 1982; Saari 2006).
The nine most central key figures depicting changes in production
performance can be presented as shown in the figure. Vertical lines depict the
key figures of the real process, production process and income distribution
process. Key figures in the production process are a result of the real process
and the income distribution process. Horizontal lines show the changes in
input and output processes and their impact on profitability. The logic
behind the figure is simple. Squares in the corners refer to initial calculation
data. Profitability figures are obtained by dividing the output figures by the
input figures in each process. After this, the production process figures are
obtained by multiplying the figures of the real and income distribution
process.
Depicting the development by time series
Development in the real process, income distribution process and production
process can be illustrated by means of time series (Kendrick 1984; Saari
2006). The principle of a time series is to describe, for example, the
profitability of production annually by means of a relative surplus value and
also to explain how profitability was produced as a consequence of
productivity development and income distribution. A time series can be
composed using the chain indexes as seen in the following.
Now the intention is to draw up the time series for the ten periods in order to
express the annual profitability of production with the help of productivity and
income distribution development. With the time series it is possible to prove
that productivity of the real process is the distributable result of production,
and profitability is the share remaining in the company after income
distribution between the company and interested parties participating in the
exchange.
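Chain indexes of the kind mentioned can be built by multiplying annual change indexes cumulatively. A short Python sketch; the index values are fictional, echoing the text's note that the example figures are illustrative:

```python
from itertools import accumulate
from operator import mul

# Hypothetical annual change indexes (1.015 = +1.5%). The profitability
# change each year is the product of the productivity change and the
# income distribution change.
productivity_idx  = [1.015, 1.020, 1.010]
distribution_idx  = [0.995, 1.000, 1.005]
profitability_idx = [p * d for p, d in zip(productivity_idx, distribution_idx)]

# Chain the annual indexes into a time series relative to the base year.
chained = list(accumulate(profitability_idx, mul))
```

Each entry of `chained` expresses profitability relative to the base year, so the series directly shows how profitability was produced by productivity development and income distribution.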
The graph shows how profitability depends on the development of
productivity and income distribution. The productivity figures are fictional, but in
practice they are perfectly feasible, indicating an annual growth of 1.5 per
cent on average. Growth potentials in productivity vary greatly by industry,
and as a whole, they are directly proportionate to the technical development
in the branch. Fast-developing industries attain stronger growth in
productivity. This is a traditional way of thinking. Today we understand that
human and social capitals together with competition have a significant
impact on productivity growth. In any case, productivity grows in small
steps. By the accurate measurement of productivity, it is possible to
appreciate these small changes and create an organisation culture where
continuous improvement is a common value.
Measuring and interpreting partial productivity
Measurement of partial productivity refers to measurement solutions
which do not meet the requirements of total productivity measurement, yet
are practicable as indicators of total productivity. In practice,
measurement in production means measures of partial productivity. In that
case, the objects of measurement are components of total productivity, and,
interpreted correctly, these components are indicative of productivity
development. The term partial productivity illustrates well the fact that
total productivity is only measured partially, or approximately. In a way,
such measurements are defective but, by understanding the logic of total
productivity, it is possible to interpret the results of partial
productivity correctly and to benefit from them in practical situations.
Typical solutions of partial productivity are:
1. Single-factor productivity
2. Value-added productivity
3. Unit cost accounting
4. Efficiency ratios
5. Managerial control ratio system
Single-factor productivity refers to the measurement of productivity as a
ratio of output and one input factor. The best-known measure of
single-factor productivity is output per unit of work input, describing
labour productivity. Sometimes it is practical to employ value added as the
output. Productivity measured in this way is called value-added productivity. Also,
productivity can be examined in cost accounting using Unit costs. Then it is
mostly a question of exploiting data from standard cost accounting for
productivity measurements. Efficiency ratios, which tell something about the
ratio between the value produced and the sacrifices made for it, are available
in large numbers. Managerial control ratio systems are composed of single
measures which are interpreted in parallel with other measures related to the
subject. Ratios may be related to any success factor of the area of
responsibility, such as profitability, quality, position on the market, etc.
Ratios may be combined to form one whole using simple rules, hence,
creating a key figure system.
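The first two solutions in the list can be computed directly. In the Python fragment below, the function names and figures are illustrative assumptions, not prescribed by the text:

```python
def single_factor_productivity(output, factor_input):
    """Output per unit of a single input, e.g. units per labour-hour."""
    return output / factor_input

def value_added_productivity(revenue, purchased_inputs, labour_hours):
    """Value added (revenue minus purchased materials and services)
    per labour-hour."""
    return (revenue - purchased_inputs) / labour_hours

lp  = single_factor_productivity(500, 250)               # 2.0 units per hour
vap = value_added_productivity(120000.0, 45000.0, 2500)  # 30.0 per hour
```

Both are partial measures: each deliberately excludes the other inputs, which is exactly the trade-off the text goes on to discuss.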
The measures of partial productivity are physical measures, nominal price
value measures and fixed price value measures. These measures differ from
one another by the variables they measure and by the variables excluded
from measurement. Excluding variables from measurement makes it
possible to focus the measurement better on a given variable, yet this means
a narrower approach. The table below was compiled to compare the
basic types of measurement. The first column presents the measure types,
the second the variables being measured, and the third column gives the
variables excluded from measurement.
National productivity
Productivity measures are often used to indicate the capacity of a nation to
harness its human and physical resources to generate economic growth.
Productivity measures are key indicators of economic performance and there
is strong interest in comparing them internationally. The OECD publishes an
annual Compendium of Productivity Indicators[7] that includes both labour
and multi-factor measures of productivity.
Labour productivity and multi-factor productivity
Labour productivity is the ratio of (the real value of) output to the input of
labour. Where possible, hours worked, rather than the numbers of
employees, is used as the measure of labour input. With an increase in part-
time employment, hours worked provides the more accurate measure of
labour input. Labour productivity should be interpreted very carefully if used
as a measure of efficiency. In particular, it reflects more than just the
efficiency or productivity of workers. Labour productivity is the ratio of
output to labour input; and output is influenced by many factors that are
outside of workers' influence, including the nature and amount of capital
equipment that is available, the introduction of new technologies, and
management practices.
Multifactor productivity is the ratio of the real value of output to the
combined input of labour and capital. Sometimes this measure is referred to
as total factor productivity. In principle, multifactor productivity is a better
indicator of efficiency. It measures how efficiently and effectively the main
factors of production - labour and capital - combine to generate output.
However, in some circumstances, robust measures of capital input can be
hard to find.
Labour productivity and multifactor productivity both increase over the long
term. Usually, the growth in labour productivity exceeds the growth in
multifactor productivity, reflecting the influence of relatively rapid growth
of capital on labour productivity.
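Multifactor productivity is often operationalised by dividing output by a Cobb-Douglas combination of labour and capital weighted by their cost shares; this particular functional form is a common statistical convention, not something the text prescribes. A hedged sketch with invented figures:

```python
def multifactor_productivity(output, labour, capital, labour_share=0.7):
    """Output divided by a Cobb-Douglas combination of labour and capital
    inputs, weighted by their (assumed) cost shares."""
    combined_input = (labour ** labour_share) * (capital ** (1 - labour_share))
    return output / combined_input

# Hypothetical: output 1000, labour input 100, capital input 200,
# labour cost share 0.7.
mfp = multifactor_productivity(1000.0, 100.0, 200.0)
```

With `labour_share=1.0` the measure collapses to labour productivity, illustrating the text's point that labour productivity is a special (partial) case of the combined measure.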
Importance of national productivity growth
Productivity growth is a crucial source of growth in living standards.
Productivity growth means more value is added in production and this
means more income is available to be distributed.
At a firm or industry level, the benefits of productivity growth can be
distributed in a number of different ways:
to the workforce through better wages and conditions;
to shareholders and superannuation funds through increased profits
and dividend distributions;
to customers through lower prices;
to the environment through more stringent environmental protection;
and
to governments through increases in tax payments (which can be used
to fund social and environmental programs).
Productivity growth is important to the firm because it means that it can
meet its (perhaps growing) obligations to workers, shareholders, and
governments (taxes and regulation), and still remain competitive or even
improve its competitiveness in the market place.
There are essentially two ways to promote growth in output:
• bring additional inputs into production; or
• increase productivity.
Adding more inputs will not increase the income earned per unit of input
(unless there are increasing returns to scale). In fact, it is likely to mean
lower average wages and lower rates of profit.
But, when there is productivity growth, even the existing commitment of
resources generates more output and income. Income generated per unit of
input increases. Additional resources are also attracted into production and
can be profitably employed.
At the national level, productivity growth raises living standards because
more real income improves people's ability to purchase goods and services
(whether they are necessities or luxuries), enjoy leisure, improve housing
and education and contribute to social and environmental programs.
"Productivity isn't everything, but in the long run it is almost everything. A
country's ability to improve its standard of living over time depends almost
entirely on its ability to raise its output per worker. World War II veterans
came home to an economy that doubled its productivity over the next 25
years; as a result, they found themselves achieving living standards their
parents had never imagined. Vietnam veterans came home to an economy
that raised its productivity less than 10 percent in 15 years; as a result, they
found themselves living no better - and in many cases worse - than their
parents".
"Over long periods of time, small differences in rates of productivity growth
compound, like interest in a bank account, and can make an enormous
difference to a society's prosperity. Nothing contributes more to reduction of
poverty, to increases in leisure, and to the country's ability to finance
education, public health, environment and the arts".
Sources of productivity growth
In the most immediate sense, productivity is determined by:
• the available technology or know-how for converting resources into
outputs desired in an economy; and
• the way in which resources are organised in firms and industries to
produce goods and services.
Average productivity can improve as firms move toward the best available
technology, as plants and firms with poor productivity performance cease
operation, and as new technologies become available. Firms can change
organisational structures (e.g. core functions and supplier relationships),
management systems and work arrangements to take the best advantage of
new technologies and changing market opportunities. A nation's average
productivity level can also be affected by the movement of resources from
low-productivity to high-productivity industries and activities.
National productivity growth stems from a complex interaction of factors.
As just outlined, some of the most important immediate factors include
technological change, organisational change, industry restructuring and
resource reallocation, as well as economies of scale and scope. Over time,
other factors such as research and development and innovative effort, the
development of human capital through education, and incentives from
stronger competition promote the search for productivity improvements and
the ability to achieve them. Ultimately, many policy, institutional and
cultural factors determine a nation's success in improving productivity.
Productivity studies
Productivity studies analyze technical processes and engineering
relationships such as how much of an output can be produced in a specified
period of time (see also Taylorism). It is related to the concept of efficiency.
While productivity is the amount of output produced relative to the amount
of resources (time and money) that go into the production, efficiency is the
value of output relative to the cost of inputs used. Productivity improves
when the quantity of output increases relative to the quantity of input.
Efficiency improves when the cost of inputs used is reduced relative to the
value of output. A change in the price of inputs might lead a firm to change
the mix of inputs used, in order to reduce the cost of inputs used, and
improve efficiency, without actually increasing the quantity of output
relative to the quantity of inputs. A change in technology, however, might
allow a firm to increase output with a given quantity of inputs; such an
increase in productivity would be more technically efficient, but might not
reflect any change in allocative efficiency.
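The productivity/efficiency distinction can be made concrete with a small numeric sketch; all quantities and prices here are invented for illustration:

```python
# Hypothetical figures illustrating the productivity/efficiency distinction.

def productivity(output_qty, input_qty):
    """Quantity of output relative to quantity of input."""
    return output_qty / input_qty

def efficiency(output_value, input_cost):
    """Value of output relative to cost of inputs used."""
    return output_value / input_cost

# Baseline: 1,000 units from 100 machine-hours, inputs costing 2,000,
# each unit worth 5.
p0 = productivity(1000, 100)        # 10.0 units per hour
e0 = efficiency(1000 * 5, 2000)     # 2.5 value per unit of cost

# A cheaper input mix: the same 1,000 units and 100 hours, but input
# cost falls to 1,600. Productivity is unchanged; efficiency improves.
p1 = productivity(1000, 100)        # still 10.0
e1 = efficiency(1000 * 5, 1600)     # 3.125
```

This mirrors the text: the second scenario improves efficiency through input prices alone, with no change in the quantity of output per quantity of input.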
The Ishikawa diagram, and related business process modeling, may be
useful tools for studying productivity. These methods list process inputs
such as people, methods, machines, energy and materials and the
environment.
Energy efficiency
Energy efficiency has played a significant role in increasing productivity in
the past; however, most industrial processes have exhausted the easy
efficiency gains. The early Newcomen steam engine was 1% efficient,
Watt's improvements increased efficiency to 4%, and today's steam turbines
may have efficiencies in the 40% range.
Increases in productivity
Companies can increase productivity in a variety of ways. The most obvious
methods involve automation and computerization which minimize the tasks
that must be performed by employees. Recently, less obvious techniques are
being employed that involve ergonomic design and worker comfort. A
comfortable employee, the theory maintains, can produce more than a
counterpart who struggles through the day. In fact, some studies claim that
measures such as raising workplace temperature can have a drastic effect on
office productivity. Experiments done by the Japanese Shiseido corporation
also suggested that productivity could be increased by means of perfuming
or deodorising the air conditioning system of workplaces. Increases in
productivity also can influence society more broadly, by improving living
standards, and creating income. They are central to the process generating
economic growth and capital accumulation. A new theory suggests that the
increased contribution of productivity to economic growth is largely
due to the relatively high price of technology and its exportation via trade, as
well as domestic use due to high demand, rather than to micro-
economic efficiency theories, which tend to downsize economic growth and
reduce labor productivity for the most part. Many economists see the
economic expansion of the later 1990s in the United States as being allowed
by the massive increase in worker productivity that occurred during that
period. The growth in aggregate supply allowed increases in aggregate
demand and decreases in unemployment at the same time that inflation
remained stable. Others emphasize drastic changes in patterns of social
behaviour resulting from new communication technologies and changed
male-female relationships.
Labor productivity
Labour productivity is generally speaking held to be the same as the
"average product of labor" (average output per worker or per worker-hour,
an output which could be measured in physical terms or in price terms). It is
not the same as the marginal product of labor, which refers to the increase in
output that results from a corresponding increase in labor input. The
qualitative aspects of labor productivity such as creativity, innovation,
teamwork, improved quality of work and the effects on other areas in a
company are more difficult to measure.
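The average/marginal distinction above can be shown with a minimal numeric sketch, using an invented output schedule:

```python
# Hypothetical output schedule: total output at each workforce size.
output_by_workers = {10: 500, 11: 540}

def average_product(workers):
    """Average product of labor: output per worker."""
    return output_by_workers[workers] / workers

def marginal_product(workers):
    """Marginal product: the extra output from the last worker added."""
    return output_by_workers[workers] - output_by_workers[workers - 1]

ap = average_product(10)     # 50.0 units per worker
mp = marginal_product(11)    # 40 extra units from the 11th worker
```

Here the marginal product (40) is below the average product (50), so adding the eleventh worker pulls the average down, which is precisely why the two measures must not be conflated.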
Marx on productivity
In Karl Marx's labor theory of value, the concept of capital productivity is
rejected as an instance of reification, and replaced with the concepts of the
organic composition of capital and the value product of labor. A sharp
distinction is drawn by Marx between the productivity of labor in terms of
physical outputs produced and the value or price of those outputs. A small
physical output might create a large value, while a large physical output
might create only a small value - with obvious consequences for the way the
labor producing it would be rewarded in the marketplace. Moreover if a
large output value was created by people, this did not necessarily have
anything to do with their physical productivity; it could be just due to the
favorable valuation of that output when traded in markets. Therefore, merely
focusing on an output value realised, to assess productivity, might lead to
mistaken conclusions. In general, Marx rejected the possibility of a concept
of productivity that would be completely neutral and unbiased by the
interests or norms of different social classes. At best, one could say that
objectively, some practices in a society were generally regarded as more or
less productive, or as improving productivity - irrespective of whether this
was really true. In other words, productivity was always interpreted from
some definite point of view. Typically, Marx suggested in his critique of
political economy, only the benefits of raising productivity were focused on,
rather than the human (or environmental) costs involved. Thus, Marx could
even find some sympathy for the Luddites, and he introduced the critical
concept of the rate of exploitation of human labour power to balance the
obvious economic progress resulting from an increase in the productive
forces of labor.
Secular decline in productivity
U.S. productivity growth has been in long-term decline. U.S. GDP growth
has never returned to the 4%-plus rates of the pre-World War I decades.
Resource depletion decreases productivity as more effort in the form of
labor, materials and energy is required for extraction and processing. For
example, early U.S. onshore oil production yielded 100 barrels per foot
drilled, whereas by the 1990s the yield was one barrel per foot.
The long term decline in productivity may be viewed as a Kondratiev wave
(see: Peak progress: 1870 to 1914) phenomenon. Modern Kondratiev wave
research gives a clearer link between actual historical innovation and
economic growth.
Productivity paradox
Despite the proliferation of computers, productivity growth was relatively
slow from the 1970s through the early 1990s. One hypothesis to explain
this is that computers are productive, yet their productive gains are realized
only after a lag period, during which complementary capital investments
must be developed to allow for the use of computers to their full potential.
Another hypothesis states that computers are simply not very productivity-
enhancing because they require time, a scarce complementary human input.
This theory holds that although computers perform a variety of tasks, these
tasks are not done in any particularly new or efficient manner, but rather
they are only done faster. It has also been argued that computer automation
just facilitates ever more complex bureaucracies and regulation, and
therefore produces a net reduction in real productivity. Another explanation
is that knowledge work productivity and information-technology (IT)
productivity are linked, and that without improving knowledge work
productivity, IT productivity does not have a governing mechanism.
Factors Tending to Reduce Productivity
Excess Work Content added by Defects in Design or Specification of
Product
• The bad design of the garment prevents the use of the most economic
methods of sewing.
• The lack of standardisation prevents the use of high speed production
processes.
• Incorrect quality standards cause unnecessary work.
• The design of the garment may mean that an excessive amount of
fabric has to be wasted in cutting, due to the shape of the pattern parts.
• Large size ranges and colours reduce the number of sizes which can
be marked in, thereby increasing the cloth usage per size.
Excess Work Content added by Inefficient Methods of Manufacture or
Operation
• The use of the wrong machine can cause reduced output.
• If the method is not being adhered to, then productivity will be reduced.
• Bad workplace layout causes wasted movement.
• An operator's bad working methods cause wasted time and effort.
Ineffective Time due to Shortcomings of Management/Supervisors
• Excess Product variety adds to idle time, due to short runs.
• The lack of standardisation adds idle time due to changeovers.
• Design changes add ineffective time due to stoppages for re-training.
• Bad planning of the work and orders reduces efficiency.
• Lack of fabric due to bad planning causes waiting time.
• Badly maintained machines cause idle time.
• Machines in bad condition cause bad quality.
• Bad working conditions prevent the operator from working steadily and
feeling comfortable and at home.
• Accidents cause lost time.
• Poor service to operators causes delays, such as waiting for cotton.
Ineffective time within the control of the Operator
• Absence, lateness and laziness reduce productivity.
• Careless workmanship causes bad quality.
• Accidents due to carelessness cause absenteeism.
WORK MEASUREMENT
Work Measurement is a term which covers several different ways of finding
out how long a job or part of a job should take to complete. It can be defined
as the systematic determination, through the use of various techniques, of the
amount of effective physical and mental work in terms of work units in a
specified task. The work units usually are given in standard minutes or
standard hours.
Why should we need to know how long a job should take? The answer to
this question lies in the importance of time in our everyday life. We need to
know how long it should take to walk to the train station in the morning, how
to schedule the day's work, and even when to take the dinner out of the
oven.
In the business world these standard times are needed for:
1.) planning the work of a workforce,
2.) manning jobs, i.e. deciding how many workers are needed to complete
certain jobs,
3.) scheduling the tasks allocated to people,
4.) costing the work for estimating contract prices and costing the labour
content in general,
5.) calculating the efficiency or productivity of workers - and from this:
6.) providing fair returns on possible incentive bonus payment schemes.
On what are these standard times set? They are set, not on how long a
certain individual would take to complete a task but on how long a trained,
experienced worker would take to do the task at a defined level of pace or
performance.
Who sets these standard times? Specially trained and qualified observers set
these times, using the most appropriate methods or techniques for the
purpose i.e. "horses for courses".
How it is done depends on the circumstances that obtain. The toolkit
available to the comprehensively trained observer is described below.
Selecting the most appropriate methods of work measurement
The method chosen for each individual situation to be measured depends on
several factors which include:
a.) the length of the job to be measured, in time units
b.) the precision which is appropriate for the type of work, in terms of time
units (i.e. should it be in minutes, hundredths or thousandths of a minute)
c.) the general cycle-time of the work, i.e. does it take seconds, minutes or
days to complete
The length of time necessary for the completion of the range of jobs can
vary from a few seconds in highly repetitive factory work to several weeks
or months for large projects such as major shutdown maintenance work on
an oil refinery. It is quite clear that using a stop-watch, for example, on the
latter work would take several man-years of timing to measure! Thus, more
"overall", large-scale methods of timing must be employed.
The precision is an important factor, too. This can vary from setting times of
the order of "to the nearest thousandth of a minute" (e.g. short cycle factory
work) to the other end of the scale of "to the nearest week" (e.g. for large
project work).
These are the dominant factors that affect the choice of method of
measurement.
The methods
PMTS.
At the "precision" end of the scale is a group of methods known as
predetermined motion time systems that use measurement units in ten
thousandths (0.0001) of a minute or hundred-thousandths of an hour
(0.00001 hour).
The resulting standard times can be used directly, for very short-cycle work
of around one minute total duration such as small assembly work. However,
they often are used to generate times for regularly used basic tasks, such as
assembling or disassembling nuts and bolts, using a screwdriver, and similar.
Tasks of this type are filed in standard or synthetic data-banks.
Estimating.
At the other end of the scale (long-cycle and project work) we need
something which is quick to use. Such a method is estimating. This can
exist in three main forms.
a.) Analytical estimating relies on the experience and judgement of the
estimator. It is just a case of weighing up the work content and, using this
experience, stating a probable time for completion, such as "this job will
take about eight days to complete".
b.) Category estimating. This is a form of range estimating and requires a
knowledge of the work. Estimators may not feel comfortable with overall,
analytical estimates upon which may depend the outlay of a great deal of
money. They often prefer giving a range estimate such as "this job should
take between 12 weeks and 14 weeks to complete", which provides a safety
net should things go wrong. Such ranges are not just picked upon at random
but are statistically calculated and based on probability theory.
c.) Comparative estimating. This is another example of range estimating.
Again, estimators rely on experience of the work in order to produce
estimates. This experience can be augmented by the provision of each time-
range with a few typical, descriptive, jobs that would guide estimators to the
most appropriate range. The estimator would compare the work to be
estimated with those in the various ranges until the most appropriate fit is
found.
Timing.
The intermediate method between the two groups above, is timing the work
in some way, usually with a stop-watch or computerised electronic study
board. This method is retrospective in that the job must be seen in action in
order to be timed whereas the other methods are prospective and can be
used for timing jobs before they start.
The observer times each element of the work and obtains times that the
observed operator takes to do the elements. Each timing is adjusted (rated)
by the pace at which the operator was working as assessed by the observer.
This produces basic times for the elements and hence the whole job, which
are independent of the operator and can be used as the time for a trained,
experienced worker to carry out the same elements.
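The rating adjustment just described is commonly applied on a 0-100 scale with 100 representing standard pace; assuming that convention (the times and ratings below are invented), the arithmetic is:

```python
# Rating observed times to basic times on an assumed 0-100 scale,
# where 100 represents standard pace.

def basic_time(observed_time, rating):
    """Basic time = observed time x (observed rating / standard rating)."""
    return observed_time * rating / 100

# An element observed at 0.30 min while the operator worked at an
# assessed pace of 110 (a little faster than standard):
bt = basic_time(0.30, 110)           # about 0.33 basic minutes

# Summing the rated elements gives the basic time for the whole job:
elements = [(0.30, 110), (0.45, 90), (0.20, 100)]  # (observed min, rating)
job_basic = round(sum(basic_time(t, r) for t, r in elements), 3)
```

Because each timing is scaled by the assessed pace, the resulting basic times are independent of the particular operator observed, which is the point made in the paragraph above.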
Another method of assessing the work is using activity sampling and rated
activity sampling. This is a method based on the observer making snap
observations at random or systematic sample times, observing what the
operator is (or operators are) doing at the times of those observations.
Models:
A most useful method for standard or synthetic data-banks of job or element
times is using computer models of the jobs. These are generated as
mathematical formulae in which the observed data are inserted to compile a
time for completion of the task or project. It is a useful method for recycling
time standards for elements of basic work over and over again, only
changing the values of the variables to suit each project.
ACTIVITY SAMPLING
What is it?
Activity Sampling is a statistical technique that can be used as a means for
collecting data. It is defined by BS 3138:41008 as:
A technique in which a large number of observations are made over a period
of time of one group of machines, processes or workers. Each observation
records what is happening at that instant and the percentage of observations
recorded for a particular activity or delay is a measure of the percentage of
time during which that activity or delay occurs.
It is normally used for collecting information on the percentages of time
spent on activities, without the need to devote the time that would otherwise
be required for any continuous observation.
One of the great advantages of this technique is that it enables lengthy
activities or groups of activities to be studied economically and in a way that
produces statistically accurate data.
Fixed and Random Interval Sampling
Activity Sampling can be carried out at random intervals or fixed intervals.
Random activity sampling is where the intervals between observations are
selected at random e.g. from a table of random numbers. Fixed interval
activity sampling is where the same interval exists between observations. A
decision will need to be made on which of these two approaches is to be
chosen. A fixed interval is usually chosen where activities are performed by
a person or group of people who have a degree of control over what they do
and when they do it. Random intervals will normally be used where there are
a series of automated tasks or activities as part of a process that have to
be performed in a pre-established regular pattern. If fixed interval sampling
were to be used in this situation there is a danger that the sampling point
would continue to occur at the same point in the activity cycle.
Confidence Levels
Remember, that activity sampling is used for assessing the percentage of
time spent on activities.
Because activity sampling conforms to the binomial distribution it is
possible to use a calculation to determine how many observations will be
needed to operate within specified limits of accuracy.
The formula for the number of observations, N, is as follows:
N = 4 x p x (100 - p) / L²
where p is the estimated % of time spent on the activity, and
L is the limit of error, expressed as a %.
Once the above calculation has been completed the observations can begin
and activities are recorded at the agreed time intervals. When they have been
completed a further calculation can be used to determine the error rate, as
follows:
Error Rate = ± 2 x √( p x (100 - p) / N )
where N is the number of observations.
This is very much an overview to the topic of activity sampling, with a
definition of what it is, its advantage over continuous observation and the
formulae that can be used to establish the confidence levels that can be
obtained.
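The two formulae can be applied directly. As a sketch, take an activity estimated to occupy 25% of the time, to be measured within ±2%:

```python
import math

def observations_needed(p, limit):
    """N = 4 x p x (100 - p) / L^2, with p and L both in percent."""
    return 4 * p * (100 - p) / limit ** 2

def error_limit(p, n):
    """Achieved limit of error: +/- 2 x sqrt(p x (100 - p) / N), in percent."""
    return 2 * math.sqrt(p * (100 - p) / n)

# An activity estimated at p = 25% of the time, wanted within +/- 2%:
n = observations_needed(25, 2)    # 1875.0 observations
e = error_limit(25, 1875)         # back to +/- 2.0%, as expected
```

Running the error formula on the computed number of observations recovers the chosen limit, which is a useful consistency check on the two expressions.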
DATA COLLECTION
What is/are data?
One definition of data is: "known facts or things used as a basis for inference
or reckoning" - The OED.
Another is: "facts given from which others may be inferred" - Chambers
Dictionary.
The term "data" more commonly is another word for "statistics" or
numerical facts. The UK Prime Minister, Disraeli, is quoted as saying,
"There are lies, damned lies and statistics". Indeed, statistical data can be
presented to mean what you wish them to mean. ("Data" is a plural word, the
singular being datum. However, through American influence it is acceptable
to use "data" in the singular form rather than "data are".)
Data into knowledge - a recap on fundamentals
Data are facts, for example the number of items counted, or measurements
of these items. To be of use we need to transform data into knowledge so
that inferences can be made from them, such as decisions as to whether or
not a component is capable of carrying out its allotted function.
Forms of data
Data can be separated into three categories (variables):
a.) discrete variables, which are numerical and can only be particular
numbers, such as the number of workers in an organization (i.e. they are
counted in single units);
b.) continuous variables, which are dimensions of items in units of
measurement such as metres, litres, volts and other units of length, volume,
time and so on;
c.) attribute variables, which are descriptive, e.g. a machine "on" or "off", or
an employee absent or present.
Important: It is crucial, when dealing with any problem in which statistical
method is used, to be able to differentiate between the three types of data,
because the distinction usually dictates which form of analysis is
appropriate.
The main phases in the collection of data using sampling methods are:
1. the purpose or objective for collecting the data,
2. identification of the entire "population" from which the data are to be
collected (e.g. a sampling frame),
3. decisions on:
o method of collection, or how the data are to be collected,
o sample size (i.e. how many readings to collect), and
4. validation of the results, this being a vital part of the collection/analysis
process.
Note: whereas "population" once referred to people, the term is now used to
describe the whole situation to be sampled.
Sampling
One important thing to bear in mind is that something in the system must be
random. This could be the situation itself being random, or a sampling method
which contains a random element for picking the components of the sample.
Some of these methods follow.
The choice of sampling method depends on the type of data being sampled.
Random sampling:
A common method is simple random sampling or the lottery method. One
of the most convenient ways is to allocate numbers to all components of the
population to be sampled and obtain the required quantity of numbers to
constitute the sample size. The ways of obtaining a random sample of
numbers range from drawing numbers blindly "from a hat", (or the
mechanized version of agitated balls being ejected from a drum), to the use
of computer generated numbers.
Systematic sampling.
Often known as the constant skip method, this form of sampling is based on
taking every nth reading from the random population. For example, in a
survey, taking every 9th house in a street (numbers 3, 12, 21,
30, 39 and so on). Care must be taken to avoid bias: in the UK, taking
every 10th house would mean they were all on the same side of the road, and
this might be significant.
Stratified sampling.
In order to ensure that all groups in a population are properly represented,
this method separates the population into strata and allocates proportional
representation to each stratum. With people, the strata may be occupations,
or social classes, ages, or income groups for example. Once selected, one of
the other two methods may be used within the strata.
Other methods.
These include quota sampling, cluster sampling and multi-stage sampling.
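As a sketch, the three main methods described above might look like this for a hypothetical numbered population (the stratum names and sizes are invented; the fixed seed simply makes the draws repeatable):

```python
import random

population = list(range(1, 101))    # 100 numbered items (hypothetical)
random.seed(0)                      # fixed seed so the draws repeat

# Simple random sampling ("lottery method"): draw 10 numbers blindly.
simple = random.sample(population, 10)

# Systematic ("constant skip") sampling: every 10th item from a
# random starting point, so something in the system is still random.
start = random.randrange(10)
systematic = population[start::10]

# Stratified sampling: proportional draws from each stratum.
strata = {"day_shift": population[:60], "night_shift": population[60:]}
stratified = {name: random.sample(group, len(group) // 10)
              for name, group in strata.items()}
```

Note how the stratified draw allocates 6 of the 10 sampled items to the 60-strong stratum and 4 to the 40-strong one, giving each stratum proportional representation as the text requires.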
Validation
A sample is of little use if it does not represent the whole
population. Clearly no sample can exactly reflect the true result had the
whole population been surveyed. Therefore, the sample result will probably
differ from the true situation. What is important is that we are aware of
the probable statistical errors which inevitably arise because the whole
population was not investigated. Provided that the population is relatively
large, the magnitude of the statistical error depends not on the size of the
population but on the size of the sample. The error can be calculated (dealt
with elsewhere in this Managers-net Web-site) or alternatively, the sample
size can be calculated prior to data collection if we decide on the size of the
error which we can tolerate. If the subsequent error is too large, then a
bigger sample size must be taken, i.e. a further set of observations to add to
the existing ones. At least, we can be aware of the statistical error to which
our results are subject due to sampling and use the data appropriately.
STATISTICAL PROCESS CONTROL
The fundamentals of Statistical Process Control (though that was not what it
was called at the time) and the associated tool of the Control Chart were
developed by Dr Walter A Shewhart in the mid-1920s. His reasoning and
approach were practical, sensible and positive. In order to be so, he
deliberately avoided overdoing mathematical detail. In later years,
significant mathematical attributes were assigned to Shewhart's thinking with
the result that this work became better known than the pioneering
application that Shewhart had worked up.
The crucial difference between Shewhart's work and the inappropriately-
perceived purpose of SPC that emerged, that typically involved
mathematical distortion and tampering, is that his developments were in
context, and with the purpose, of process improvement, as opposed to mere
process monitoring. I.e. they could be described as helping to get the process
into that “satisfactory state” which one might then be content to monitor.
Note, however, that a true adherent to Deming's principles would probably
never reach that situation, following instead the philosophy and aim of
continuous improvement.
Explanation and Illustration:
What do “in control” and “out of control” mean?
Suppose that we are recording, regularly over time, some measurements
from a process. The measurements might be lengths of steel rods after a
cutting operation, or the lengths of time to service some machine, or your
weight as measured on the bathroom scales each morning, or the percentage
of defective (or non-conforming) items in batches from a supplier, or
measurements of Intelligence Quotient, or times between sending out
invoices and receiving the payment etc., etc..
A series of line graphs or histograms can be drawn to represent the data as a
statistical distribution. It is a picture of the behaviour of the variation in the
measurement that is being recorded. If a process is deemed as “stable” then
the concept is that it is in statistical control. The point is that, if an outside
influence impacts upon the process, (e.g., a machine setting is altered or you
go on a diet etc.) then, in effect, the data are of course no longer all coming
from the same source. It therefore follows that no single distribution could
possibly serve to represent them. If the distribution changes unpredictably
over time, then the process is said to be out of control. As a scientist,
Shewhart knew that there is always variation in anything that can be
measured. The variation may be large, or it may be imperceptibly small, or it
may be between these two extremes; but it is always there.
What inspired Shewhart's development of the statistical control of processes
was his observation that the variability which he saw in manufacturing
processes often differed in behaviour from that which he saw in so-called
“natural” processes – by which he seems to have meant such phenomena as
molecular motions.
Wheeler and Chambers combine and summarise these two important aspects
as follows:
"While every process displays variation, some processes display
controlled variation, while others display uncontrolled variation."
In particular, Shewhart often found controlled (stable) variation in natural
processes and uncontrolled (unstable) variation in manufacturing processes.
The difference is clear. In the former case, we know what to expect in terms
of variability; in the latter we do not. We may predict the future, with some
chance of success, in the former case; we cannot do so in the latter.
Why are "in control" and "out of control" important?
Shewhart gave us a technical tool to help identify the two types of variation:
the control chart.
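A minimal sketch of such a chart follows. The data are invented, and sigma is taken here as the plain sample standard deviation, a simplification of usual control-chart practice, which estimates it from moving ranges:

```python
import statistics

def control_limits(samples):
    """Centre line +/- 3 sigma, for a simplified individuals chart.
    (Sigma here is the plain sample standard deviation, a
    simplifying assumption rather than standard chart practice.)"""
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def special_cause_signals(samples, lcl, ucl):
    """Points beyond the limits signal special-cause variation."""
    return [x for x in samples if x < lcl or x > ucl]

# Lengths (mm) of steel rods after a cutting operation (hypothetical):
history = [100.1, 99.8, 100.0, 100.2, 99.9, 100.1, 99.7, 100.0]
lcl, centre, ucl = control_limits(history)

new_points = [100.0, 99.9, 101.5]        # the last point looks unusual
signals = special_cause_signals(new_points, lcl, ucl)
```

Points inside the limits are treated as common-cause variation and left alone; only a point beyond the limits, like the 101.5 mm rod above, is a signal worth reacting to.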
What is important is the understanding of why correct identification of the
two types of variation is so vital. There are at least three prime reasons.
First, when there are irregular large deviations in output because of
unexplained special causes, it is impossible to evaluate the effects of
changes in design, training, purchasing policy etc. which might be made to
the system by management. The capability of a process is unknown, whilst
the process is out of statistical control.
Second, when special causes have been eliminated, so that only common
causes remain, improvement then has to depend upon management action.
For such variation is due to the way that the processes and systems have
been designed and built – and only management has authority and
responsibility to work on systems and processes. As Myron Tribus, Director
of the American Quality and Productivity Institute, has often said:
“The people work in a system.
The job of the manager is
o To work on the system
o To improve it, continuously,
With their help.”
Finally, something of great importance, but which is bound to be unknown to
managers who do not have this understanding of variation, is that by (in
effect) misinterpreting either type of cause as the other, and acting
accordingly, they not only fail to improve matters – they literally make
things worse.
These implications, and consequently the whole concept of the statistical
control of processes, had a profound and lasting impact on Dr Deming.
Many aspects of his management philosophy emanate from considerations
based on just these notions.
So why SPC?
The plain fact is that when a process is within statistical control, its output is
indiscernible from random variation: the kind of variation which one gets
from tossing coins, throwing dice, or shuffling cards. Whether or not the
process is in control, the numbers will go up, the numbers will go down;
indeed, occasionally we shall get a number that is the highest or the lowest
for some time. Of course we shall: how could it be otherwise? The question
is - do these individual occurrences mean anything important? When the
process is out of control, the answer will sometimes be yes. When the
process is in control, the answer is no.
The main response to the question "Why SPC?" is therefore this: It guides us
to the type of action that is appropriate for trying to improve the functioning
of a process. Should we react to individual results from the process (which is
only sensible, if such a result is signalled by a control chart as being due to a
special cause) or should we instead be going for change to the process itself,
guided by cumulated evidence from its output (which is only sensible if the
process is in control)?
Process improvement needs to be carried out in three chronological phases:
Phase 1: Stabilisation of the process by the identification and
elimination of special causes;
Phase 2: Active improvement efforts on the process itself, i.e.
tackling common causes;
Phase 3: Monitoring the process to ensure the improvements are
maintained, and incorporating additional improvements as the
opportunity arises.
Control charts have an important part to play in each of these three Phases.
Points beyond control limits (plus other agreed signals) indicate when
special causes should be searched for. The control chart is therefore the
prime diagnostic tool in Phase 1. All sorts of statistical tools can aid Phase 2,
including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds,
etc., and recalculated control limits will indicate what kind of success
(particularly in terms of reduced variation) has been achieved. The control
chart will also, as always, show when any further special causes should be
attended to. Advocates of the British/European approach will consider
themselves familiar with the use of the control chart in Phase 3. However, it
is strongly recommended that they consider the use of a Japanese Control
Chart (q.v.) in order to see how much more can be done even in this Phase
than is normal practice in this part of the world.
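As a concrete sketch of the Phase 1 diagnostic, the following Python fragment computes control limits for an individuals chart in the conventional way: mean plus and minus three estimated standard deviations, with the standard deviation estimated from the average moving range using the usual d2 = 1.128 divisor. The measurements are invented for illustration.

```python
# Sketch: control limits for an individuals (X) chart, using the
# conventional "mean +/- 3 sigma" rule with sigma estimated from the
# average moving range (divisor d2 = 1.128 for n = 2). Data are invented.
measurements = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.2]

mean = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
avg_moving_range = sum(moving_ranges) / len(moving_ranges)
sigma_estimate = avg_moving_range / 1.128

ucl = mean + 3 * sigma_estimate   # upper control limit
lcl = mean - 3 * sigma_estimate   # lower control limit

# A point outside [lcl, ucl] signals a special cause worth investigating.
special_causes = [x for x in measurements if x < lcl or x > ucl]
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}, signals: {special_causes}")
```

In this invented data set no point falls outside the limits, so the variation shown is consistent with a process in statistical control.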
STATISTICAL SAMPLING FOR DATA COLLECTION
When it is possible to collect all the data for a population, the results (for
example the parameters like average (mean) or dispersion of the data values)
will accurately represent the situation. However, because the sampling
frame from which the sample is taken usually will be large, it is impossible
to measure all the data, so a sample must be obtained. Unfortunately,
because we cannot measure all of the data the sample parameters when
calculated probably will not accurately represent the whole data field. This
gives rise to what are known as statistical, or sampling, errors.
Two important points about sampling are that the sample must be (a)
representative of the situation and (b) usually random, in order to avoid
the effects of bias. Random sampling is the most usual method of obtaining
a representative sample.
Any such decisions on sampling depend on what one wishes to find out.
Methods of sampling 1. - random sampling
As already mentioned above, when taking a sample something within the
sampling frame must be random in order to avoid the effects of bias. Either
the situation must be random or the sampling must be on a random basis.
One of the most common methods, though not the simplest, is random sampling
as used in lotteries. Random samples may be taken by several methods,
including thoroughly mixing up the items in the sampling field and then
picking out the required number of items at random (i.e. without deliberate
selection).
Another method is to number each item in the population of values and then
use randomly generated numbers to obtain the random sample. Many are
already numbered such as serial numbers on equipment, passports or
National Insurance numbers. Random numbers may be found in textbooks,
statistical tables or as computer programs.
The following example is not necessarily how it is done in practice but is
one method of sampling to illustrate the method in general terms.
Suppose an electricity supply organisation needs to assess the degree of
corrosion of its main power lines in various areas of the country in order to
find those areas which are prone to the worst corrosion and hence might
need more attention than other areas. It is an impossibly time-consuming
task to inspect every power line between every tower in every area and,
indeed, not necessary. Sampling can provide a sufficiently "accurate" or
reliable answer with a known degree of error.
Meanwhile, using a map of the grid system the researcher could divide the
territory into areas and the areas into smaller locations. Each power line
could be divided into smaller lengths (possibly "between each tower") and
each smaller length would be identified in some way (e.g. numbering or
coding).
In order to decide which of the thousands of lengths of cable are to be
examined, first of all the sample size (i.e. how many lengths to be inspected)
must be determined. It is the sample size that eventually determines the
degree of error in the result, when this is applied to the whole network
including those thousands of lengths which were not checked. Basically, the
larger the sample size the smaller is the statistical error. These statistical
errors are not to be confused with human error nor with measuring
equipment error.
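The relationship between sample size and statistical error can be made concrete with the standard error of the mean, which shrinks as 1/√n; the population standard deviation of 5 used here is an invented figure:

```python
import math

# Standard error of the mean falls as 1 / sqrt(n): quadrupling the
# sample size halves the statistical (sampling) error.
population_sd = 5.0  # assumed/invented figure for illustration

for n in [25, 100, 400]:
    standard_error = population_sd / math.sqrt(n)
    print(f"n = {n:4d}  standard error = {standard_error:.2f}")
```

So going from 25 to 400 inspected lengths cuts the sampling error to a quarter, which is why the sample size calculation drives the accuracy of the final result.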
When the sample size has been calculated (as dealt with in a later Topic),
the next stage is to identify which of the lengths are to be inspected.
For this purpose it is necessary to generate random numbers either from
tables available in many books on statistical method or from computer
spreadsheets (e.g. Lotus 1-2-3, or EXCEL). When the required number of
random numbers has been obtained these are used to identify the
corresponding numbers on the grid map as the ones to be inspected.
Figure 1 illustrates a very simplified, abridged example of this method in
diagrammatic form showing only 30 lengths of cable. These are numbered 1
to 30.
A sample size of eight is used in this instance. Random numbers, taken from
a random number table, are 18, 28, 5, 13, 16, 9, 26 and 21. These cables are
marked with brackets on the "map" below and would be used as the sample:

Cable numbers
  1    2    3    4   [5]   6    7    8   [9]  10
 11   12  [13]  14   15  [16]  17  [18]  19   20
[21]  22   23   24   25  [26]  27  [28]  29   30
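The same kind of selection can be sketched with Python's standard library; the seed is fixed only to make the illustration repeatable, and in practice the numbers would come from a random number table or generator as described above:

```python
import random

# Draw a sample of 8 cable lengths from the 30 numbered lengths,
# without replacement. The seed makes the illustration repeatable;
# a real study would use a fresh random source (or a published table).
random.seed(42)
cable_numbers = range(1, 31)              # cables numbered 1 to 30
sample = sorted(random.sample(cable_numbers, 8))
print(sample)
```

`random.sample` guarantees no cable is picked twice, mirroring the "without replacement" draw from a numbered population.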
Methods of sampling 2. - systematic sampling
Systematic sampling (or constant skip method) is not random. Nevertheless,
it can be used where the situation is random.
For example, suppose the objective of a large organization is to obtain a
random selection from the 800 employees to sit as representatives on a
management productivity group. Each has an employee staff identification
number issued randomly by Personnel Department. To collect a sample of
20 names, management could take, for example every 40th name from the
staff register (i.e. 800 divided by 20 equals 40, hence every 40th name).
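That constant-skip selection can be sketched as follows (the employee names are placeholders, not real data):

```python
# Systematic (constant-skip) sampling: every 40th name from a staff
# register of 800 gives a sample of 20. Names here are placeholders.
staff_register = [f"employee_{i:03d}" for i in range(1, 801)]

skip = len(staff_register) // 20          # 800 / 20 = 40
sample = staff_register[skip - 1::skip]   # the 40th, 80th, ... names
print(len(sample))                        # 20 representatives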
Methods of sampling 3. - stratified sampling
This method is useful where the sampling frame has natural strata or
divisions. For example, to ensure that all occupations in a company are
equally represented the occupations could be the strata and within each
stratum, random or systematic samples could be taken. So, using the
example quoted for systematic sampling, if the employees consisted of 64
managers, 200 supervisors and 536 engineers (=800 employees) to obtain a
representative proportion from each employee grade (or stratum), the
proportions would be: for managers, 64 out of 800 = 8%; for supervisors,
200 out of 800 = 25%; and for engineers, 536 out of 800 = 67%.
Therefore, 8% of the random numbers would be from management names,
25% from supervisors' names and the rest, 67%, from the engineers' names.
This ensures a representative proportion from each group.
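The proportional allocation above can be computed directly; the sample size of 100 is an invented figure chosen to make the percentages obvious:

```python
# Proportional (stratified) allocation: each stratum contributes to the
# sample in proportion to its share of the 800 employees.
strata = {"managers": 64, "supervisors": 200, "engineers": 536}
total = sum(strata.values())      # 800
sample_size = 100                 # illustrative choice of sample size

allocation = {grade: round(sample_size * count / total)
              for grade, count in strata.items()}
print(allocation)                 # managers 8, supervisors 25, engineers 67
```

Within each stratum, the allocated number of names would then be drawn by random or systematic sampling as described earlier.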
Mystery shoppers
The "mystery shoppers" method of sampling is used in market research to
determine the quality of goods and services. With this method employees or
specially engaged agencies acting as "customers" make notes on the service
they receive in the environment being inspected.
This method can be used for testing the "ambience" of areas (e.g. "how
pleasant" is the area). For example, some rail services use the method for
inspecting their rolling stock and stations for litter, vandalism, malicious
damage, graffiti and the general appearance of the environment and "feel" of
their assets.
ANALYTICAL ESTIMATING
What is it ?
Analytical estimating is a structured work measurement technique. The
formal BSI definition (22022) states that it is a development of estimating, in
which the time required to perform each constituent part of a task at a
defined rate of working is estimated from knowledge and practical
experience of the work and/or from synthetic data.
An important feature of this technique, which helps to improve accuracy, is
that a whole job should be broken down into smaller individual tasks. This is
because any errors in the time estimates may be seen as random and will
therefore compensate for each other.
How can it be used ?
Analytical estimating would normally be used for assessing work over a
reasonably lengthy period of time, where it may be difficult and more
expensive to collect the information required using other measurement
techniques. Also, in some work environments the presence of an individual
carrying out work measurement in the work place could be unacceptable. In
these cases, analytical estimating may be an appropriate method to use,
assuming someone with experience of the work is available to apply their
experienced judgement. (This may be work measurement personnel who have
previous experience of this particular work.)
However, the work content of some jobs cannot be estimated in advance
because one is unclear about what is required until an assembly operation
has been tested or stripped down. For example, during the progress of a
repair, unforeseen and non-standard difficulties can arise. Removing a
wooden door
from its frame by unscrewing 8 or 12 screws could take five minutes if the
screws were recently inserted, or a great deal longer if the screws are rusted
and clogged with paint.
In summary, the technique is used most commonly in any work environment
where a lengthy time (and associated high cost) is needed to collect data.
Advantages & Disadvantages
Perhaps the most significant advantage of using analytical estimating is its
speed of application and low cost. Using trained and experienced personnel,
process and measurement data can be quickly assembled and applied.
However, the use of experienced judgement when determining the time
necessary to perform a task is the technique's most obvious source of
weakness when compared with a more precise technique such as time study.
This is why the technique would not normally be used when a more precise
and accurate alternative is feasible and economic, particularly for highly
repetitive, standardised operations. Many jobs, such as craft work
in the maintenance field, consist of a group of tasks which are periodically
repeated but the precise nature of each task varies each time in minor
respects (see research on Natural & Normal Variation for further
explanation). In this example, since it is impractical, in terms of time and
cost, to allocate one time study observer permanently to each craftsman, the
alternative is to use a time-study basis plus the experienced judgement of an
ex-craft work-study observer to allow for detailed task variations.
BUSINESS PLANNING
Business (Corporate) Planning is the process of deciding what tactical action
and direction to take, in all areas of business activity, in order to secure a
financial and market position commensurate with the strategic objectives of
the organisation. To put it another way, it is the comprehensive planning for
the whole of the business and involves defining the overall objectives for the
organisation, and all the actions that must be adopted in order that those
objectives are achieved.
Illustration:
If only we spent as much time doing our jobs, as we waste in these budget
meetings, we would be a lot better off. This planning stuff is all very well,
but has anyone ever worked out how much it costs? Anyway, all we can ever
do is write down what we think will happen, then wait until it hasn't
happened, and finally argue about why it didn't. Sometimes I wonder if it is
all worthwhile.
Statements like these occur because:
No one has taken the trouble to explain the purpose and benefits of
planning;
The planning methods are wrong;
Plans are imposed from above, rather than worked out and agreed
with the people who are going to have to carry them out;
So-called planning is often no more than totalling up the various
departments' forecasts, and calling them the company plan.
In general it can be assumed that FIVE important features of Corporate
Planning prevail; they are:
1. Objectives and objective setting;
2. Flexibility - the ability to be adaptable within the plan;
3. Growth - anticipating opportunities for new markets;
4. Synergy - joint efforts achieving more than the sum of the separate efforts;
5. Time span - the critical length of the plan; long-termism is
increasingly risk-managed in today's business environment.
Corporate planning is, like most business activities, only as good as the
people who do it. Its methods and approach do, however, stack the cards in
its favour. In nearly every business, competition and technical change have
increased, are increasing, and will continue to increase. It
cannot be ignored, so better to be part of a success story through effective
corporate planning than flounder with those competitors who have failed to
grasp the nettle.
It really is the case that “Failing to Plan is Planning to Fail”.
CORPORATE PLANNING
A planning technique that aims to integrate all the planning activities of an
organisation and relate them to the best overall objectives for the
organisation.
Explanation:
A large number of planning techniques have been extensively used in
business and commerce for a considerable time. Budgetary control (q.v.)
which involves a large amount of budgetary planning has been one of the
most wide ranging and successful, via its materials, labour, sales, overheads,
R&D, capital and cash budgets. A further development of this is the
technique of profit planning (q.v.), which considers a number of alternative
strategies on capital investment, expansion, diversification for example,
before setting a single preferred plan. Corporate planning represents a
further widening and, at the same time, a closer integration of earlier
techniques. As examples of the widening process, corporate planning would
normally include management development and training, environmental and
community plans in addition to operating plans. As an example of closer
integration, the technique would involve all managers and departments in
setting objectives and determining the means to achieve them, in relation to
the overall company plan.
Illustration:
The technique has found most favour with larger companies of mature
standing, i.e. those whose days of headlong growth are over, who are subject
to strong international competition and who wish to think out extremely
carefully their future investment projects and at the same time to harmonise
and integrate the policies, procedures and plans created in each country,
division and operating unit of the company.
Predetermined motion time system (PMTS)
Definition:
PMT Systems are methods of setting basic times for doing basic human
activities necessary for carrying out a job or task.
'Tables of time data at defined rates of working for classified human
movements and mental activities. Times for an operation or task are derived
using precise conventions. Predetermined motion time data have also been
developed for common combinations of basic human movements and mental
activities'.
Background
The principle of analyzing work into basic actions was first published
by F. Gilbreth in 1920, as his Therbligs. The first commercial and
internationally recognized system was devised in the 1930s to circumvent
the banning, by the government of the United States, of time study and the
stop-watch as the means of measuring work performed on US government
contracts. It was devised by Quick, Malcolm and Duncan under the title
Work-Factor and appeared in 1938. Other methods followed, the main one,
some ten years later, being Methods-Time Measurement (MTM). Both
systems share basic similarities but are based on different standards of time.
Outline description of PMTS
The concept of PMTS is to analyse a job into its fundamental human
activities, apply basic times for these from tables and synthesize them into a
basic time for the complete job. The basic elements include the following:
reach for an object or a location,
grasp an object, touching it or closing the fingers around it,
move an object a specified distance to a specified place,
regrasp an object in order to locate it in a particular way, usually
prior to:
release an object to relinquish control of it,
other elements for assembling to, or inserting an object into, its intended
location.
For each of these actions basic times are tabled. For example, in Work-
Factor the time unit is one thousandth of a minute (the Work-Factor Time
Unit) whereas in MTM the unit is one hundred-thousandth of an hour (time
measurement unit, tmu).
The times for basic actions are adjusted for other factors which take into
account such variables as:
distances moved, in inches or centimetres
difficulty in performing the actions, such as avoiding obstacles
during moves, closeness of fit during assembling, weight of the
object, all of which increase the times to carry out the basic actions.
The above basic motions cover most of the actions performed by humans
when carrying out work. Other basic activities include:
walking to a specified place
bending down and stooping
kneeling on one knee and kneeling on both knees
foot and leg motions
sitting down and standing.
Mental activities include times for: See, Inspect, Identify, Nerve Conduct,
React, Eye focus, Eye travel times, Memorize, Recall, Compute (calculate)
and others, mostly from Work-Factor.
Levels of detail in systems
In order to speed up measurement time the major systems all include
different levels of detail, such as:
1. Most detailed systems: MTM and Detailed Work-Factor;
2. Second-level systems: MTM-2 and Ready Work-Factor (abridged
versions), achieved usually by the four methods of combining,
statistically averaging, substituting and/or eliminating certain basic
motions;
3. Third-level systems: MTM-3 and Abbreviated Work-Factor (even
more abridged);
4. "Higher-level" systems, usually times for complete activities.
One example of simplifying in the second level system MTM-2 is the
combining of MTM elements reach, grasp and release to produce a new
MTM-2 element of "Get".
PMTS is often used to generate synthetic data (or standard data banks),
which are overall basic times for more complex tasks such as maintenance
or overhauling of equipment. This is achieved by synthesizing the hundreds
of small jobs measured using PMTS into a time for the complete project.
Basic times produced by PMTS need to have relaxation allowances and
other necessary allowances added to produce standard times.
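As a rough numerical sketch of this synthesis, the fragment below sums invented MTM element times in tmu, converts them to minutes, and adds an assumed 12% relaxation allowance to arrive at a standard time. Both the element times and the allowance percentage are illustrative assumptions, not figures from a real study:

```python
# Sketch: build a standard time from PMTS basic element times (tmu)
# plus a relaxation allowance. All figures are invented for illustration.
element_tmus = [15.6, 9.1, 5.6, 5.9, 9.1, 16.9]   # times from an MTM analysis
basic_tmu = sum(element_tmus)

TMU_MINUTES = 60 / 100_000        # one tmu = 1/100000 hour = 0.0006 min
basic_minutes = basic_tmu * TMU_MINUTES

relaxation_allowance = 0.12       # assumed 12% allowance
standard_minutes = basic_minutes * (1 + relaxation_allowance)
print(f"basic = {basic_minutes:.4f} min, standard = {standard_minutes:.4f} min")
```

A real data bank would accumulate thousands of such synthesized element times into standard times for whole maintenance or overhaul tasks.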
An example of part of a typical analysis in MTM-2 is given below: an
extract from an MTM analysis showing the first seven elements.

MTM Analysis
Job description: Assemble r.f. transformer to base-plate
Analyst: EJH    Date: 3 May

El.  LH description                LH code  tmu's  RH code  RH description
1    Move hand to washer           R14C     15.6   R14B     Move hand to transformer
2    Grasp first washer            G4B       9.1   G1A      Grasp transformer
3    Move hand clear of container  M2B       ---   ---      Hold in box
4    Palm washer                   G2        5.6   ---      Ditto
5    To second washer              R2C       5.9   ---      Ditto
6    Grasp washer                  G4B       9.1   ---      Ditto
7    Move washers to area          M10B     16.9   M14C     Transformer to plate
Notes on some of the codes, as examples: the codes in the LH and RH columns
refer to those in the MTM time tables. For example, R14C is translated as
"Reach 14 in. to an object jumbled with other objects in a group, so that
search and select occur" (a Class C reach). R14B is translated as "Reach
14 in. to a single object in a location which may vary slightly from cycle
to cycle." G2 is a grasp Case 2, which is a regrasp to move the washer into
the palm. G4B is a grasp Case 4B, which is for grasping an "object jumbled
with other objects so search and select occur. Objects within the range
0.25 x 0.25 x 0.125 in. to 1 x 1 x 1 in."
One tmu is one hundred-thousandth of an hour.
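Both units can be converted to seconds directly from their definitions (one Work-Factor unit = 1/1000 minute, one tmu = 1/100000 hour):

```python
# Convert PMTS time units to seconds.
# Work-Factor unit = 1/1000 minute; MTM tmu = 1/100000 hour.
WF_UNIT_SECONDS = 60 / 1000       # 0.06 s
TMU_SECONDS = 3600 / 100_000      # 0.036 s

# e.g. the 15.6 tmu "reach" element in the analysis above:
reach_seconds = 15.6 * TMU_SECONDS
print(f"15.6 tmu = {reach_seconds:.4f} s")
```

This makes clear how fine-grained both systems are: each basic motion is timed in small fractions of a second.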
Time study
What is it?
Time study is a tried and tested method of work measurement for setting
basic times and hence standard times for carrying out specified work. Its
roots go back to the period between the two World Wars.
The aim of time study is to establish a time for a qualified worker to perform
specified work under stated conditions and at a defined rate of working.
This is achieved by a qualified practitioner observing the work, recording
what is done and then timing (using a time measuring device) and
simultaneously rating (assessing) the pace of working.
The requirements for taking a time study are quite strict.
Conditions:
the practitioner (observer) must be fully qualified to carry out Time
Study,
the person performing the task must be fully trained and experienced
in the work,
the work must be clearly defined and the method of doing the work
must be effective,
the working conditions must be clearly defined.
There are two main essentials for establishing a basic time for specified
work, i.e. rating and timing.
Some terminology explained
Timing
The observer records the actual time taken to do the element or operation.
This is usually in centiminutes (0.01 min.) and is recorded using a stop-
watch or computerized study board.
Rating.
When someone is doing work his/her way of working will vary throughout
the working period and will be different from others doing the same work.
This is due to differing speeds of movement, effort, dexterity and
consistency. Thus, the time taken for one person to do the work may not be
the same as that for others and may or may not be 'reasonable' anyway. The
purpose of rating is to adjust the actual time to a standardized basic time that
is appropriate and at a defined level of performance. Rating is on a scale
with 100 as its standard rating.
Elements
A complete job usually will be too long and variable to time and rate in one
go, so it would be analysed into several smaller parts (elements) which,
separately, will each be timed and rated.
Basic time
This is the standardised time for carrying out an element of work at standard
rating.
Example: An observer times an element as 30 centiminutes (cm) and
because it is performed more slowly than the standard 100, he rates it as 95.
Thus the basic time is 95% of 30 or 28.5 basic cm. The formula is: (actual
time x rating)/100.
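The formula can be expressed as a small helper (the function name here is ours, not standard work-study terminology):

```python
# Basic time = observed time x (rating / 100), per the formula above.
def basic_time(observed_cm, rating):
    """Standardise an observed element time (centiminutes) to rating 100."""
    return observed_cm * rating / 100

print(basic_time(30, 95))    # the example above: 28.5 basic centiminutes
print(basic_time(40, 110))   # a faster-than-standard worker: 44.0
```

Note that a rating above 100 (a faster-than-standard pace) increases the basic time relative to the observed time, just as a rating below 100 reduces it.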
Allowances
Extra time is allowed for various conditions which obtain, the main ones
being relaxation allowance for:
a. recovery from the effort of carrying out specified work under
specified conditions (fatigue allowance)
b. attention to personal needs
c. adverse environmental conditions,