enables sound decision making.
Building a strategically sound scorecard must be a top-down process. The
first step is defining the business unit
for which the scorecard is designed and
covering it wall-to-wall: its financial, customer, people, and process components.
Senior executives, who are responsible for setting strategy, must be
interviewed and kept updated as the design
evolves to cover those components in detail. Their input must include the latest
mission statement as well as consensus
on the company’s key success factors:
✚ What financial targets must be met?
✚ What activities are required to satisfy the customer?
✚ What processes must be done well?
✚ What people resources must be developed?
With that knowledge in hand,
drafts of the theory of business flow
chart and a list of suitable metrics can
be compiled. Those documents should
then be refined to ensure accurate
cause-and-effect linkages between the
objectives (the whats) and activities
(the hows). Once that important check
is completed, performance targets for each strategic activity and its associated metric can be identified. Responsibility for achieving target performance of every metric must be assigned to the appropriate job position, and sub-scorecards containing those relevant metrics must be prescribed.
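The structure the paragraph above describes can be sketched as a small data model. All metric names, targets, and job positions below are hypothetical examples, not figures from the article:

```python
# A minimal sketch of a scorecard as a data structure; every metric,
# target, and owner here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str        # what is measured
    objective: str   # the "what" it supports
    target: float    # performance target
    owner: str       # job position responsible for hitting the target

scorecard = [
    Metric("on-time delivery %", "customer satisfaction", 98.0, "plant manager"),
    Metric("set-up time (min)", "reduce throughput time", 30.0, "line supervisor"),
    Metric("revenue growth %", "financial performance", 8.0, "general manager"),
]

def sub_scorecard(card, owner):
    """Prescribe a sub-scorecard: only the metrics assigned to one position."""
    return [m for m in card if m.owner == owner]

print([m.name for m in sub_scorecard(scorecard, "line supervisor")])
# prints ['set-up time (min)']
```

The point of the sketch is the last step in the paragraph: once every metric has an owner, each job position's sub-scorecard falls out as a simple filter.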
Throughout the design process the
company’s information technology
staff must be involved to ensure the
required data are readily available for
input into the scorecard.
Once ready for kick-off, all users and
information providers must be trained.
That effort must be ongoing and include
discussions of performance with managers
and front-line workers as well as training
on problem solving and continuous process improvement.
A well-designed performance measurement system features several attributes:
✚ Linkage of cause and effect must be accurate – If you meet or exceed a performance target without achieving the desired outcome, the theory of business is flawed or your targets are set too low. For example, regularly meeting your on-time delivery target should improve your customer satisfaction score. If not, your customers may not value your performance.
✚ All metrics must have realistic yet moderately challenging performance targets.
✚ Leading or predictor metrics
must be included – Leading metrics
are forward-looking and often report
acceptable performance before an
outcome measure does. For example, a
reduction in returns and allowances for
defective products may not immediately
impact your customer satisfaction score.
✚ Financial and non-financial metrics must be included – For workers on the plant floor, non-dollar-denominated metrics are often more meaningful.
✚ Metrics must be actionable –
Wherever possible, workers must be
able to affect the ratio(s) or score(s)
used to judge their performance.
✚ Appropriate scorecards must
provide key metrics for every level of
the org chart, from the corner office to the plant floor – Key success factors such as revenue growth are often driven by lower-level activity. For example, in Figure 1, growth is reached by reducing throughput time, an outcome that results from plant-floor initiatives.
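The first attribute above, accurate cause-and-effect linkage, can be checked mechanically: when a driver metric meets its target while the outcome it is supposed to cause does not improve, the theory of business (or the target level) deserves a second look. A sketch with hypothetical scores:

```python
def linkage_suspect(driver_actual, driver_target, outcome_actual, outcome_target):
    """Flag a possible flaw in the theory of business: the driver metric
    met its target, yet the linked outcome still missed its own target."""
    return driver_actual >= driver_target and outcome_actual < outcome_target

# Hypothetical scores: on-time delivery hit 98% against a 95% target,
# but customer satisfaction sits at 3.9 against a 4.5 target.
print(linkage_suspect(98.0, 95.0, 3.9, 4.5))  # True -> revisit the linkage
```

A flag like this does not say which explanation applies; it only tells managers that either the linkage is wrong or the driver's target is set too low.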
While the scorecard design process is top-down, the scores achieved on high-level metrics are typically driven bottom-up by front-line workers focusing on set-up time, yield, feed rates, and similar operational measures.
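That bottom-up dynamic can be illustrated numerically. The figures below are hypothetical minutes per batch, invented for the sketch; the throughput-time example echoes the Figure 1 linkage described earlier:

```python
# Hypothetical plant-floor figures (minutes per batch). Throughput time
# is the sum of its components, so front-line improvements roll up
# directly into the higher-level metric.
baseline = {"set-up": 45, "run": 120, "inspection": 20}
improved = {"set-up": 30, "run": 115, "inspection": 15}

def throughput(components):
    """Total throughput time for one batch."""
    return sum(components.values())

reduction = throughput(baseline) - throughput(improved)
print(reduction)  # prints 25: minutes saved per batch by front-line work
```

No one on the plant floor touches the revenue-growth metric directly; they shave minutes off set-up and inspection, and the high-level score moves as a consequence.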
Objectives must be matched with quantifiable metrics that will track progress.