Dr. Toby S. James, University of East Anglia*
Overview
Both established and emerging democracies often struggle to run elections smoothly. Common problems with electoral management include low levels of electoral registration, miscounts, lost ballot papers and delays in the announcement of results. One cause, in some circumstances, is the poor performance of electoral officials. Officials managing the local implementation of elections, for example, might not have followed the guidance that their supervisors or international donors provided, or they might not have undertaken sufficient planning for an election. Equally, systems might not be in place for electoral officials to share ideas about best practice among themselves so that they can improve their efficiency and the quality of delivery.
Fortunately, there have been many developments in the tools that governments use to monitor and improve the performance of public sector workers. One such tool is performance benchmarking: defining standards of best practice and then measuring different units of an organisation against those standards. The aim is to spread best practice and motivate better performance from local officials. This article explains in more detail what performance benchmarking is and provides a case study of Britain, where it has been useful for improving the quality of electoral management. The final section makes recommendations for how the approach can be adapted for use in other countries and notes some of the dangers that need to be considered before doing so.
What is Performance Benchmarking?
Benchmarking is the measurement of performance across a range of similar units or teams within an organisation. The idea first emerged from the private sector in the 1950s as a means of increasing quality and efficiency in the production process. The rationale is that comparative measurement might allow best practices to be identified, shared and learnt. It is a tool to motivate workers to improve their performance, either by working harder or by following the best practice established elsewhere. It also provides a method of control within organisations.[1] It has been commonly used in government worldwide since the 1980s.
The UK Electoral
Commission’s model
The UK has a decentralised system for implementing elections. Although laws for elections are made in the national Parliament, the task of implementing them has always been done locally. In the nineteenth century, poor-law administrators played a key role in compiling the electoral register. In the early part of the twentieth century, the task of compiling the electoral register was given to employees of local government called electoral registration officers (hereafter ‘EROs’). Meanwhile, officials called returning officers (hereafter ‘ROs’) have been responsible for running the poll and the count. Often, these officials are the chief executives of local government, and the ERO and RO are often the same person. Either way, they have overall responsibility for managing a team who work within local government to deliver elections. Different arrangements are in place in Scotland, where Valuation Joint Boards, organisations that administer council tax, are responsible for the electoral register. In Northern Ireland, a single central organisation, the Electoral Office for Northern Ireland, runs elections – but it is not considered in this case study.
Performance benchmarking has been widely used across the UK, especially within local government, for over three decades. It was not publicly used in the organisation of elections, however, until an internationally innovative scheme was introduced in the mid-2000s. This followed concerns, mostly raised by politicians, that the quality of electoral management was better in some parts of Britain than in others. The Labour government therefore included a provision in the Electoral Administration Act 2006 giving the Electoral Commission the power to implement a new performance management scheme. The Commission was given responsibility for setting standards for electoral registration and the organisation of the poll, and it was given powers to publish whether or not ROs and EROs met those standards.
Following the passage of the 2006 Act, the Electoral Commission undertook a consultation with stakeholders to decide upon the performance standards. Ten standards for EROs were published in July 2008 and seven for ROs in March 2009. These standards described processes (rather than targets for key performance indicators) which the Electoral Commission, in consultation with stakeholders, thought reflected best practice. They are listed in Table 1. For each standard, performance indicators were designed to measure whether each local authority was ‘not currently meeting the standard’, ‘at the performance standard’ or ‘above the performance standard’. For example, the first ERO standard required EROs to use information sources to verify entries on the register. To ‘meet the standard’, EROs needed to identify and use records, such as the annual canvass, to verify and validate electors on the electoral register. To be ‘above the standard’, they needed to go further and ‘identify and contact potential electors who may have moved into, or within, the local authority area’, using additional sources such as ‘council tax records to identify residents of newly occupied properties’. To demonstrate that they had met the standard, EROs needed to maintain records of which data sources were checked and when. Undertaking neither of these exercises would mean that an ERO was ‘below the standard’.[2]
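In effect, each standard mapped the evidence an authority submitted onto one of three ratings. The following Python sketch illustrates that mapping for the ERO standard described above; the field names and decision logic are assumptions made for illustration, not the Electoral Commission's actual assessment schema.

```python
from dataclasses import dataclass, field

# Hypothetical self-assessment return for ERO standard 1; the field names
# are illustrative, not the Commission's actual schema.
@dataclass
class SelfAssessment:
    verified_against_records: bool    # e.g. annual canvass used to validate entries
    contacted_potential_movers: bool  # e.g. council tax records checked for new residents
    data_sources_log: list[str] = field(default_factory=list)  # which sources were checked, and when

def rate_standard(a: SelfAssessment) -> str:
    """Classify a return as below, at, or above the standard."""
    if not a.verified_against_records or not a.data_sources_log:
        return "below standard"   # exercise not undertaken, or no records kept to demonstrate it
    if a.contacted_potential_movers:
        return "above standard"   # went further and chased potential new electors
    return "at standard"

# An ERO who verifies entries but does not contact potential movers:
print(rate_standard(SelfAssessment(True, False, ["annual canvass 2009"])))  # -> at standard
```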
Local authority officials were asked to self-assess their performance and report it to the Electoral Commission. The Electoral Commission undertook a sample-based verification exercise to ensure that the self-assessment forms had been completed accurately. The results were made publicly available via an online web-tool.[3]
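A sample-based verification of this kind amounts to auditing a random subset of returns rather than all of them. The sketch below shows the idea; the number of authorities and the sample size are assumed figures for illustration, and the Commission's actual sampling procedure is not described in this case study.

```python
import random

# Assumed list of local authority returns (the count is illustrative only).
authorities = [f"authority_{i:03d}" for i in range(380)]

SAMPLE_SIZE = 40  # assumed audit sample size, for illustration only
audited = random.sample(authorities, SAMPLE_SIZE)

# Each sampled authority's self-assessment would then be checked against
# supporting evidence, such as data-source logs and project plans.
print(sorted(audited)[:5])
```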
The Electoral Commission subsequently changed the standards in 2013 to facilitate the introduction of individual electoral registration.[4] Different management systems were also in place for the Welsh Assembly and AV referendums in 2011 because the Commission has the power of direction in referendums.[5] This article, however, focuses on the impact of the original indicators in use between May 2009 and May 2011.
| Electoral Registration Officers | Returning Officers |
|---|---|
| 1. Using information sources to verify entries on the register of electors and identify potential new electors | 1. Skills and knowledge of the Returning Officer |
| 2. Maintaining the property database | 2. Planning processes in place for an election |
| 3. House-to-house enquiries | 3. Training |
| 4. Maintaining the integrity of registration and absent vote applications | 4. Maintaining the integrity of an election |
| 5. Supply and security of the register and absent voter lists | 5. Planning and delivering public awareness activity |
| 6. Public awareness strategy | 6. Accessibility of information to electors |
| 7. Participation | 7. Communication of information to candidates and agents |
| 8. Accessibility and communication of information | |
| 9. Planning for rolling registration and the annual canvass | |
| 10. Training | |

Table 1: Performance standards defined by the UK Electoral Commission, 2008 and 2009
Effects on the Quality of Electoral Management
What effect did the performance standards have? Did they have the desired effect of bringing about change? The author undertook an evaluation of the performance standards scheme. The methodology involved qualitative interviews with all 74 electoral officials who were required to meet the standards. The results were published in an academic article.[6] The remainder of this ACE case study summarises those findings and considers whether the UK system could be rolled out as good practice elsewhere.
Level of compliance
It is important to note that the Electoral Commission had no formal powers to fine or otherwise punish local authorities that did not comply with the performance standards. Given this, the level of adoption was high. In the first year of use, 19.8% of RO standards were rated ‘below standard’, but this dropped to 9.1% in the second year. Meanwhile, only 3.3% of ERO standards were not met by 2010.[7]
| Proportion of standards | RO 2009 | RO 2010 | ERO 2008 | ERO 2009 | ERO 2010 |
|---|---|---|---|---|---|
| Below standard | 19.8% | 9.1% | 4.6% | 8.5% | 3.3% |
| At standard | 60.1% | 62.9% | 61.7% | 73.3% | 68.6% |
| Above standard | 20.1% | 28.0% | 33.7% | 18.1% | 28.1% |
| Number of authorities below at least one standard | 130 | 60 | 297 | 183 | 60 |

Table 2: Trends in local authority performance results according to the Electoral Commission’s performance standards, 2008-2010
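The summary rows in Table 2 follow from a straightforward aggregation over per-authority, per-standard ratings. A minimal Python sketch of that aggregation, using invented input data rather than the Commission's actual results:

```python
from collections import Counter

# Invented per-authority ratings across three standards, for illustration.
results = {
    "authority_A": ["at standard", "above standard", "below standard"],
    "authority_B": ["at standard", "at standard", "at standard"],
}

# Share of all assessed standards at each level (the percentage rows).
all_ratings = [r for ratings in results.values() for r in ratings]
shares = {level: n / len(all_ratings) for level, n in Counter(all_ratings).items()}

# Authorities below at least one standard (the final row of Table 2).
below_any = sum(1 for ratings in results.values() if "below standard" in ratings)

print(shares)     # shares of all assessed standards at each level
print(below_any)  # number of authorities below at least one standard -> 1
```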
Explaining compliance
Why did local electoral officials comply with the standards if they had no legal or financial reason to do so? The reasons for meeting the standards varied. The standards made officials aware of new ways of working or gave them the confidence to introduce reforms that they had heard about elsewhere. Sometimes the standards prompted formal or informal reviews of ways of working – practices which had otherwise gone unquestioned for a long period of time. Often the standards were adopted because they were associated with professionalism – meeting them was simply ‘the right thing to do’. They also provided a template for organising elections in periods of change, such as authority mergers or the appointment of new members of staff.
However, the most common reason why the standards were adopted was that individuals or organisations felt that they would suffer reputational loss if the standards were not met. Middle managers commonly implemented standards because the reputation of the Chief Executive (who is often also the ERO and RO) was perceived to be at stake. Often it was the Returning Officer who took action to ensure that changes had been made. One junior official reported that she was ‘roasted’ by her RO (who was also the Chief Executive of the authority) because the authority did not meet the standards and this reflected ‘badly on her’. ROs frequently knew their peers at other authorities very well and were part of a close-knit network. Since the results were published via the online web-tool, they would check how they had fared against their comparators. Where individuals felt that their own reputation was not affected by the standards, they were less likely to act.
Explaining
non-compliance
Those officials who did not meet the standards explained their reasons for not doing so. The most important reason was that they questioned the legitimacy or efficacy of the scheme. Some thought that compliance would not actually improve elections in their area and that resources would be better spent elsewhere. Others stressed that they were already accountable to the law and the courts; the Commission’s scheme therefore held little motivation for them. There was also evidence that some officials deliberately marked themselves as not meeting the standards initially, even if they thought that they were doing enough to meet them, so that they could claim improvements in future years.
The positive effects of
compliance
Table 3: The effects of benchmarking

- Improved confidence in election administration within the council, amongst candidates and amongst the public.
- More frequent evaluations of electoral services.
- More consistent services.
- Increased contingency planning and risk management.
- Closer and more formal links with other stakeholders in the elections process.
- Increased individual and team morale amongst well-performing councils.
There are some strong reasons to think that the standards had very little effect on electoral management. Indeed, many officials stated that meeting the standards was mostly a ‘box-ticking’ exercise which did not affect the way that they ran their services. A common theme from the interviews was that meeting the standards required them to document existing procedures, but this did not change how they worked. Some authorities even copied and pasted plans from officials at other authorities, occasionally forgetting to change the name of the authority on the plan. Others, who were initially above the standards, reported that the standards encouraged them to drop their performance to being merely at the standard.
However, while many officials reported no substantial change, many others said that the scheme had a positive effect (see Table 3). Importantly, these effects were not always a consequence of what the standards contained but of the very presence of a set of standards. One key benefit was that having externally defined standards increased confidence in procedures amongst local politicians and other elite stakeholders. This is significant because other research shows that the public, knowing little about election administration themselves, take cues from politicians about the quality of election administration. The presence of performance standards can therefore be important for restoring waning confidence in the administration of elections.
Lessons for best practice
In conclusion, the performance benchmarking scheme used in Britain can improve electoral management and offers a useful template that other electoral management bodies worldwide might want to consider if they wish to improve the implementation of elections. Benchmarking provides an effective way of identifying best practices, measuring the extent to which local officials are using them, and incentivising their wider adoption. The Electoral Commission did not produce any official estimates of how expensive the scheme was to run. However, costs appear to have been low, since it took only a small team of officials to administer on a part-time basis.
Those seeking to adopt this system might wish to consider the
following recommendations:
- It is likely to be of greatest use within decentralised electoral management bodies where policy makers are concerned about variation in performance across the country.
- Standards are likely to be most useful during periods of rapid change or when staff lack experience.
- A well-functioning EMB website is needed. Performance data must be accessible, visible and updated on a regular basis.
- Standards should be regularly reviewed. Once compliance with ‘base-level’ standards has been achieved, there is scope for setting more challenging standards.
- To work well, the scheme needs publicity. Explaining the scheme to politicians and the media is important so that they can hold officials to account for non-compliance and motivate them to change.
- Identifying ‘best in class’ local electoral officials is important so that their experience and ideas can be passed up and across rather than just down. This will also increase the credibility of the standards amongst those who have to meet them.
- There is a risk that the scheme can undermine staff morale by ‘naming and shaming’ officials who are dedicated and working hard. ‘League tables’ can put teams in competition with each other, which might have negative side effects. The non-adoption of best practice might be due to a variety of factors, including insufficient resources. A supportive environment should be encouraged to help all officials achieve the performance standards in the first place.
Efforts to improve electoral management often focus on the legal framework. This performance management scheme, however, shows that the creative use of different managerial systems can have a substantial effect. It also points to the need for further research on the policies that can be used to improve the delivery of elections.
* Dr. Toby S. James is a Senior Lecturer at the University of East Anglia. He has published widely on electoral
administration and management, including his book Elite Statecraft and Election
Administration (Palgrave, 2012). His
research has been funded by many research councils. He has given invited international presentations to organisations such as the Korean Civic Institute on Voter Education and Harvard University, and has given evidence to parliamentary committees.
[1] James Arrowsmith, Keith Sisson and Paul
Marginson (2004). "What can ‘benchmarking’ offer the open method of
co-ordination?" Journal of European Public Policy 11(2): 311-328.
[2] Electoral Commission (2008). Performance
standards for Electoral Registration Officers in Britain. London, Electoral
Commission.
[3] Electoral Commission (2008). Performance
standards for Electoral Registration Officers in Britain. London, Electoral
Commission. Electoral Commission
(2009). Performance standards for
Returning Officers in Great Britain. London, Electoral Commission.
[4] Electoral Commission (2013). Performance
standards for Electoral Registration Officers. London, Electoral
Commission. Electoral Commission (2013).
Performance standards for Returning
Officers in Great Britain. London, Electoral Commission.
[5] Toby S. James (2013). ‘Centralising Electoral Management: Lessons from the UK’, paper presented at the pre-APSA Workshop on Electoral Integrity, Chicago, 28 August 2013.
[6] Toby S. James (2013). ‘Fixing U.K. Failures of Electoral Management’, Electoral Studies 32(4): 597-608.
[7] The increase in the proportion of ERO standards rated below the performance standard between 2008 and 2009 owed much to the re-assessment of many authorities by the Electoral Commission.