Chapter 2 - Learning the Way Forward? The Role of Monitoring, Evaluation and Review

Derek Gill* and Susy Frankel§

2.1 Introduction

Various stakeholders provide feedback, in a variety of ways, on regulation[1] and how well the regulation[2] is working in New Zealand. Regulation is constantly subject to interpretation, and thus to a kind of partial review, by its stakeholders, including businesses and consumers, and courts and tribunals resolving disputes. Additionally, government and its executive, given their significant role in creating policy and regulation, may review regulation and occasionally evaluate its effectiveness. Sometimes the goal of the regulation, or the problem it seeks to address, is not clear, which can make it very difficult to assess whether the regulation has achieved its goal. The success or failure of regulation may become apparent only if there is some kind of crisis which, with hindsight, more effective regulation might have averted; the regulation intended to prevent leaky buildings is one example.[3] Yet there is no systematic and well-defined process for learning about regulation through monitoring, review and evaluation. There is, therefore, at best a patchwork of monitoring and review of regulation, informed from time to time by evaluation. This leads to the questions this paper addresses: whether and how there should be a better system to improve the effectiveness of regulation.

New Zealand, like other OECD countries since the mid-1980s, has been active in developing a regulatory management system. The prime focus of the central government system has been posing ‘big policy’ questions to assess and review the flow of new regulations, rather than reviewing how the stock of existing regulations is implemented, enforced, monitored and reviewed, and how the design is consequently adapted. The central question of this paper is:

In New Zealand central government (including the courts and tribunals), what monitoring, evaluation and review occurs to generate learning about whether the policy objectives of a regulation have been achieved?

To address this question, after introducing the key concepts, the paper will address the following for New Zealand:

1. What monitoring, review and evaluation of existing regulations does the formal system of regulatory management currently require?

2. What monitoring, review and evaluation of existing regulations actually occurs?

The paper will conclude with a discussion of two policy questions:

3. What can monitoring, evaluation and review contribute to learning and consequently regulatory effectiveness?

4. What is to be done about monitoring, evaluation and review of regulatory effectiveness?[4]

2.1.2 Overview – The Concepts

The work of the New Zealand Law Foundation Regulatory Reform Project has identified a number of ways in which there is a lack of monitoring, review and evaluation of regulation. One paper examined the central government regulatory management regime[5] and identified how the regime is focussed on ‘big policy’ rather than detailed policy design, implementation, enforcement, monitoring, evaluation and learning. An emerging theme from the project’s research has been that evaluation is rare or has never taken place to inform reviews of specific regulatory regimes. There have, for example, been many changes to regulation of the telecommunications and electricity industries. Some of those changes were made before any information about the effects of the previous regulation could have been formally evaluated.[6]

There have also been examples of reviews of regulation to deal with ‘crisis’ situations where the political imperative for immediate action precluded the commissioning of a formal evaluation. On-going monitoring will be required for the regulatory regime created to respond to the crisis. The regulatory changes made to deal with the many issues arising from homes that are not weathertight are such an example.[7]

Before this chapter explores the research questions, we define some key concepts. The title has four key terms – monitoring, evaluation, review and learning. All of these ought to contribute to regulatory effectiveness.

‘Learning’ is used in the paper in the everyday sense of the act or process of gaining knowledge or skill. By contrast ‘evaluation’ is a more formal term with a precise meaning and a well-defined body of practitioners, supported by professional associations and journals. In this literature it is conventional to distinguish between ex ante impact evaluations and ex post evaluations. The latter take two main forms: a formative evaluation which provides information on improving a process, and a summative evaluation which provides information on the short term impact or long term effectiveness.[8]

The distinction in types of ex post evaluations is an important one. In formative evaluations the focus is on ‘are we doing things right’, while in summative evaluations the focus is on ‘are we doing the right things’. Summative evaluations can be used to assist in answering three questions: is the regulation still necessary; is the regulation efficient (does it achieve the objectives at a lower cost than the alternatives); and is the regulation effective in achieving the objectives?[9] The focus of the questions that the evaluation is to answer will frame the approach taken, what data is privileged over other information and what the evaluation might conclude.

Unlike evaluation, monitoring has no textbooks, journals or professional body for monitors. ‘Monitoring’ is a practice but not a profession. The plain English definition of monitoring is “to be aware of the state of a system and to observe for any changes that occur”, which implies “surveillance that is arms-length from that responsible body.”[10] Monitoring collects data on what is happening that can be used to inform any review or evaluation that is undertaken. Monitoring can indicate what has happened, and sometimes give insights into ‘how’ it happened, while evaluation provides information on ‘why’.

In this paper we use the term ‘review’ to describe a deliberative examination with a view to taking action. Reviews can occur at two levels. They can occur at the level of an individual case or transaction as a means of providing an assessment of procedure and fairness of process. Reviews can also be focussed on the overall regime and its effectiveness, drawing upon evaluations where these are available.

Effectiveness has two dimensions related to the types of evaluation discussed above – have things been done right and have the right things been done. By regulatory effectiveness we mean two things. First, have regulations been implemented and administered properly?[11] Second, how well does the regulation contribute to achieving impacts, such as altering the behaviour of citizens and businesses, which in turn influence the goals of the regulation, both intended and unintended?

2.2 Part One ─ What monitoring, review and evaluation of existing regulations does the formal system of regulatory management currently require?

“The current system for making [regulatory] choices is broken. It is largely based on faith, rather than evidence. The efficacy of many regulations is never assessed. Many others are only evaluated before they are implemented – the point we know the least about them” [12]

Regulatory intervention, as Michael Greenstone[13] observed above, “is largely based on faith rather than evidence.”

There are a number of institutions and an increasingly wide range of ex ante mechanisms that highlight potential problems that could reduce the intended effectiveness of regulation before the regulations are put into effect.

We turn to discuss the institutions involved in developing central government[14] regulations in New Zealand. In Box 1 at the end of this part we discuss what constitutes evidence and how that evidence could be used and interpreted.

2.2.1 Institutions and Processes for Review of New Regulations

Within the New Zealand executive, the policy phase runs through the well-established system of Cabinet and its committees. These are supported by officials’ committees, interdepartmental processes, and the Treasury Regulatory Quality Team. In the legislative design (little policy) phase there are the relevant government department(s) involved in the particular policy area, the Legislation Advisory Committee (LAC)[15] and the Parliamentary Counsel Office, which because of its expertise and the importance attached to consistency in drafting has a monopoly on legislative drafting (with the exception of taxation law).[16] Parliament has a very active Select Committee system, as well as Parliamentary debates and the Regulations Review Committee (RRC).[17]

Second, while there is no generic process for the policy and legislative design phases, the policy development process typically includes consultation with stakeholders, preparation of a Regulatory Impact Analysis (RIA), inter-department challenge (departments embody particular perspectives), disclosure through the Official Information Act 1982 (OIA), lobbying, and involvement of traditional media, social media, academics, think tanks, Cabinet and its committees, and Parliament through its Select Committees. In addition to these arrangements, there are a number of ex ante procedures. In 2012 the major regulatory departments[18] were required, for the first time, to prepare Regulatory Review Plans, and all departments are required to prepare Annual Portfolio Regulatory Plans which foreshadow proposed new regulations. A specific regulatory proposal is meant to include a Preliminary Impact and Risk Assessment to determine whether a RIA is required. If required, the RIA framework is applied, culminating in the preparation of a Regulatory Impact Statement (RIS).[19] Cabinet papers for the Cabinet Legislation Committee typically include a section on compliance with LAC Guidelines, the principles of the Treaty of Waitangi, the New Zealand Bill of Rights Act 1990 and the Human Rights Act 1993, the principles and guidelines set out in the Privacy Act 1993, and the relevant international standards and obligations.[20] Draft legislation is also subject to Bill of Rights vetting under Section 5 of the Bill of Rights Act.[21]

What is less understood or documented is what happens after the ‘big’ policy questions have been decided, the ‘little’ policy design completed and Parliament has completed its deliberation of primary legislation (or Cabinet in the case of secondary legislation). This is important because with regulations there is generally not a clear distinction between policy design and administrative implementation and enforcement. Regulatory outcomes are inevitably co-produced in the black box involving the interaction between the regulators and the regulatees. As a result policy is made as it is rolled out and implemented.[22]

2.2.2 Institutions and Processes Once the Regulations Have Been Put into Effect

In summary, the analysis suggests that in New Zealand there is a plethora of ad hoc review mechanisms at varying levels but fewer formal requirements for monitoring and reporting, evaluation or review. In short, the formal system of ‘policing’ regulatory regimes is built around ‘alarms going off’ rather than ‘regular routine patrols’.

Reviews can occur at a number of levels: including reviews of individual decisions or ‘cases’, a review of the individual regulatory ‘intervention’, or the review of a ‘regime’. A ‘regime’ refers to a cluster of related interventions in a particular domain such as securities markets or occupational regulation.

When regulations are being administered and enforced, a number of formal case review mechanisms exist that are focussed on addressing individual grievances and disputes. Public agencies such as the Inland Revenue Department (IRD) and the Ministry of Social Development typically have internal quality assurance and review procedures as well as an internal complaints handling process. In addition there are specialist tribunals[23] (such as the Taxation Review Authority and the Social Security Appeal Authority) for handling serious complaints and adjudicating disputes. At the peak of the pyramid sit the courts. The courts’ primary function is to adjudicate disputes between parties. In the process of such adjudication, laws and regulations of all kinds will be interpreted and that interpretation will contribute to a kind of learning about the law. In some areas, bodies of knowledge about the outcomes of regulation will develop. This is not so in those areas where test cases will be few and far between.[24] As part of the courts’ functions, they are sometimes described as ‘supervising’ or ‘policing’ the administrative system[25] because they can hear appeals on administrative decisions or on the substantive effects of decisions by bureaucrats and ministers.[26] Appeals about the process of the decision and the substantive effect of the decision need to be distinguished. In addition to the courts, since 1962, individual citizens and businesses have had the right to complain about administrative decision making to the Ombudsman[27] and, since 1985, to Parliament’s Regulations Review Committee.[28]

In addition, there are a range of specialist investigating institutions which can review cases from a particular perspective. These specialist reviewers include the Serious Fraud Office, Coroners, the State Services Commission (predominantly on integrity and personnel issues), the Health and Disability Commissioner, and the Human Rights Commission. Augmenting these formal institutions are informal mechanisms such as traditional media, social media, leaks and whistle-blowing.

All of the mechanisms discussed above primarily relate to addressing grievances and disputes rather than improving primary decision making. In other words, while the distinction is not hard and fast, the focus is on resolution of particular disputes (operator error) rather than looking at the system as a whole (system error).[29] Turning to the formal reviewers at the system level, again there is a plethora of institutions. In addition to the department of state or public agency which administers the regulation, there is a wide range of potential public oversight bodies depending upon the precise domain involved. In law reform, the foremost of these is the Law Commission, but in particular areas reviews could be undertaken by the Human Rights Commission,[30] the Parliamentary Commissioner for the Environment, the Families Commission, the Commissioner for Children, the Health and Disability Commissioner, the Privacy Commissioner, the Ombudsman, the State Services Commission and the Office of the Controller and Auditor-General. In addition to the permanent organisations there is the possibility of one-off reviewers including Ministerial Reviews, Task Forces, Commissions of Inquiry, Royal Commissions, and Select Committee Inquiries. These reviews, to varying degrees, draw on the views of academics, policy staff employed in think tanks and consultancy firms as well as public employees.

2.2.3 Public Agency Monitoring and Reporting

The previous discussion focused on review. This section discusses monitoring and reporting, as public agencies that undertake regulatory functions also face reporting obligations under the Public Finance Act 1989 and the Crown Entities Act 2004 for the use of appropriations to fund administration and enforcement activities. These performance reporting obligations include ex ante forecasts of the quantity, quality, timeliness and cost of regulatory output classes and the requirement to give an ex post account against the forecast.[31] Meeting these reporting obligations requires agencies undertaking regulatory functions to monitor and measure activities. However, the reporting only covers the direct administration and enforcement activities undertaken by the agency and this generally makes up a small fraction of the total cost of regulation. The total cost of regulation also includes the administration and compliance costs faced by citizens and businesses, as well as the wider societal economic costs of distortions introduced and benefits generated by the regulation, including deadweight costs and benefits foregone.[32] Neither the administration and compliance costs facing businesses nor the wider costs and benefits facing society are covered by the performance reporting requirements.

2.2.4 Public Agency Evaluation and Review

New Zealand prides itself on the clarity of responsibility in the formal public management system. Yet the Cabinet Manual is virtually silent on the Minister’s role in regulation, beyond a short reference to the introduction of new primary legislation.[33] In principle the doctrine of Ministerial responsibility applies – Ministers are responsible for regulatory policy, and are answerable to the House on policy and on administrative and operational matters, while public agencies are responsible for the actual administration of regulation.[34] In practice the mechanisms to make Ministers answerable for the management of the stock of regulations are highly attenuated. As is discussed below, there is no standard process to make the executive answerable for the administration and maintenance of legislation as a matter of course.

The Parliamentary Counsel Office, which has the key role in drafting all non-tax legislation, regards the issue of whether a new regulation should provide for a review as a policy matter on which they are guided by drafting instructions from the department. The policy questions that the department would need to address include whether to include a review, by whom should the review be conducted, about what (specific sections or whole legislation), when (how long after enactment), how (consultation requirements), to whom (the Minister, tabled in the House) and at what frequency. As the answers to these policy questions will vary by type of policy instrument,[35] the Drafting Manual of the Parliamentary Counsel Office is largely silent on the subject.

The Treasury’s (2009) Regulatory Impact Handbook provides guidance to staff developing new regulatory proposals. The handbook[36] contains a half page of aspirational material on the need for monitoring, feedback and review. There is no formal expectation that the introduction of regulations will be accompanied by programmed review, nor requirements to undertake formative or summative evaluations, nor monitoring and reporting requirements beyond that required for compliance with the Public Finance Act as discussed above.[37] There is no discussion in the handbook of how and when to develop evaluation plans,[38] monitoring and measurement plans, embedded review clauses, sunset clauses or even links to examples of review provisions such as those in Annex A.[39]

As discussed above, agencies are required to give an account for the resources that were appropriated for the development, administration and enforcement of regulation. By convention, every statute specifies one or more administering departments. There is, however, no statute which specifies what the role of the administering department is for primary or any secondary or tertiary legislation and the current State Sector Act 1988 makes no mention of this role.[40] Until the regulatory stewardship expectations were published in April 2013,[41] there was no administrative guidance on the duties and responsibilities of an administering department. The implication drawn in the absence of statute or guidance was that a Department’s role is limited to the responsibility of providing policy advice to the relevant Minister on whether and how the regulation needs amending and answering queries from the public, Parliament and Ministers about that piece of legislation.

The Government’s ‘Less Regulation Better Regulation’ programme initially addressed the gap in departmental responsibilities for the stock of regulation by requiring Departments to undertake regulatory scans. The focus of the scans was on the stock of primary, secondary and tertiary legislation at the level of regime (i.e. packages of related regulations). The departmental review through the Performance Improvement Framework includes one lead question (No 4 of 27) “How well does the agency’s regulatory work achieve its required impact?” along with two prompts on measurement and review.[42]

As this chapter was being finalised (April 2013), the Government announced proposals for potentially significant legislative changes. If the legislative proposals[43] are enacted, then departmental Chief Executives will have legislative responsibility for stewardship of the stock of regulation. Enhanced requirements are proposed for Regulatory Impact Statements, including more emphasis on implementation plans and on proposed compliance and enforcement levels. Enhanced disclosures to Parliament, proposed by amendments to the Legislation Act 2012, are discussed in the next section.

Box 1 – Evidence-Based Regulation?

The official guidance, the Treasury 2009 Regulatory Analysis Handbook, places considerable emphasis on encouraging “an evidence-based approach to policy” (p10). This raises questions about what constitutes evidence and, once evidence is found, how that evidence should be used and interpreted.

Different disciplines have different approaches and make different value judgements about the weight to be attributed to different types of evidence. In many areas of regulation much weight continues to be attributed to economic evidence. Less weight may be given to other types of evidence. Graeme Austin’s paper in this project, “The Regulation of Consumer Credit Products: The Effects of Baseline Assumptions”, looked at the regulation of consumer debt to show that attention to evidence of psychological harm could lead to a different model for regulating consumer lending. Susy Frankel and Megan Richardson’s paper in this project, “Trans-Tasman Intellectual Property Coordination”, showed that the process of international negotiation that is used can affect whether we have laws that are evidence-based in their own right or whether they are laws that are trade-offs for other gains.

In sum, different types of evidence provide the policy underpinnings for different regulations, and the use of evidence from evaluations and reviews will vary. The use of evidence can appear to be ad hoc, both because the amount and type of evidence will vary and because the use made of it will vary. A lack of data or evidence is initially more problematic than how it is used. But which evidence is important, and when it is used, is critical to systems of evaluation and review.

Guidelines and principles of regulation tend to answer this question by giving greater weight to some types of evidence. The guideline that indicates the regulation should be economically effective is such an example. But that is not an explicit process of weighing and evaluating evidence. Rather, at best, guidelines and principles may signal what types of evidence should be used. Such signals do not lead to a systematic review, or indeed clear rules about when evidence is good evidence and how to prioritise or weight competing or even contradictory evidence.

The way in which adjudication functions brings something potentially unique to weighing evidence. Legal regimes have detailed and tested rules about how to determine what evidence prevails. This is not to suggest that legal rules for weighing evidence are necessarily appropriate in policymaking decisions. The criminal standard of proof, of beyond reasonable doubt, is quite obviously inappropriate when making regulations to govern future behaviour. The civil standard of proof, on the balance of probabilities, is arguably useful in evaluating evidence of what regulation has achieved and, on the balance of probabilities, what changes can be made or what aspects of the status quo should be kept to change or maintain future behaviour. This is what courts and tribunals do, and the rules are vast. Policymakers can be more flexible because they are not primarily resolving disputes, but the legal rules for weighing evidence might provide a framework for weighing evidence in policymaking decisions.

It is not only the way in which evidence is used that matters, but also who uses it and for what purpose. What institution, if any, has responsibility for monitoring and evaluation is also relevant.

2.2.5 Parliamentary Review

The House plays an active role in approving and amending primary legislation, and in April 2013 the Cabinet endorsed a set of legislative amendments that would enhance the information disclosed to the House in support of each Government bill. It is proposed to amend the Legislation Act 2012 to provide for basic disclosures including:

· a statement of the policy the legislation aims to achieve

· the quality assurance products that exist (including the RIS, independent assessment of the RIS, assessment of consistency with the New Zealand Bill of Rights Act 1990, and reviews or evaluations)

· the processes followed (e.g. consultations and actions taken as a result)

· significant or ‘unusual’ features warranting scrutiny.[44]

In contrast to new legislation, there is no standard operating procedure that requires the executive to be answerable to the legislature for the administration of legislation as a matter of course.[45] Select committee reviews of departments potentially make them answerable to Parliament for the function of administering legislation. Parliament is assisted by staff from the Parliamentary Service and the Office of the Controller and Auditor-General (OAG), but they have no access to independent evaluation expertise. Unlike the General Accounting Office in the United States, the OAG’s focus is limited to financial and non-financial performance measures in undertaking any regulatory functions, unless the scope of an effectiveness audit were to include the efficacy of a regulatory regime.

2.2.6 Summary

Thus, in summary, across the executive and legislative branches of government in New Zealand, the primary focus of the formal regulatory management system is on ex ante review of new legislation.

Figure 2.1 – The open loop of the formal system of regulation

Source: The authors

Figure 2.1 above summarises the formal regulatory development process. In summary it shows that, rather than being a closed system where feedback from monitoring, evaluation and review shape the design of new regulation, the formal system is an open loop with limited formal responsibility assigned to monitor and review existing regulatory interventions beyond regulatory scanning and performance reporting of administration and enforcement. Instead, at the level of individual statutes or secondary regulations, there is a range of ‘alarm type’ review mechanisms and relatively few ‘regular patrols’ or programmed reviews. The next section explores how the formal system for managing existing regulations works in practice. It examines the real system of regulatory management and what reviews of existing regulations actually occur.

2.3 Part Two - What monitoring, review and evaluation of existing regulations actually occurs?

“The lack of any systematic process for post-enactment scrutiny means that routine maintenance of some very major pieces of legislation rarely happens. We paint our houses and service our cars, but we don’t look after our laws in the same way.”[46]

The previous section discussed how the formal system is an open loop, as shown in Figure 2.1. This is consistent with the view that the impacts of regulation are knowable in advance, stable and predictable. A key insight from the Regulatory Reform project and from other research is to consider how often regulation is a process of learning and experimentation.[47] One of the defining characteristics of much regulation is that its purpose is to change the behaviour of regulatees, but over time that behaviour, and that of the regulator, will adapt to the regulation. As well as the adaptation of the behaviour of regulatees, the design of regulations needs to allow for changes in tastes, technologies and methods of production. Regulations based on particular technologies and tastes are less likely to be resilient than those that enable adaptation. In particular, changes in technology are virtually impossible to predict in advance.[48] Health and safety are often among the more heavily regulated areas. It is beyond the purview of this paper to fully discuss what is known as the precautionary principle and its effects in New Zealand law.[49] However, one point of relevance is that regulation created in part or in whole out of precaution arguably needs very thorough monitoring and evaluation. All regulation involves an element of guesswork about the future, but regulating out of precaution inherently involves even more guesswork, however well informed. The effects of the New Zealand Hazardous Substances and New Organisms Act on safety, businesses and Maori values, for example, need far better monitoring and evaluation than currently exist.[50]

Regulatory effectiveness involves all the phases in Figure 2.1 above, as regulation aims to change behaviour through a long chain of complex interactions (shown in Figure 2.4 below). Changing behaviour through regulation generally requires:

· Clarity of policy objectives

· Policy advice on selection and design of policy intervention

· Regulatory design which allows the instruments to be nested into the rest of the regulatory regime

· An adequate theory of change that clearly specifies the transmission mechanisms by which the policy interventions are intended to achieve the desired impacts and final goals

· High standard of implementation

· Adequate resource and appropriate differentiated enforcement

· Ensuring minimal non-compliance by citizens and businesses

· Feedback loops to enable adjustments to administrative practices, business practices, legal interpretation and policy learning.

2.3.1 The Policy Imperative

The first stage of the open loop is an imperative ‘to do something’. Regulatory problems often do not emerge at the start from an evidence-based process such as a formal summative evaluation, but because an issue gains political salience. There is competition to frame the problem and hence the potential solution. As a result, how the problem should be framed is often contested. John Kingdon’s work suggests that the interaction of three separate streams – problems, policies, and politics – results in issues attaining a high priority and moving onto the “decision agenda” for legislative enactment.[51] The high-priority issues get developed and enacted, while other issues receive less attention and some are completely side-lined.

2.3.2 Policy Design to Enforcement

Alongside the political contest is the more technocratic challenge of the design of the regulatory intervention. The second and third stages of the open loop involve the policy design and legal development phases. Following parliamentary deliberation is the implementation phase, and then administration and enforcement. The latter phase is something of a ‘black box’ as generally relatively little is known about how regulations are administered and enforced.[52] Enforcement is particularly complicated when, as with, for example, the Resource Management Act, there are multiple parties as decision-makers.[53] The final phases are monitoring and reporting, and review, which are the particular focus of this paper.

2.3.3 Types of Reviews

The previous section showed that, while there is a plethora of potential reviewers, there is relatively little regular institutionalised review of regulatory regimes in New Zealand. The Australian Productivity Commission, in its survey of Australian state and federal regulatory practices, suggests that there are three types of reviews of regulatory regimes:

· Stock management – RIA, red tape reduction, Regulatory Budgets, in/outs

· Ad hoc – stock-takes, principle-based, benchmarking, in-depth reviews

· Programmed reviews – sun-setting, embedded in statute, post implementation reviews.[54]

Relatively few of these reviews are required in New Zealand either by law or in the regulatory management guidance. On stock management devices, New Zealand has been an early adopter and leader on RIA but has declined to adopt regulatory budgets, in-out requirements[55] or red tape reduction targets. On ad hoc reviews, as was discussed in the previous section, the main focus is on in-depth reviews; stock-takes of burdens on business, principle-based reviews and benchmarking reviews are rare.

2.3.4 Programmed Reviews

On programmed reviews, there is no requirement for sun-setting; post implementation reviews are only required where a RIS was inadequate or not supplied; and embedded reviews are relatively rare. To illustrate the latter, Annex A shows the result of a key word search of all 1872 statutes in force in August 2012, initially using the combination ‘review’, ‘operation’ and ‘act’, and then other combinations of words. The first search, which focussed on review provisions that applied to the operation of the entire Act, found 22 statutes (1.2%) with ‘embedded’ reviews. Follow-up searches found a further 11 statutes (0.6%) with review provisions that applied to the operation of specific sections of the Act. Subsequent searches did not identify any further statutory review provisions. A similar search of regulations found no cases of embedded reviews. As a result we can safely conclude that the number of statutes or secondary regulations with embedded review provisions in New Zealand is very low.[56]
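This kind of key word exercise is straightforward to reproduce. The sketch below is a minimal illustration only: it assumes the statutes are available as local plain-text files, and the directory name and the search phrase are hypothetical stand-ins rather than the actual query run for Annex A.

```python
import glob
import re

# Hypothetical location of the statute corpus as plain-text files; the search
# for Annex A was run over the legislation database, not a local folder.
STATUTE_FILES = glob.glob("statutes_august_2012/*.txt")

# Illustrative pattern approximating the combination 'review', 'operation', 'act':
# provisions directing a review of the operation of the Act.
PATTERN = re.compile(r"review of the operation of this Act", re.IGNORECASE)

def has_embedded_review(path: str) -> bool:
    """Return True if the statute text contains a matching review provision."""
    with open(path, encoding="utf-8") as f:
        return bool(PATTERN.search(f.read()))

matches = [p for p in STATUTE_FILES if has_embedded_review(p)]
total = len(STATUTE_FILES)  # 1872 statutes were in force in August 2012
if total:
    share = 100 * len(matches) / total  # 22 matches out of 1872 gives roughly 1.2%
    print(f"{len(matches)} of {total} statutes ({share:.1f}%) have an embedded review provision")
```

Follow-up searches for review provisions attached to specific sections would simply swap in a different pattern; the point is only that the Annex A percentages are counts of matching statutes over the 1872 statutes then in force.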

This lack of programmed evaluation and review is not particular to regulation in New Zealand.[57] There is no tradition of embedded review and evaluation in New Zealand outside of regulation either.[58] A 2003 review of evaluation suggested the dearth of evaluation and review reflected the lack of a culture of inquiry, a lack of capability and poorly prioritised and co-ordinated evaluative effort.[59] As an illustration, the 2003 report discusses an informal scan of all Cabinet papers undertaken by the State Services Commission in June 1998 and repeated in 2000. The 1998 study concluded that only 7% of papers included proposals for a formal evaluation or review and only a handful of papers contained sufficient detail for an evaluation to be undertaken after the event. The follow-up review in 2000 found a slightly higher percentage but reached similar conclusions.[60] The first author of this paper would observe, based on his recent work within the public service, that if a similar assessment were repeated in 2012 the results, with one exception, would not be significantly altered. The exception, interestingly, is regulation. The introduction of the RIA process has brought some structure and discipline to the development of regulatory proposals. So while evaluations and programmed reviews are not required as part of a new regulatory proposal, at least evaluations could be undertaken if required in the cases where the RIS is of an adequate standard.

That is not to say that evaluations do not occur. All rules generate exceptions. Where evaluations have been undertaken, there are cases where they have proved effective. One example is the evaluation and review of the parallel importing regime.[61] The evaluation showed that parallel importation would result in reduced costs to the New Zealand consumer and no overall deleterious effects on New Zealand-based business. In fact, some benefits for New Zealand businesses were identified. The relevant laws, copyright and, at a later date, trade mark law,[62] were amended to allow parallel importing. Some years later the effects of the law changes were evaluated and they were found to have been largely successful.[63] Other examples can be found,[64] but these examples remain exceptions to the rule that, in general and on average, regulations are not subject to evaluation or systematic review in New Zealand.

2.3.5 Stock Management

The most basic step in managing a stock is to know what the stock contains. The introduction of more systematic regulatory planning processes revealed that a number of departments initially lacked basic information on the stock of regulations they had administrative responsibility for. Authoritative registers of legislation and regulation were lacking or incomplete and often did not include all tertiary regulations.

Treasury has recently published its first ever analysis of regulatory practice and capability for regulatory regimes. The analysis is the Treasury’s assessment of 56 regimes against seven ‘best practice regulatory principles’ and the results are shown in Box 2. The publication is of interest for the signal that it sends, as well as the content. The signal that it sends is that regulatory management responsibilities, including those for adequately managing the stock of existing regulations, are to be taken more seriously with the risk of ‘naming and shaming’.

Box 2 - Treasury Assessment of Regulatory Regimes – Implications for Regulatory Stock Management

Treasury has assessed 56 regulatory regimes against seven ‘best practice regulatory principles’. What is important for these purposes is the principle of durability and associated performance indicators supporting this:

1. Policies and procedures are in place to ensure that it is administered flexibly

2. Non-regulatory measures, including self-regulation, are used wherever possible

3. Feedback systems are in place to assess how the law is working in practice

4. Decisions are reassessed at regular intervals and when new information comes to hand

5. The regulatory regime is up to date with technological and market change, and evolving societal expectations.

Each of the six principle areas (flexibility and durability were collapsed into one) was assessed against a four-point scale – no significant concerns, possible areas of material concern, strong indication of material concern and not known. In the case of flexibility/durability, 34 regimes (63%) were assessed as ‘no significant concerns’, 22 (33%) as ‘possible areas of material concern’, 2 (3%) as ‘strong indication of material concern’ and there were no ‘not knowns’. There was no significant difference between the results for flexible/durable and all principles combined.

The content of Box 2 is interesting because it paints a surprisingly positive picture of the state of regulatory stock management in New Zealand. Treasury assessed each regime using the so-called ‘best practice regulatory principles’.[65] Using the flexibility/durability principles as a proxy for stock management, the recent Treasury assessment would suggest that the concerns about stock management are not significantly different to those relating to regulation as a whole. The assessment, however, needs to be interpreted with some caution as it is based on the judgement of Treasury Vote Analysts rather than a rigorous expert analysis of the results of the departmental regulatory plans and scans, and how the plans compare with actual departmental practice.

The initial regulatory scan undertaken in 2010 revealed that a number of departments were not aware of the actual regulations they had responsibility for and few departments had actively monitored the operation of the legislation. As discussed, departments are now required to undertake regulatory scanning of the stock of regulations and to prepare Annual Portfolio Regulatory Plans covering all proposed changes to primary, secondary and tertiary regulations. In addition, departments face performance reporting requirements under the Public Finance Act and may be required to undertake post implementation reviews of legislative changes if the RIS is inadequate.

Relatively little is known outside the individual agencies concerned about the extent of monitoring undertaken by regulatory agencies and other stakeholders. Discussions with staff with monitoring responsibilities suggested the extent of regulatory monitoring is very uneven – highly active in some domains, limited in others. There are also concerns that such monitoring processes may be interfering with doing important grass roots analysis.[66] Monitoring of regulations can also be undertaken by specialist providers of advice such as academics, think tanks, law firms and lobbyists as well as ‘peak’ bodies. Peak bodies representing regulatees include entities such as the Law Society and Business New Zealand. Interpreting the results from monitoring is complicated because of the difficulty of separating ‘noise’ from ‘signals’ when assessing responses from regulatees. Arguably the current default approach of many departments is to say ‘no noise, no problem’. Absence of noise may simply mean that the costs are dispersed rather than concentrated. The presence of noise does not ensure that there is a clear signal. Instead the noise could be driven by private interests, so noise may not be an indication of a valid public policy regulatory issue.

2.3.6 Review of Individual Cases

Another possible source of feedback and learning is individual cases. Both the courts and the Ombudsman have roles in ‘making the line calls’ by reviewing individual cases and in shaping the ‘rules of the game’ by influencing the interpretation and application of the law to particular cases. The courts are the primary formal institutions that give definitive interpretations of legislation. Judicial and quasi-judicial interpretation and the Ombudsman’s role can influence how those outside of any particular dispute should interpret and apply the law.

Putting to one side initially the ways in which courts might shape regulatory regimes, there is a question of the relative importance of courts in resolving individual disputes (‘making the line calls’). By way of example, Figure 2.2 below shows the pyramid of complaints and disputes using data from the tax system (including Child Support) to illustrate the flows in 2010/11. IRD has formal processes for addressing complaints and disputes on tax administration and Child Support grievances. If these processes do not reach a resolution, the grievance can be escalated.

Inland Revenue contact centres, processing centres and over-the-counter services deal with just under 6 million contacts with citizens and businesses per year. At the next level, IRD processed over 8,000 complaints and 400 disputes, and investigated around 6,750 child support cases where a parent was unhappy with their level of obligation or entitlement. Complaints and disputes are subject to layers of internal review which resolve almost all grievances. At the peak of the pyramid sit the Ombudsman, the Taxation Review Authority, and the courts. The Ombudsman received 58 complaints relating to IRD. A further 43 disputes were escalated to external review by the Taxation Review Authority – a specialist tribunal. Also, 12 disputes were heard by the High Court, along with 7 judicial reviews. In child support cases, the peak of the pyramid is the Family Court, a specialist court with procedures tailored to family issues, which heard 83 cases in 2010/11.[67]

Figure 2.2 – The real system – the case of IRD


Source: IRD

The IRD is not an exceptional case in terms of the number of cases. Of the millions of transactions that citizens and businesses have with the New Zealand state each year, fewer than 100 result in a High Court decision. A search of Brooker's database using the term “administrative – judicial review” found 89 cases heard by the High Court in 2008 and 73 in 2011.[68]
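The arithmetic behind this claim can be made explicit. The sketch below simply restates the 2010/11 figures quoted above as escalation rates; the grouping of categories is illustrative and is not taken from IRD's or the courts' own reporting.

```python
# Escalation rates through the IRD complaints pyramid, using the 2010/11
# figures quoted in the text. The groupings are illustrative only.
contacts = 6_000_000                # approximate annual contacts with IRD
internal = 8_000 + 400 + 6_750      # complaints, disputes and child support investigations

external = {
    "Ombudsman": 58,
    "Taxation Review Authority": 43,
    "High Court (disputes)": 12,
    "High Court (judicial review)": 7,
    "Family Court (child support)": 83,
}

print(f"Internal grievances as a share of contacts: {internal / contacts:.3%}")
for body, cases in external.items():
    print(f"{body}: {cases} cases ({cases / contacts:.5%} of contacts)")

total_external = sum(external.values())
print(f"All external reviews combined: {total_external} cases "
      f"({total_external / contacts:.5%} of contacts)")
```

On these figures, well under one contact in ten thousand is escalated beyond IRD's internal processes, which is the point the pyramid is intended to convey.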

2.3.7 Review of Individual Cases by the Ombudsman and the Courts

Another avenue in which there is a kind of review, at least of decisions of parts of the executive, is the work of the Ombudsman. In 2011, the Ombudsman received 7146 grievances, conducted 570 formal investigations, and issued 217 final opinions. Equally importantly, there were 4562 cases where no investigation was conducted by the Ombudsman because they were returned to the department for resolution (374), resolved by informal intervention (617), or dealt with by advice and assistance (3474).

Based on the simple weight of the numbers in the pyramid in Figure 2.2, it is difficult to sustain the notion that, as a general rule, the courts are supervising individual administrative decision making, directly through judicial review or generally through the appeal mechanisms applying to administrative decisions. However, the numbers do not tell the whole story, for several reasons. First, some claims, whether by way of appeal or review, are commenced and resolved before final judgment. Second, but perhaps primarily, the role of the courts is as the ultimate decider of disputes. Courts are, of course, also responsible for the development of the common law.

While it is not the main role of court reviews, they do indirectly influence the administration of the law by influencing ‘the rules of the game’.[69] There are two mechanisms. First, court judgments have a precedent effect and, second, judgments (and analysis of them) affect the advice lawyers give.

It is instructive to compare the courts and the Ombudsman and explore the implications of different types of review for learning and regulatory effectiveness. The courts have wide powers and limited jurisdiction while the Ombudsman has wide jurisdiction, limited formal power but significant influence. The courts have the power to interpret legislation and regulation in order to determine their application to individual cases. To the extent that bodies of precedent build up around particular statutes and regimes, the operation of regulation is informed by the courts. Any such review is necessarily a reactive response to the particular case before the court, and may therefore be correspondingly partial and episodic. A body of case law and resulting legal precedent, however, is less partial and not episodic.

An important aspect of the courts’ role is administrative review, but that role is circumscribed to exclude matters of policy and could not be described as wholesale responsibility for supervising the stock of regulation across the system. The discussion in Part 2 above revealed administrative law review by courts is just one of many influences upon administrative decision makers.[70] As Cane and McDonald observe “a recurring theme of empirical research…is that judicial review appears to have relatively little impact on bureaucratic behaviour.”[71]

By contrast, the Ombudsman process[72] is more inquisitorial than adversarial and the office lacks the ability to command an action, award damages or order the payment of compensation. The remedy recommended by an Ombudsman will frequently extend beyond the immediate redress of the complainant and will often include recommended changes to policies and standard operating procedures. In other words, the remedies offered by the Ombudsman often hold greater potential to rectify the underlying maladministration that gave rise to the complaint in the first place.

The power of the Ombudsman lies in their ability to persuade. A key part of the effectiveness of the office is the ability to get a party under investigation to ‘own the need to change’. The positive perception of the office and its underlying reputation for being ‘realistic, fair, reasonable and grounded’ places it in a position of great influence. The breadth of the Ombudsman’s jurisdiction enables the office to tackle systemic issues. The discovery process is broad, and the focus is not limited to a specific grievance. This allows the Ombudsman to influence organisational learning and processes more directly than the courts. The Ombudsman, therefore, has more extensive influence on how administrative decision-makers treat individual cases and develop standard operating procedures. In the New Zealand context it seems likely that the case law (including but not only administrative or judicial review) is greatest in particular areas such as competition law and overseas investment rulings. In these latter areas the potential returns from the ‘prizes’ at stake make it worthwhile for businesses to invest in the costs of litigation, possibly as a form of review of their own.

2.3.8 Conclusion and Summary

In summary, while there is a plethora of potential reviewers in New Zealand, there is relatively little regular institutionalised review of existing regulations at the level of individual provisions, interventions, regimes or the overall system. The focus of both case reviews and most system reviews is on whether regulations have been implemented and administered properly. Other than ad hoc reviews, and the role of researchers in analysing reviews and writing articles, there are few mechanisms to learn about the effectiveness of regulations in contributing to achieving near term impacts and final goals. Unlike comparable Westminster jurisdictions, there is no requirement for programmed reviews of existing regulations, nor has there been a concerted effort to increase the supply of evaluative evidence to enable more evidence-based policy decisions.

Figure 2.3 maps the real system of regulation. The messiness is deliberate, as policy development and implementation do not fit into a tidy policy cycle. Figure 2.3 eschews a conventional policy cycle in favour of a partly open loop because the notion of a completed circle is misleading – regulation is more like a game of ‘snakes and ladders’.

Figure 2.3 – The real system of regulation


Source: The authors

2.4 Part Three - What could monitoring, evaluation and review contribute to learning and consequently regulatory effectiveness?

Policy decisions will typically be influenced by much more than objective evidence or rational analysis…in short democracy determines what actually happens. But evidence and analysis can nevertheless play a useful, even decisive, role in informing policy-makers judgments. Without evidence policy makers must fall back on intuition, ideology or conventional wisdom. (Banks 2009 p3) [73]

The discussion of the formal system in Part One showed that the formal requirements for review of existing regulations are limited relative to other comparable jurisdictions and underdeveloped relative to the provisions applying to new regulations. Little formal effort is devoted in New Zealand to monitoring, evaluating and reviewing existing regulatory regimes. Instead there is a patchwork quilt of ‘alarm type’ review mechanisms and relatively few ‘regular patrols’ or programmed reviews. The discussion in the last section explored how the real system for managing existing regulations works in practice and suggested that there were negligible formal evaluations, uneven monitoring and few regular reviews beyond regulatory scanning and reporting. Instead the practice is generally to rely on ad hoc reviews to generate learning about the effectiveness of administration and hence, indirectly, whether the policy objectives of a regulation have been achieved. In this part of the paper we first discuss the reasons it is important to monitor, evaluate and review regulation, and second, review the barriers to learning about regulatory effectiveness, before turning to what might be done.

2.4.1 Why monitor, evaluate and review – the imperative to learn the way forward

There are a number of reasons why regulation once introduced needs to be monitored, evaluated and reviewed. The first reason relates to the unintended consequences of regulation as a result of adaptation of behaviour by regulators and regulatees.

As illustrated in Figure 2.4 and discussed above in Part Two, ex ante screening of new regulation occurs at the point when decision makers know least about effectiveness. There is considerable scope for regulatory drift between the original design, how regulators apply the rules and how citizens and businesses respond to the regulations that are finally rolled out. Over time the behaviour of regulators and regulatees will morph, so the costs, benefits and effectiveness of regulation will change. In short, unintended consequences are inevitable.

Figure 2.4 – Regulatory Effectiveness and Unintended Consequences


Source: The authors, drawn from Coglianese, G. Evaluating the Performance of Regulation and Regulatory Policy (2011) OECD

The second reason relates to uncertainty and complexity. Pervasive uncertainty about the impact of regulations means that complex dynamics are unleashed by regulation.[74] Monitoring, evaluation and review can potentially reduce that uncertainty by improving administration and by signalling the need to redesign the regulation.

The third imperative is the lack of transparency about the effects of regulation. Put simply, the problem is that the counterfactual never happens. Inefficient and ineffective regulations – so-called ‘silent killers’ – can persist for considerable time without review or modification. The ‘silent killers’ are difficult to detect due to long latency and can be physical or financial. In either case they impose costs and constrain the choices available to citizens and business for no significant public benefit.

The fourth imperative for more systematic review is the high ‘noise’, low ‘information’ content of existing ad hoc review mechanisms. In short, there is no a priori reason to assume that the ‘fire alarms’ going off will signal the most costly and least effective regulations. Table 2.1 provides four stylised examples of the effect of regulations depending upon the extent to which costs and benefits are concentrated or dispersed.[75]

Table 2.1 – Regulation – The differentiated concentration of benefits and costs


Table 2.1 explores whether a regulation, which has become ineffective with the passage of time, is likely to trigger an ad hoc review. It suggests that the likelihood of an ad hoc review is:

· highest where both the costs and benefits of regulations are concentrated (workplace safety)

· high where the costs are concentrated and the benefits are dispersed (leaky buildings)

· lower where both the costs and benefits of regulations are dispersed (general standards like weights and measures) and

· lowest when benefits are concentrated and costs are dispersed (occupational regulation).

The intuition behind this simple matrix is the same as that underpinning the tragedy of the commons – even when society would benefit, it is not in the interests of any one person or group to initiate the change. Where the benefits from regulations are concentrated, unless they also face concentrated costs, interest groups will mobilise to protect those benefits.
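For readers who prefer the logic of Table 2.1 in a more explicit form, the sketch below encodes the matrix as a simple lookup. The likelihood labels and stylised examples are taken from the list above; the encoding itself is purely illustrative.

```python
# The Table 2.1 matrix as a lookup from (costs, benefits) concentration to the
# likelihood of an ad hoc review, with the stylised examples from the text.
LIKELIHOOD_OF_AD_HOC_REVIEW = {
    ("concentrated", "concentrated"): ("highest", "workplace safety"),
    ("concentrated", "dispersed"): ("high", "leaky buildings"),
    ("dispersed", "dispersed"): ("lower", "weights and measures"),
    ("dispersed", "concentrated"): ("lowest", "occupational regulation"),
}

def review_prospects(costs: str, benefits: str) -> str:
    likelihood, example = LIKELIHOOD_OF_AD_HOC_REVIEW[(costs, benefits)]
    return (f"Costs {costs}, benefits {benefits}: likelihood of an ad hoc review "
            f"is {likelihood} (e.g. {example})")

for costs, benefits in LIKELIHOOD_OF_AD_HOC_REVIEW:
    print(review_prospects(costs, benefits))
```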

In reality, the world cannot be reduced to a simple two-by-two matrix, as the context surrounding the introduction of regulations is more complicated. For example, participants in the workshop that reviewed an earlier draft of this paper noted that it was much easier to generate political commitment to amending ineffective regulations if the details were not enacted in primary legislation. Amending statutes requires a slot in the legislative programme, which absorbs scarce House sitting time as well as consideration by Cabinet. By contrast, changing secondary legislation only requires a paper to Cabinet.[76]

The fifth imperative is to improve the effectiveness of policy design and the ex ante appraisal of new regulations. There is evidence that both costs and benefits of regulations tend to be overstated in ex ante RIA, because ex post it turns out that the policy is often implemented less intensively than predicted.[77]

2.4.2 The barriers to monitoring, evaluation or programmed review

The previous part discussed an apparent paradox: a paucity of monitoring, evaluation or programmed review in the face of unintended consequences and uncertainty about regulatory effectiveness. The key insight from the Regulatory Reform Project and from other research was to consider regulation as a process of learning and experimentation. This perspective requires a focus on whether behaviour will adapt to the regulation and scanning for unforeseen consequences. This section explores why, in the face of the uncertainty about the effects of regulation, there is a paucity of monitoring, evaluation or programmed review. The first part explores the proposition that regulation is expensive and complex to monitor, evaluate and review, so learning about effectiveness is difficult. In subsequent parts of this section we explore what might be done to augment the capability to undertake more systematic reviews and formal evaluations and to ensure that the information generated will actually be used to improve regulatory performance.

First, obtaining and monitoring data concerning the direct administration activities of regulators is easier than monitoring the administration and compliance costs of citizens and businesses. In some instances, what needs monitoring can be easily defined, such as the number of patents or companies registered, or the share of appliances sold that are energy efficient. However, monitoring processes and impacts is harder when quantitative data is less readily available from administrative or transactional datasets.

Second, the difficulty of conducting an evaluation will differ depending upon the regulatory intervention or regime being evaluated. Collecting data and evidence, through monitoring or review, can be costly and time-consuming. It often needs to be found or generated by policymakers or by stakeholders who may, or may not, have the relevant resources. Sources indicated that the full cost of a formative evaluation of a regulation would be of the order of $NZ 50,000 to $100,000, while a full-blown randomised double-blind summative evaluation would cost in excess of $10 million. The costs of the evaluation and review of regulation will need to be proportionate to the nature of the regulation.

Third, added to the cost is a shortage of resources: practitioners advise that it is easier to obtain resources for evaluation tagged as part of a transfer payment programme than for a regulatory programme. This is often compounded by a shortage of data.

The fourth difficulty is path dependence. It is difficult to undertake an evaluation in order to review a regulation after it has been in operation if the design did not consider how the effectiveness of administration and enforcement, and the effectiveness in achieving the desired impact, were to be assessed. It is far easier and less costly to design the evaluation and review plan as part of the development of detailed legal policy and implementation. For example, an evaluation plan has six key stages: purpose, information requirements, choice of method, collection of data, data analysis, and formulation of conclusions and recommendations. Collecting data for learning about effectiveness would be far easier if it were an integral part of the detailed design of the regulation.

The fifth difficulty is more technical. Regulations provoke differences in views, and evaluations involve constructing the counterfactual or control for regulatory interventions. Reasonable analysts will have different views on the efficacy of different regulations based on different judgments about the extent to which markets, governments and courts fail.[78] As a result, it is critical to have the institutional capacity in place to ensure that robust evaluations and reviews are undertaken in which the counterfactuals and controls are effective and credible. Lack of credibility can be a particular problem where the regulator conducts a self-evaluation; an independent evaluation is more likely to be considered fair and reliable.

The sixth difficulty is institutional. Learning is particularly complicated where there is no end-to-end integration because of the separation of policy from delivery, and the separation of powers between the executive and the judiciary. Regulatory interventions (other than disclosure) suppress information rather than generate it in the way that hierarchies and markets do. As a result, information in distributed legal systems is suppressed relative to markets and hierarchies.

Seventh, there are difficulties in interpreting evidence[79] concerning regulation and its effects. While there is a general preference for regulations to be developed using evidence-based policy, questions arise about how evidence should be interpreted and weighed up even when it is collected. Box 1 above discusses the vexed issue of the interpretation of evidence in detail.

Eighth, evaluating or reviewing regulation is made harder by uncertainty. If the purpose of the regulation, or the behaviour that the regulation targeted, were known and reasonably well defined, some of that uncertainty could be ameliorated. However, as discussed below, the purpose of the regulation and the problem it is aiming to solve may be ill-defined.

The ninth difficulty is complexity. Learning about the effect of regulations through feedback loops is difficult because regulatory effectiveness is complex:

· Effects are dependent on good regulatory design

· Effects depend on how regulators apply rules and use their discretion

· Effectiveness is co-produced by regulatees

· Regulatees are heterogeneous and require different enforcement strategies

· Monitoring is difficult because ‘signals’ often contain more ‘noise’ than ‘information’

· Information on regulations is not widely available: quantitative data is scarce, and qualitative data often covers only administrative inputs and process compliance, and is censored because it often relies on regulatees’ compliance

· Evaluation is difficult because of attribution problems, and effects are often separated in time and space.

The tenth and last barrier is the lack of demand for review and evaluation of regulations. Accentuating the technical difficulties listed above is an authorising environment that relies on ‘fire alarms’ rather than ‘routine patrols’. The formal systems in place in both the legislative and executive branches in New Zealand focus almost entirely on the ex ante review of new regulations. In summary, this section has discussed how regulation is often contentious, its effects are often uncertain, evaluating regulations is difficult, and there is a lack of effective political demand for review. While this is a counsel of caution when considering proposed actions, it is not a counsel of despair about what is to be done.

2.4.3 What might be done to augment the capability to undertake more systematic reviews?

There are three main policy arguments for why the stock of regulation deserves special measures when compared to other interventions. The first is the risks to liberty and human rights[80] because regulations generally impose constraints on citizens and businesses. These can be buttressed by concerns about the lack of transparency of regulation, the political bias in favour of regulation over other interventions, and the uncertainty around the effect of regulation.[81] In other words, managing the stock of regulation requires special measures because regulation has special features that set it apart from other forms of policy intervention.

The business case for more active management of the stock of existing regulations is essentially the same as the business case for management of the flow of new regulations through the Regulatory Impact Assessment (RIA) process. In a paper at an earlier stage of the New Zealand Law Foundation Regulatory Reform Project, Gill[82] provides a stylised Cost-Benefit Analysis (CBA) for the RIA process applying to new regulations in New Zealand. This analysis suggested that RIA does little harm and has the potential to pay its way by identifying more effective interventions. Occasionally the RIA process may result in significant harm being avoided.[83] The business case for stock management is an entirely analogous argument. The payoff from closer management of the regulatory stock is like an insurance policy with a low deductible: one pays a regular premium in order to receive a sporadic series of small claims, with the added potential for a very large payoff that averts some significant damage.[84]
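The insurance analogy can be sketched as a simple expected-value expression; the notation below is ours and purely illustrative, as the chapter does not quantify these terms:

\[
E[\text{net benefit}] \;=\; -c \;+\; n\,\bar{b} \;+\; p\,B
\]

where \(c\) is the recurring cost of the review regime (the premium), \(n\,\bar{b}\) is the expected value of the regular stream of small improvements identified (the small claims), and \(p\,B\) is a small probability \(p\) of averting a very large harm \(B\) (the large payoff). Even if \(n\,\bar{b}\) alone only roughly covers \(c\), a modest \(p\) attached to a sufficiently large \(B\) can make the expected net benefit clearly positive.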

The first and crucial step to improving the contribution of monitoring, evaluation and review to regulatory effectiveness is to establish a much clearer statement of the roles and responsibilities of departments administering legislation and regulation. This would be analogous to a department’s role in the maintenance and care of the physical assets on the agency’s balance sheet or the assets that it administers on behalf of the Crown. If the legislative proposals (discussed in Part One of this paper) are enacted, this role will need to be further articulated and monitored through the Performance Improvement Framework. Without this role clarity, other areas where standards and expectations are clearer will continue to gain resources and attention.

On its own, legislating for this role is not enough. Braithwaite[85] reminds us that, as with any regulation, there will be willing compliers and muddled non-compliers, as well as wilful non-compliers. Capacity building helps the ‘willing’ and the ‘muddled’ to work out how to manage the stock of regulation. Capacity building could include:

· Extended guidance on what is expected and, more importantly, more support in the form of layered knowledge with links to cutting-edge thinking and practical examples

· Building communities of practice around evaluation and monitoring regulations

· Access to knowledge packs on when to use and how to design sunset clauses, review clauses, monitoring plans, evaluation plans etc.

· Greater use of non-government expertise

· Greater use of effective public participation[86]

· Greater professional leadership by double-hatting senior people to be practice leaders as well as holding line positions.[87]

Building capability requires getting the ‘hard’ and ‘soft’ components working together as a system. The hard components (which are generally the easier ones) include staff skills and capability and the formal systems for collecting, storing and tracking regulations. The soft components (which are the harder ones to introduce) include the behavioural elements identified by the 2003 report,[88] such as a culture of inquiry, better prioritised and co-ordinated evaluative effort, and professional leadership of capability development.

2.4.4 What might be done to build the demand for improved management of the stock of regulation?

The initiatives listed above involve working on the supply side to strengthen capability. The history of evaluation suggests that producing a supply of programme evaluation studies does not necessarily result in those studies being used directly to redesign programmes.[89] The ideal might be for evaluations to be designed as part of the introduction of regulations and used to assess what changes in policy design and administrative practices are required. To warrant the costs of evaluation, the knowledge generated needs to be used, by contributing directly or indirectly to learning about regulatory effectiveness. This requires building the capability of key decision-makers to actually use information when it is available to them, reinforced by building the demand for improved information about the management of the stock of regulation.

Treasury has taken an important first step in enhancing demand with the publication of its analysis of regulatory practice and capability for regulatory regimes. In the language of Braithwaite’s enforcement pyramid,[90] the publication of these assessments will operate as a ‘persuasion’ and ‘negotiation’ device. This of course leaves open the question of what the high-powered sanctions are for the wilful non-complier who does not manage the stock.

The second step, therefore, is to introduce new compulsory requirements for all proposals for new regulations. Different jurisdictions offer different models. In Australia, the federal government and a number of states have legislated requirements for embedded reviews as part of any significant new regulatory proposal. In Canada there is a requirement for an evaluation plan for any significant new regulation.[91]

In New Zealand’s case, as agencies are encouraged to develop Regulatory Impact Statements as part of the policy development process, it would be premature to require plans for monitoring, evaluation and review to be developed as part of policy development or to be included with the initial Regulatory Impact Statement that accompanies the Cabinet policy paper. It is, however, reasonable to expect such planning to occur as part of the legal development and the design of the implementation phases. Accordingly, plans for monitoring, evaluation and review could be required as part of the paper for the Cabinet Legislation Committee and thereafter included along with the Regulatory Impact Statement[92] that normally accompanies the introduction of legislation. Any requirements would need to be proportional to the significance of the regulation.[93]

If Parliament also played a more active role, it is likely that the executive would focus more on how well the existing stock of regulation is being managed. Currently, Parliament receives little regular assurance that the legislative powers it has delegated, and the rights and obligations that laws create, are being administered correctly or are actually having their intended effect. Parliament could enhance its role by more systematically reviewing how the government agencies in the executive are exercising their role in the oversight of primary, secondary and tertiary legislation. Enabling this to happen would require revising the mandates of Offices of Parliament, such as the Office of the Auditor-General, to include more regular investigation and reporting to Parliament on these regulatory questions.[94]

This section has explored the relative paucity of monitoring, evaluation or programmed review. In part this is because regulations are costly and complex to evaluate and review, so learning about effectiveness is difficult. We then explored the scope to augment the capability to undertake more systematic reviews and formal evaluations. We also explored how to increase the demand for improved information about the management of the stock of regulation through requirements for monitoring, evaluation and review.[95] The suggestions leave something of a puzzle: how to ensure that the information generated will actually be used to improve regulatory performance?[96]

2.5 Part Four – Conclusion: What is to be done about monitoring, evaluation and review of regulatory effectiveness?

Regulation is often driven by a messy political imperative to do something, rather than being a considered policy solution to a clear policy problem based on robust data and evidence. In practice, regulations are developed in response to fuzzy problems and ambiguous objectives, and operate largely in the absence of formal feedback loops. The cost of change is rarely factored in because the political objective is often to make change; the policy problem is how to act.

Regulation, once introduced, needs to be managed and evaluated. The incentives to manage the stock of regulations are weak, however, and there is limited formal responsibility assigned for monitoring and reviewing individual regulatory instruments and regimes. There are limited incentives and few resources to commission evaluations and reviews, so the feedback mechanisms to ensure regulations remain fit for purpose and effective are weak. Even if there is an evaluation, the data is limited, the evidence may be ambiguous, and the focus may be on the effectiveness of process and enforcement rather than effectiveness in achieving the desired end state.

Currently there is no systematic end-to-end integration or learning because of the separation of policy from delivery, and the separation of powers between the executive and the judiciary. The capability to monitor, evaluate and review regulations is dispersed, and there are no functional leaders providing professional leadership for regulators, evaluators, monitors and so on.

The general case for more effective management is that regulation requires special measures because it has special features that set it apart from other forms of policy intervention. Active monitoring, evaluation and review are required because of uncertainty. As the introductory quotation from Michael Greenstone reminded us, “the efficacy of many regulations is never assessed”, and to the extent that they are assessed, it is before they are implemented, “the point we know the least about them”. The business case for more active management of the stock of regulation is that the costs and risk of harm are low but the potential payback from removing ‘hidden killers’ is significant. Uncertainty means that regulation is a process of learning and experimentation.[97] Yet paradoxically, despite the need to scan for unforeseen consequences, there is a paucity of monitoring, evaluation or programmed review.

The formal requirements for review of existing regulations in New Zealand are limited relative to other comparable jurisdictions and underdeveloped relative to the provisions applying to new regulations. Creating a system that encourages learning about regulatory effectiveness requires working on the supply side to build capacity, including by:

· Building communities of practice around evaluation and monitoring regulations

· Access to knowledge packs on when to use and how to design sunset clauses, review clauses, central repositories of regulatory evaluations etc.

· Extended guidance and more support in the form of layered knowledge with links to cutting-edge thinking and practical examples

· Greater professional leadership by ‘double-hatting’ senior people to be practice leaders as well as holding line positions.

On the demand side, a first and crucial step is to establish a much clearer statement of the expectations of the roles and responsibilities of a department administering primary, secondary and tertiary legislation. Central agencies, through the Performance Improvement Framework led by the SSC and the Treasury’s analysis of regulatory regimes, will encourage agencies’ conformance in order to improve their ratings, if not their performance. Pressure from central agencies needs to be augmented by more systematic post-legislative scrutiny by Parliament. Pressure from both the legislature and the executive will push potential non-compliers to manage the stock of regulation more actively.

This suggests the need for more meta-regulation: introducing compulsory requirements for all proposals for new regulations to include a discussion of review options, and for a monitoring and measurement plan, as well as an evaluation plan, to be prepared for public release along with the Regulatory Impact Statement. The extent of the plans required would need to be proportional to the significance of the regulation. The Australian Productivity Commission suggests that the Australian experience is that there is a lack of follow-up to ensure that the reviews proposed in the RIS are actually undertaken. It identifies the need for a mechanism to track proposed reviews and review findings as well.[98]

Annex A – Review Clauses Identified

In this Annex we discuss the results of a keyword search of all 1872 statutes in force in August 2012, initially using the words ‘review’, ‘operation’ and ‘act’, and then other combinations of words.[99] The first search found 1.2% (22) of statutes with ‘embedded’ review provisions that applied to the operation of the entire act. The follow-up search found another 0.6% (11) with review provisions that applied to the operation of specific sections or parts of an act. Subsequent searches did not identify any further statutory review provisions. A similar search of regulations found no cases of embedded reviews.

There was a high degree of consistency in the provisions. To illustrate this consistency, two representative examples of embedded review provisions applying to the whole act, as well as an example of a review provision applying to a specific part of an act, are included below.

Evidence Act 2006 s202 − Periodic review of operation of Act

The Minister must, as soon as practicable after 1 December 2011 or any later date set by the Minister by notice in the Gazette, and on at least 1 occasion during each 5-year period after that date, refer to the Law Commission for consideration the following matters: (a) the operation of the provisions of this Act since the date of the commencement of this section or the last consideration of those provisions by the Law Commission, as the case requires: (b) whether those provisions should be retained or repealed: (c) if they should be retained, whether any amendments to this Act are necessary or desirable.

Health and Disability Commissioner Act 1994 s18 − Review of operation of Act

(1) As soon as practicable after the expiry of the period of 3 years beginning on the commencement of this Part of this Act, and then at intervals of not more than 5 years, the Commissioner shall—

a) Review the operation of this Act since—

(i) The date of the commencement of this Part of this Act (in the case of the first review carried out under this paragraph); or

(ii) The date of the last review carried out under this paragraph (in the case of every subsequent review); and

b) Consider whether any amendments to this Act are necessary or desirable; and

c) Report the Commissioner's findings to the Minister.

Births, Deaths, Marriages, and Relationships Registration Act 1995

78J Review of operation of sections 74 to 75G, 78F, and 78H

(1) As soon as practicable after the expiry of the period of 5 years beginning on the commencement of the Births, Deaths, Marriages, and Relationships Registration Amendment Act 2008, the Minister must—

a) review the operation of sections 74 to 75G, 78F, and 78H since the date of that commencement; and

b) prepare a report of the review that includes recommendations on whether any amendments to those sections are necessary or desirable; and

c) present a copy of the report to the House of Representatives.

Other statutes which have embedded review provisions applying to the whole act:

Canterbury Earthquake Recovery Act 2011 s92 Annual reviews of Act

Climate Change Response Act 2002 s160 Reviews of operation of emissions trading scheme

Electricity Act 1992 s158

Financial Advisers Act 2008 s161

Health and Disability Commissioner Act 1994 s18 Review of operation of Act

Health Practitioners Competence Assurance Act 2003 s171 Review of operation of the Act

Insurance Companies (Ratings and Inspections) Act 1994 s24

Local Government Act 2002 s32 Review of operation of Act and Local Electoral Act 2001

Maori Television Service (Te Aratuku Whakaata Irirangi Maori) Act 2003 s56

Motor Vehicle Sales Act 2003 s163

NZ Sign Language Act 2006 s11 Review of operation of Act

Plumbers, Gasfitters, and Drainlayers Act 2006 s187 − Review of Act

Privacy Act 1993 s26 − Review of operation of Act

Prostitution Reform Act 2003 s2 − Review of operation of Act and related matters

Protected Disclosures Act 2000 s24 − Review of operation of Act

Search and Surveillance Act 2012 s357 − Review of operation of Act

Social Workers Registration Act 2003 s104 − Review of operation of Act

Terrorism Suppression Act 2002 s38 (2)

Wheat Industry Research Levies Act 1989 s30

Other statutes which have embedded review provisions applying to sections or a part of an act:

Animal Welfare Act 1999 s78 (1)

Immigration Advisers Licensing Act 2007 s14

Waste Minimisation Act 2008 s58 − Review of bylaws

Customs and Excise Act 1996 s282C

Employment Relations Act 2000 s69AAL − Review of operation of Part after 2 years

Financial Service Providers (Registration and Dispute Resolution) Act 2008 s45

Income Tax Act 2007 RP 17B (11)

Protection of Personal and Property Rights Act 1988 s108AAB

Reserve Bank of New Zealand Act 1989 s157ZZ

Tax Administration Act 1994 s81BA (4)

Television New Zealand Act 2003 s29 Review of Part X

Walking Access Act 2008 s80



* Principal Economist, NZIER & Senior Associate, Institute for Governance and Policy Studies, VUW School of Government. Primary author of this paper.

§ Professor of Law, VUW and Project Leader, New Zealand Law Foundation Regulatory Reform Project.

[1] In the New Zealand Law Foundation Project we do not use a singular definition of regulation, but broadly the project has adopted the definition that regulation is all laws and rules that affect behaviour, see generally Julia Black “Critical Reflections on Regulation” (2002) 27 Australian Journal of Legal Philosophy 1 at 25-26. Black defines regulation as “the sustained and focused attempt to alter the behaviour of others according to defined standards or purposes with the intention of producing a broadly identified outcome or outcomes, which may involve mechanisms of standard setting, information gathering and behaviour modification”, and see also the introduction to this volume.

[2] The term regulation will be used in this paper as an instrument using primary, secondary and tertiary rule making. This instrumental definition is consistent with that used by the Treasury in the RIS Handbook (2009). See D Levi-Faur “Regulation and Regulatory Governance” (Jerusalem Papers in Regulation and Governance Working Paper No 1, 2010) at 4-9 for discussion of alternative definitions of regulation.

[3] Mike Hensen and James Zuccollo “Weathertight Buildings and Performance-based Regulation: What Lessons can be Drawn from a Complicated and Evolving Situation?” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013) 449.

[4] The authors are grateful to Nigel Salmons for research assistance with this paper. The paper benefited greatly from discussion of an earlier version of this paper at a workshop held at Victoria University on 20 August 2012. We are also grateful for comments received on the earlier draft from members of the project team – Helen McQueen, Graeme Austin, Petra Butler and Deborah Ryder and from anonymous reviewers. The preparation of the paper was assisted by discussions with a wide range of Government officials and other participant in the project workshops. Responsibility for errors and omissions remain with the authors.

[5] Derek Gill “Regulatory Management in New Zealand: What, How and Why?” in Susy Frankel (ed) Learning From the Past, Adapting to the Future: Regulatory Reform In New Zealand (LexisNexis, Wellington, 2011) at 173.

[6] See Alec Mladenovic “Network Industries: Electricity and Telecommunications” in Susy Frankel (ed) Learning From the Past, Adapting to the Future: Regulatory Reform In New Zealand (LexisNexis, Wellington, 2011) at 339 for an account of the evolution of network industry regulation in New Zealand.

[7] Mike Hensen and James Zuccollo “Weathertight Buildings and Performance-based Regulation: What Lessons can be Drawn from a Complicated and Evolving Situation?”, above n 3

[8] Carol Weiss Evaluation: Methods for Studying Programs and Policies (Prentice Hall, 1998). For a useful practitioner guide see HM Treasury “Magenta Book Guidance for Evaluation” (2011) <http://www.hm-treasury.gov.uk>.

[9] Victorian Competition and Efficiency Commission Strengthening Foundations for the Next Decade: An Inquiry into Victoria’s Regulatory Framework: Final Report (2011) at 6.1.

[10] Oxford English Dictionary “Monitoring” <http://oxforddictionaries.com>.

[11] Without effective administration and enforcement, regulations lack legitimacy and buy in from citizens and businesses. Thus effective administration and enforcement is a pre-condition for effectiveness but it is not a sufficient condition. Even well administered regulation may not achieve the intended outcome and/or may have unforeseen negative consequences.

[12] Michael Greenstone “Towards a Culture of Persistent Regulatory Experimentation and Evaluation” in David Moss and John Cisternino (eds) New Perspectives on Regulation (The Tobin Project, Cambridge (MA), 2009) at 111.

[13] Michael Greenstone “Towards a Culture of Persistent Regulatory Experimentation and Evaluation”, above n 12 at 111.

[14] See Productivity Commission “Local Government Regulatory Performance” (Issues Paper, July 2012) at ch 3 for a discussion of regulation in Local Government. As Local Government is a relatively small share of general Government in New Zealand, in the rest of this paper references to government refer to central government.

[15] Legislation Advisory Committee “Legislation Advisory Committee Guidelines” (2006) Ministry of Justice <http://www.justice.govt.nz>.

[16] Susy Frankel and John Yeabsley “Features of the Uniqueness of New Zealand and their Role in Regulation” in this volume.

[17] See Mai Chen Public Law Toolbox (LexisNexis, Wellington, 2012) for a discussion of how these institutions and processes operate.

[18] The eight departments are departments of Building and Housing, Internal Affairs and Inland Revenue and the Ministries for Economic Development, Justice, Labour, Primary Industries and Transport.

[19] Treasury analysis (unpublished) suggests that in 2011 around 2/3 of government bills were accompanied by a RIS and only 50% of those without RISs had received a formal exemption. Castalia estimates that in 2011 36% fully met, 50% partially met and 14% did not meet Cabinet RIS requirements (detailed in Cabinet Office Circular “Regulatory Impact Analysis Requirements” (10 November 2009) CO 09/8).

[20] Cabinet Office Wellington “Cabguide” <cabguide.cabinetoffice.govt.nz>. See also Petra Butler “When is an Act of Parliament an Appropriate Form of Regulation? – Regulating the Internet as an Example” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013) 489.

[21] For a discussion of NZBORA vetting see Legislation Advisory Committee “Legislation Advisory Committee Guidelines”, above n 15, at ch 4. See also Petra Butler and Andrew Butler The New Zealand Bill of Rights Act 1990: a commentary (LexisNexis, Wellington, 2006) at 85-117; and Claudia Geiringer "The Dead Hand of the Bill of Rights? Is the New Zealand Bill of Rights Act 1990 a Substantive Legal Constraint on Parliament's Power to Legislate?" (2007) 11 Otago Law Review 389.

[22] Elizabeth Eppel, David Turner, and Amanda Wolf “Experimentation and Learning in Policy Implementation: Implications for Public Management” (Institute of Policy Studies Working Paper 11/04, 2011). On co-production see John Alford Engaging Public Sector Clients: From Service Delivery to Co-Production (Palgrave-Macmillan, New York, 2009).

[23] Law Commission Tribunals in New Zealand Reform (Issues Paper, 2008) at 135 identifies 47 Tribunals.

[24] See the companion paper on uniqueness: Susy Frankel and John Yeabsley, “Features of the Uniqueness of New Zealand and their Role in Regulation” in this series.

[25] The amendments to the Judicature Act in 1972 (s4) established the presumption that all administrative actions are subject to judicial review unless a statute explicitly provided otherwise.

[26] See David Goddard “Public Law and Regulation” in Administrative Law - The public law scene 2011 (NZLS, Wellington, 2011) at 113 for a discussion of the ‘policing’ role of the courts. For a discussion of supervision see Rayner Thwaites and Dean Knight “Review and Appeal of Regulatory Decisions: The Tension between Supervision and Performance” in Susy Frankel (ed) Learning from the Past Adapting to the Future: Regulatory Reform in New Zealand (LexisNexis, Wellington, 2011) at 215; and Rayner Thwaites and Dean R Knight “Administrative Law through a Regulatory Lens: Situating Judicial Adjudication within a Wider Accountability Framework” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013) 529.

[27] In addition to the Parliamentary Ombudsman, which is the focus of this paper, there are a number of specialist Ombudsmen, such as the Banking Ombudsman, with defined jurisdictions.

[28] See Mai Chen Public Law Toolbox (LexisNexis, Wellington, 2012) at ch 21 for a discussion of the role of the Regulations Review Committee.

[29] If the courts are involved they may comment on faults with the law and are, therefore, sometimes characterised as having a discourse with parliament about possible changes to the law. Of course, Parliament may or may not take any notice of what the courts say.

[30] New Zealand sometimes has international obligations, such as the process under Article 33 of the Convention on the Rights of Persons with Disabilities, where pro-active monitoring is part of the State’s duty under the Convention.

[31] For a good discussion of appropriations and performance reporting see New Zealand Treasury “Putting It Together: An Explanatory Guide to New Zealand's State Sector Financial Management System” (2011) <www.treasury.govt.nz> at 11 and 73 respectively.

[32] See Australian Productivity Commission “Identifying and Evaluating Regulation Reforms” (Commissioned Study, 15 December 2011) at 9 for a discussion of the multiple costs of regulation.

[33] Cabinet Office Cabinet Manual 2008 at [7.8.2].

[34] Cabinet Office Cabinet Manual 2008 at [3.5].

[35] The Parliamentary Counsel Office would expect certain regulatory instruments such as takeover exemptions to have a sunset clause.

[36] The Treasury Regulatory Impact Handbook 2009, Section 2.9 at 16.

[37] See Australian Productivity Commission “Identifying and Evaluating Regulation Reforms”, above n 32 at 93, for a discussion of evaluation methods.

[38] See Victorian Competition and Efficiency Commission “Strengthening Foundations for the Next Decade: An Inquiry into Victoria’s Regulatory Framework”, above n 9, at 135 for a useful discussion of how to undertake evaluation and review.

[39] Similarly, the Legislation Advisory Committee guidelines devote the whole of chapter 13 to the individual right of review, but there is no sustained discussion of whether to include a review of legislation or regulation, by whom, about what, when, how, to whom, and at what frequency. Legislation Advisory Committee “Legislation Advisory Committee Guidelines”, above n 15.

[40] The State Sector and Public Finance Reform Bill introduced into the House in August 2012 in Clause 25 proposes to amend the State Sector Act S32 Principal responsibilities as follows “(1) The chief executive of a department or departmental agency is responsible to the appropriate Minister for... (d) the stewardship of— (i) assets and liabilities on behalf of the Crown ….; and (ii) the legislation administered by the department or departmental agency”.

[41] For a listing of stewardship expectations see Annex One http://www.treasury.govt.nz/economy/regulation/inforeleases/pdfs/reg-2597298.pdf. These include the responsibility to “monitor, and thoroughly assess at appropriate intervals, the performance and condition of their regulatory regimes to ensure they are, and will remain, fit for purpose.”

[42] State Services Commission “Performance Improvement Framework” (17 December 2012) <http://www.ssc.govt.nz/>.

[43] See above n 40.

[44] Treasury (2013) Regulatory Systems Paper Two: Improving New Zealand’s Regulatory Performance Annex Two <http://www.treasury.govt.nz/economy/regulation/inforeleases/pdfs/reg-2597298.pdf>.

It is also proposed that there be an independent review “of the operation of the proposed disclosure requirements” after 5 years.

[45] See discussion in Mark Prebble With Respect Parliamentarians Officials and Judges Too (Institute of Policy Studies, Wellington, 2010).

[46] Email from George Tanner (former Chief Parliamentary Counsel) to Derek Gill regarding the maintenance of laws (20 June 2011).

[47] Peter Mumford Enhancing Performance-Based Regulation: Lessons from New Zealand’s Building Control System (Institute of Policy Studies, Wellington, 2011).

[48] Daniel Gervais “The Regulation of Inchoate Technologies” (2010) 47(3) Houston Law Review 665.

[49] Broadly, the precautionary principle allows regulation of potential harm out of precaution, when full scientific evidence of harm or likely harm may not be agreed upon. For a comprehensive discussion see Jonathan B Wiener, Michael D Rogers, James K Hammitt and Peter H Sand The Reality of Precaution: Comparing Risk Regulation in the United States and Europe (RFF Press, Earthscan, 2011).

[50] See NZIER “Cost of Compliance” (Report to the Ministry of Agriculture and Forestry, November 2001); NZIER “Appropriate precautions? A Scoping Study of the Impacts on the Business Environment of the Hazardous Substances and New Organisms Act” (Report to the Ministry of Economic Development, June 2006); and Waitangi Tribunal Ko Aotearoa Tēnei: A Report into Claims Concerning New Zealand Law and Policy Affecting Māori Culture and Identity, Te Taumata Tuarua (Wai 262) at Vol 1, Ch 2.

[51] John W Kingdon Agendas, Alternatives, and Public Policies (HarperCollins College Publishers, New York, 1995).

[52] Derek Gill “Regulatory Management in New Zealand: What, How and Why?”, above n 5, at 204-207. While there is little known systematically, there is patchy information generated by ‘fire alarms’ such as the pattern of law enforcement.

[53] See Mark Bennett and Joel I Colón-Ríos “Public Participation in New Zealand’s Regulatory Context” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013) 181.

[54] See Australian Productivity Commission “Identifying and Evaluating Regulation Reforms”, above n 32.

[55] ‘In out’ requirements refer to the notion of tit for tat: each new regulation requires that an existing rule be removed.

[56] Participants at the workshop to review an earlier draft of this paper observed that provisions for a review were often included as part of a compromise in the Select Committee hearings. As a result there was a lack of clarity about the purpose of the review and no detailed plan to undertake the review was developed. To be effective, the evaluation strategy needs to be developed as part of the detailed design before the regulations are enacted. Guidance needs to provide a sustained discussion of when to include a review of legislation or regulation, by whom, about what, when, how, to whom, at what frequency?

[57] For an account of the rise and fall of evaluation in Australia see Keith Mackay “The Performance Framework of the Australian Government 1987 to 2011” (2011) 11(3) OECD Journal on Budgeting 1.

[58] State Services Commission Looping the Loop: Outcome Evaluation and Other Risky Feats (State Services Commission, Occasional Paper 7, June 1999).

[59] New Zealand Treasury and State Services Commission “Doing the Right Things and Doing Them Right - Improving Evaluative Activity in the New Zealand State Sector” (State Services Commission, 12 September 2003).

[60] New Zealand Treasury and State Services Commission “Doing the Right Things and Doing Them Right - Improving Evaluative Activity in the New Zealand State Sector”, above n 59, at 12.

[61] Copyright Act 1994, ss 12 and 35.

[62] Trade Marks Act 2002, ss 97 and 97A.

[63] Yeh-ning Chen and Ivan Png “Parallel Imports and Music CD Prices” (SSRN Working Paper Series, January 2004) examined CD prices in ten jurisdictions between 1990 and 1999. In some of these jurisdictions (including New Zealand) parallel importing restrictions had been removed. In jurisdictions that removed parallel importing restrictions CD prices dropped marginally (US $1). David Moore, Michael Volkerling and Bastiaan van der Scheer “MED Parallel Importing Review: impact upon creative industries” (Law and Economics Consulting Group Report, November 2007) examined books, computer software, film, and music. They found little evidence that removing parallel importing restrictions caused harm to sectors affected by parallel importing. See Susy Frankel, Chris Nixon, Megan Richardson and John Yeabsley “The Challenges of Trans-Tasman Intellectual Property Co-ordination” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013).

[64] See the Commerce Commission’s review of the effectiveness of light handed regulation for airports: Commerce Commission Report to the Ministers of Commerce and Transport on how effectively information disclosure regulation is promoting the purpose of Part 4 for Wellington Airport (Commerce Commission, 8 February 2013).

[65] See introduction to this volume for a discussion of the problems of principles and guidelines.

[66] For a discussion of how formalised evaluation may lead to elements of regulatory ritualism see the paper Susy Frankel and John Yeabsley, “Features of the Uniqueness of New Zealand and their Role in Regulation”, in this volume.

[67] In an address to the NZLS Family Law Conference the Principal Family Court Judge Peter Boshier stated that “the Family Court now stands as one of the country’s busiest, as one of its most important and as one of its most credible”: Peter Boshier “The Three Changes I Would Most Like to See Occur in the Next Three Years” (speech to the NZLS Family Law Conference, Auckland, 17–18 September 2009). See also the discussion in B D Inglis New Zealand Family Law in the 21st Century (Brookers, Wellington, 2007) at 90-91.

[68] Michael Taggart "Introduction to Judicial Review in New Zealand" [1997] JR 236 at 237, footnote 14 quotes Taylor providing an estimate of 84 judicial review cases decided in 1984; see Graham Taylor, "May Judicial Review Become a Backwater?" in Michael Taggart (ed) Judicial Review of Administrative Action in the 1980s: Problems and Prospects (Oxford University Press, Auckland, 1986) at 153.

[69] See The Judge Over Your Shoulder: A Guide To Judicial Review of Administrative Decisions (Crown Law Office, Wellington, 2005).

[70] See the discussion of the factors identified by Peter Cane and Leighton McDonald Principles of Administrative Law (Oxford University Press, Melbourne 2008) at 330-331 that increase the impact of administrative law review by courts on bureaucratic behaviour.

[71] Cane and McDonald, above n 70.

[72] This section of the paper benefited greatly from a discussion with the Chief Ombudsman and her staff and from a conference paper: Beverley Wakem “Achieving Administrative Justice and Procedural Fairness in Ombudsman Investigations” (paper presented to the Inaugural ANZOA Conference (Australian and New Zealand Ombudsman Association), Melbourne, 2008) <http://www.ombudsman.parliament.nz/newsroom/item/achieving-administrative-justice-and-procedural-fairness-in-ombudsman-investigations>.

[73] Gary Banks Challenges of Evidence-Based Policy-Making (Productivity Commission and Australian Public Service Commission, 2009).

[74] See the companion paper Daniel Kalderimis, Chris Nixon and Tim Smith “Certainty and Discretion in New Zealand Regulation”, in this volume.

[75] Mancur Olson The Rise and Decline of Nations: Economic Growth, Stagflation, and Social Rigidities (Yale University Press, 1982).

[76] Petra Butler “When is an Act of Parliament an Appropriate Form of Regulation? – Regulating the Internet as an Example”, above n 20.

[77] Winston Harrington Grading Estimates of the Benefits and Costs of Federal Regulation: A Review of Reviews (Resources for the Future, 2006). See also Winston Harrington, Richard D Morgenstern and Peter Nelson “On the Accuracy of Regulatory Cost Estimates” (2000) 19 Journal of Policy Analysis and Management 297.

[78] Andrei Shleifer Efficient Regulation (National Bureau of Economic Research, NBER Working Paper No. 15651, 2010).

[79] For a discussion of how formalised evaluation may lead to elements of regulatory ritualism see Susy Frankel and John Yeabsley “Features of the Uniqueness of New Zealand and their Role in Regulation” in this volume.

[80] See discussion in Petra Butler “When is an Act of Parliament an Appropriate Form of Regulation?” above n 20.

[81] Derek Gill “Applying the Logic of Regulatory Management to Regulatory Management in New Zealand” in Susy Frankel and Deborah Ryder (eds) Recalibrating Behaviour: Smarter Regulation in a Global World (LexisNexis, Wellington, 2013) 559.

[82] Derek Gill “Applying the Logic of Regulatory Management to Regulatory Management in New Zealand”, above n 81.

[83] See also Chris Nixon and John Yeabsley “Voyage of Discovery: How do we Bring Analytical Techniques to State Driven Behaviour Change?”, in this volume.

[84] As an empirical matter, Abusah and Pingiaro report that in Victoria, while sunsetting regulations accounted for 40% of RIAs, they resulted in only 14% of the total gross savings. Nonetheless, the benefits from sunsetting still exceed the costs. S Abusah and C Pingiaro Cost Effectiveness of Regulatory Impact Assessment in Victoria (Victorian Competition and Efficiency Commission, Staff Working Paper, 2011).

[85] Ian Ayres and John Braithwaite Responsive Regulation: Transcending the Deregulation Debate (Oxford University Press, New York, 1992).

[86] See Mark Bennett and Joel I Colón-Ríos “Public Participation in New Zealand’s Regulatory Context”, above n 53.

[87] This proposed double-hatting is similar to the Heads of Profession proposed in the Scott report on Policy Advice. The Committee on the Value of Policy Advice “Improving the Quality and Value of Policy Advice” (New Zealand Treasury Review of Expenditure on Policy Advice, December 2010).

[88] New Zealand Treasury and State Services Commission “Doing the Right Things and Doing Them Right - Improving Evaluative Activity in the New Zealand State Sector”, above n 59.

[89] See the discussion by Peter H Rossi, Mark W Lipsey and Howard E Freeman Evaluation: A Systematic Approach (SAGE Publications, California, 2004) at 411. In addition to direct use for instrumental purposes, there is also indirect or conceptual use of evaluations to influence thinking about issues in a general way, and use in limited ways for symbolic or rhetorical purposes. The discussion suggests that, while there is a wide degree of ‘pessimism’ about direct instrumental use of evaluations, in addition to indirect use there is generally a modest amount of direct use as well.

[90] Ian Ayres and John Braithwaite Responsive Regulation: Transcending the Deregulation Debate, above n 85.

[91] The Canadian Government requires rolling five year evaluation plans and evaluations of both the stock and flow of regulation in its 2007 Cabinet Directive on Streamlining Regulation. Treasury Board of Canada Secretariat Cabinet Directive on Streamlining Regulation (Treasury Board of Canada Secretariat, BT22-110/2007, 2007).

[92] New Zealand encourages agencies to develop the RIS as part of the policy development process but, unlike other jurisdictions, there is no requirement to update the RIS to provide an assessment of the final policy adopted. This results in a ‘version control’ problem as the RIS that is publicly released with the introduction of the legislation can relate to an earlier version of the policy. Given the low extra costs involved, consideration should be given to updating the RIS to provide an assessment of the final policy adopted. Plans for monitoring, evaluation and review could also be prepared as part of this update.

[93] The principle of proportionality applied to reviews of existing regulations would suggest that the burden of any review should be proportionate to the benefits that are expected to result. While beyond the immediate scope of this paper, the analysis in Table 2.1 would suggest that the returns from programmed review would be greatest for those regulations where the benefits are concentrated and the costs are dispersed. While the returns are also likely to be higher on reviews of secondary regulations, which get less legislative scrutiny than primary legislation, the costs of amending secondary regulations are also lower. As noted, it is much easier to generate political commitment to amending ineffective regulations if the details are not enacted in primary legislation.

[94] See the proposals developed by the UK Law Commission Post-Legislative Scrutiny (Law Com Report No 302, 2006), which recommended a suite of changes to increase Parliament’s role in increased scrutiny of legislation after enactment.

[95] This requirement would make it necessary to address what the focus of any review should be. The experience with Consistency 2000, a major review of legislation for compliance with Human Rights Act principles, suggests big reviews are extremely resource intensive but enable a focus on a set of principles rather than one particular discrete piece of regulation.

[96] US experience is instructive. For example, Robert W Hahn and Paul C Tetlock “Has Economic Analysis Improved Regulatory Decisions?” (2008) 22(1) Journal of Economic Perspectives 67 found ‘little evidence’ that evaluations of regulatory decisions over a number of decades had had a ‘substantial positive impact’.

[97] See Joel I Colón-Ríos “Experimentation and Regulation”, in this volume.

[98] Australian Productivity Commission “Identifying and Evaluating Regulation Reforms”, above n 32, at LII.

[99] Thanks to Julie Melville at PCO for her help in identifying potential key words and a number of the statutes containing review provisions, and to Nigel Salmons for his research assistance with this annex.