Outcome-Based Policymaking Master Post

Written by Alana Dass

1: Evidence-Based, Outcome-Focused Policymaking

1: “Evidence-Based Policymaking: Outcome Monitoring”:

Five Components of evidence-based policymaking: 

  1. Program assessment: Be critical of the programs already in existence: where does government funding go? How effective are the programs? Will the benefits of the program end up compensating for the cost?   

  2. Budget development: Create information networks so that agencies receiving funding report data on how the funding is used and the governments providing funding can access information about the programs they fund (governments should also start demanding evidence-backed proof of program effectiveness from requesting agencies). To establish such a network, comprehensive lists should be made of funded programs along with research-based evaluations of their current effectiveness. When requesting funding from the government, agencies should provide evidence-based proof of program effectiveness, which would encourage them to use current research to inform program development. Once funding is granted, agencies should be held accountable, through a contract with the government, for both implementing the research and using the budget as designated.

  3. Implementation oversight: For the government to establish outcome-based systems with full cooperation of agencies and independent organizations, several steps can be taken to create a smooth implementation: involved agencies should determine the needs of the community and develop plans for evidence-based practices to address said needs; policymakers should supplement implementation by creating supportive policies and processes; the government should offer technical training and education for service providers; and a system for monitoring performance and outcomes should be established as well. It is imperative that implementation plans are designed with practicality such that services can be delivered effectively, even at the beginning stages. Partnering with a research university means that access to current research would be a given; development of evidence-based practices would become substantially easier. Alternatively, governments can seek to create a department or a division devoted to the development of evidence-based practices and oversight of outcome monitoring systems. 

  4. Outcome monitoring: Outcome monitoring allows the government and partnered agencies to track performance data and evaluate program effectiveness based on the outcomes. By integrating a multitude of benchmark assessments at various intervals along the timeline for a plan of implementation, agencies can actively assess the direction their programs are taking. Additionally, with so many opportunities to evaluate program progress, goals can be adjusted and program implementation can be refined. Altogether, data from outcome monitoring can be used by agencies to improve their programs, by the government to allocate funding, by policymakers to create new processes that ease transitions, and through inter-organizational communication to create accountability within agencies. 

  5. Targeted evaluation: For the government to measure the effectiveness of any agency's programs, evaluations must be conducted regularly. Given increased access to information, partnerships, and technology, but limited time, resources, and capacity to place extra responsibilities on existing government agencies, the government could run the necessary evaluations in several ways: training and/or hiring staff to evaluate agencies on program performance; drawing on the knowledge and abilities of partnered service providers; reinvigorating existing data systems and remodeling them for higher utilization; and supporting performance evaluation, specifically impact evaluations, through policies and funding. By analyzing impact evaluations of various programs, the government can further invest in programs that show promising outcomes and expand the reach of certain affiliated initiatives (or cut/reform programs that do not show desired results). 
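As a loose illustration of the comparison at the heart of an impact evaluation, the sketch below contrasts the average outcome of program participants with that of a comparison group. It is a minimal, hypothetical example: the outcome values and groupings are invented and do not come from any of the reports summarized here.

```python
# Hypothetical sketch of an impact-evaluation comparison: contrast the average
# outcome of program participants with a comparison group that did not receive
# the program. All numbers are invented for illustration.

participant_outcomes = [0.72, 0.68, 0.81, 0.75, 0.70]   # e.g., employment rate by site
comparison_outcomes = [0.61, 0.64, 0.59, 0.66, 0.62]    # similar sites without the program

def mean(values):
    return sum(values) / len(values)

estimated_impact = mean(participant_outcomes) - mean(comparison_outcomes)
print(f"Estimated program impact: {estimated_impact:+.2%}")

# A real impact evaluation would also need to account for selection effects,
# confounding factors, and statistical uncertainty before informing funding decisions.
```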

A: “The Role of Outcome Monitoring in Evidence-Based Policymaking”:

Evidence-based monitoring systems are effective ways for state governments to achieve priorities agreed upon by the state, stakeholders, and the public. To create efficient and self-sustaining performance-monitoring systems, policymakers need outcome tracking that lets them observe the direct results of any given program. Being able to assess outcome data for government programs allows policymakers to ask critical questions about the effectiveness and utility of those programs; additionally, policymakers can develop more precise, streamlined program criteria, implementations, and goals to achieve a desired outcome. An added benefit of data monitoring is that government programs must detail the exact uses of grant money, which creates accountability on the agencies' part since the data can be seen by other agencies and policymakers. To utilize the data from outcome monitoring, policymakers and agencies can evaluate programs through these four steps: 

  1. Decide on pragmatic outcomes as well as methods of measurement and goal checkpoints within the implementation.

    1. States need to maintain a set of performance-measuring metrics that are reflective of the current times. Plans of implementation and realistic goal points should also keep demographic and/or regional differences in mind to achieve equitable, comparable results across the board.

    2. To maintain a well-informed program, the state must use current research as it is published to update relevant aspects of the systems. 

    3. Determining the long-term effectiveness of any given program requires a substantial amount of running time if there are no other measures in place to gauge performance. To avoid wasting time or resources on a program that turns out to be ineffective, checkpoints should be established at smaller intervals within implementation, both to measure whether the program is meeting its performance criteria at the points it is expected to and to make predictions about its future performance. 

    4. In line with maintaining an updated set of evidence to inform implementation, goals should also be updated to reflect the long-term interests of the state. Applying logic models to loosely evaluated programs can establish connections between program features and the intended outcome, if any exist.

    5. To further evaluate if shorter-term goals are being met, “benchmarks” are suggested. In other words, to objectively measure performance, program data at a goal point should be compared against either other states with similar programs (and populations) or standards and best practices approved by the federal government and/or developed by research-based advocacy organizations.

  2. Collect, publish and evaluate key information about program performance.

    1. Where possible, states should separate data into smaller categories (e.g., "geographic region, provider, or population characteristics") so that differences within categories can be identified and addressed (a minimal sketch of this kind of disaggregation and benchmark comparison appears after this list).

    2. Comparisons of data can be done across different jurisdictions (levels of government), providers (stakeholder organizations), and population groups (demographics).

  3. Utilize outcome data further by incorporating it into any pertinent projects. 

    1. For policymakers to use the data effectively, they must be able to meet regularly and often enough with relevant parties to discuss the nature of the data and the needs of the programs. 

  4. Increase communication between other programs/agencies using evidence-based monitoring and policymaking systems to share information and discuss best practices. 

    1. While performance data is important and reflects many of a system's moving parts, it cannot be the only set of measurements used in the decision-making process. To make the most informed decisions, policymakers should be able to consider the nature of all programs involved as well as the importance and function of these programs to the communities they serve. 

    2. Performance data can be used to identify programs with subpar performance, gauge a program's effectiveness (measured against related evidence-based outcome research), develop budget schemes (combining outcome data with program evaluations and cost-benefit analysis to determine grant allocations), monitor implementation, and evaluate outcomes. 
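As a loose illustration of the disaggregation and benchmark comparison described in the list above, the sketch below groups hypothetical outcome data by region and flags segments that fall short of a benchmark. The regions, rates, and benchmark value are invented and are not drawn from any state's data.

```python
# Hypothetical sketch: disaggregate an outcome measure by region and compare
# each segment against a benchmark (e.g., a federal standard or a peer state).
# All values are invented for illustration.

outcome_records = [
    {"region": "North", "rate": 0.58},
    {"region": "North", "rate": 0.62},
    {"region": "South", "rate": 0.41},
    {"region": "South", "rate": 0.45},
]
benchmark = 0.55  # assumed target rate

by_region = {}
for record in outcome_records:
    by_region.setdefault(record["region"], []).append(record["rate"])

for region, rates in by_region.items():
    average = sum(rates) / len(rates)
    status = "meets benchmark" if average >= benchmark else "below benchmark"
    print(f"{region}: {average:.0%} ({status})")
```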

Example of Identifying Issues: 

New Mexico’s Legislative Finance Committee [LFC] and governor’s budget office receive annual performance reports from all agencies. Through LFC’s analysis, the public and LFC’s own constituency are given information on program performance (which includes aspects of performance that may be either exceptional or underwhelming). In cases where a program’s performance may be lacking, the LFC will contact the agency responsible to identify the root cause of the program’s issues. To ensure accountability once the agencies and the LFC agree on how to improve a program, the agencies will publicly testify to explain their performance reports and plan for improvement.

Example of Formulating Improvement Plans: 

Colorado’s Department of Human Services [CDHS] uses C-Stat, a performance management system, to monitor its five divisions through over 75 performance measures. Leaders from each of the five divisions meet with agency leaders on a monthly basis to analyze C-Stat data, identifying both well-performing practices and issues. Once the data is analyzed, the groups come up with ideas and strategies meant to improve outcomes. 

Example of Allocating Resources:

Idaho’s Department of Health and Welfare, Division of Public Health, created the initiative “Get Healthy Idaho: Measuring and Improving Health,” which is the state’s health assessment and improvement plan. The priority of the initiative is to simultaneously reduce rates of diabetes, tobacco use, and obesity and increase public access to medical care. By partnering with public health organizations and health care agencies, the state has expanded public access to services and uses data from the agencies to track adherence to the priority. The Division of Public Health’s Population Health Workgroup uses population data on the Idaho Leading Health Indicators to determine the success of the initiative and to inform decisions about future budget allocation. 

Example of Tracking These Plans:

Connecticut's Department of Public Health developed a performance-monitoring system known as “Healthy Connecticut 2020” to aid in implementing the priorities listed in the state health improvement plan [SHIP], which was created in conjunction with over 100 partnered organizations. The general goal of the SHIP is to promote health and increase disease prevention measures. Healthy Connecticut 2020 contains a publicly accessible dashboard of organization performance data. This data is used in occasional meetings between “action teams” to check the state’s adherence to the SHIP goals. When any deviation from expected performance is observed, the action teams devise new plans to correct it or, where there is no deviation, to further accelerate the state’s achievement of the goals.  

2: The Promise and Peril of An “Outcomes Based” Mindset: 

Stanford Social Innovation Review 2016: Patrick Lester, director of the Social Innovation Research Center, draws a parallel between outcome-based policymaking and a “pay-for-success” strategic motif observed at various points in post-1930s politics. He lists several causes and reasons for the repeated failures, or lack of results, associated with outcome-based policymaking. The first is that the data produced by outcome-based programs is often skewed by factors such as socioeconomic disparities, demographic differences, and employment rates. With so many factors at play behind a single outcome, Lester argues that determining a single, actual cause for an outcome would be impossible. Combined with government incentives for providers, not knowing the actual cause behind an outcome creates the potential for misleading program outcomes (“inappropriately rewarding or punishing providers for factors beyond their control”). Continuing with his point about incentivization, Lester points out that because greater positive outcomes bring greater incentives for providers, providers are motivated to work with (advantaged, privileged) populations that are “easier to serve.” In doing so, they neglect the communities and populations that actually need their services. Similarly, Lester highlights that when backed into a corner by the difficult reality of solving community issues, some organizations are willing to falsify outcome data in order to meet the pressure of performance standards. He adds that some organizations devote all their resources and efforts to achieving a single outcome objective while neglecting others, and that other organizations produce unsustainable, transient outcomes. 

2: Pew Trusts Models 

Pew-MacArthur Results First Initiative: Models of Outcome-Based Systems: 

a) Maryland: Montgomery County Department of Health and Human Services [DHHS]

  • New guidance program for youth-mentoring services: [2018] The DHHS created a standardized guideline for all county youth-mentoring services to adopt. Standardization allows outcomes to be monitored according to evidence-based results; applying these standards to every county service through the guideline is the DHHS’ way of producing consistent outcome monitoring across all services of this type. 

  • Basic components of the guideline that emphasize evidence-based practices: 

    • Definition of Important Terms: The DHHS developed a definition of what constitutes “mentoring” and further distinguished between different types of mentoring programs. 

    • Using Current Research to Inform Program Development: The National Mentoring Partnership and the National Mentoring Resource Center [NMRC] both have models of effective mentoring services that use evidence-based practices and outcome monitoring. The DHHS compared the models from these organizations with the county services and revised components of those services to match the definitions it devised and the standards of the models.

    • Design and Implementation of Measurements and Scales: The NMRC also had metrics and measurements used to gauge national outcomes; the DHHS reformatted some of these metrics to better fit the needs of the county services. 

    • Outcomes are measured through a set of tools that are outlined in the DHHS Guidance Document.

Maryland Department of Budget and Management [MDBM]: 

  • New system for performance measurement and strategic planning, Managing for Results [MFR]: [1999] This system helped agencies in the state government plan budgets and measure the effectiveness of their programs using a standardized metric for outcome monitoring. Each agency was responsible for designing its own implementation plans, along with measures of the plan’s efficiency and goalposts throughout the intended term. [2017] The MFR system was updated to reflect modern changes and the progress of the agencies so that it could keep pace with modernization and still be useful. The collection of measurement documents was condensed to contain only relevant metrics and guidelines for data collection, which the budget office monitors for accuracy through audits during the collection process. The final collection of measurement data is reported annually in the budget book published by the state for comparison. Once the data is processed, budget office analysts contact the organizations and offer research-based advice on outcome monitoring (each agency decides whether or not to implement the advice).

b) Minnesota: Minnesota Management and Budget Agency [MMB]

  • Online Dashboard: [2018] The MMB established an online dashboard to monitor outcomes for 40 different measurements in eight target areas (priorities) that the state wants to improve. The dashboard encourages interagency communication and provides a convenient, well-maintained site for policymakers to gather data about the functionality of government organizations and agencies. All data points in the dashboard are collected from across the state and are broken out by demographic and regional distinctions. To supplement the dashboard and its data, MMB also provides training to staff of other government agencies on how to utilize evidence-based data in reforming their budgets. Additionally, MMB convenes regularly with the heads of these agencies to reassess priorities and discuss improving outcomes for existing priorities. MMB has also begun encouraging use of the collected data beyond outcome monitoring, including forecasting future outcomes based on data trends. 

    • Results for Children Dashboard: MMB and several other organizations assembled a dashboard to monitor specific populations of children from childhood up until postsecondary education. The data informed policymakers and stakeholders in other government agencies about the needs of these populations, the utilization of resources for the issues they face, and the outcomes. 

    • Homework Starts with Home: [2018] Derived from two programs meant to reduce homelessness for children in public or charter schools, this initiative utilizes data gathered by those two programs. More specifically, it builds on the finding that more stable housing for children is generally associated with better attendance and higher family income. On that premise, the initiative aims to continue reducing child homelessness; the University of Minnesota will evaluate proposals, while Minnesota’s Education Department will be in charge of implementing the proposed plans. Proposals themselves are submitted through providers at the request of partnered state agencies and nongovernmental/nonprofit charities.

    • Family Home Visiting: Due to its increasingly positive outcomes, Minnesota’s family home visiting program was made a priority for the state. Subsequently, the program was allotted additional resources in the 2018-19 fiscal year (a $12 million increase) to invest in services for pregnant and parenting teenagers. By expanding the available resources, the program intends to reduce the number of children born with low birthweight, encourage more mothers to breastfeed through eight weeks, and lower rates of postpartum depression.  

c) Illinois: Illinois General Assembly / Budgeting for Results Commission 

  • Budgeting for Results: [2010] The Illinois Budgeting for Results Commission [BRC] implemented its namesake outcome-based budgeting system after a law was passed requiring agency budgets to be evaluated, decided, and allocated based on the importance/priority of their components rather than on funding received during the previous year. The Budgeting for Results system monitors agencies’ programs to provide effectiveness data to policymakers so that budget allocations can be based on observed achievement of desired outcomes. The overall intention is to optimize Illinois’ budget allocations using outcome-based standards tied to chosen statewide priorities. Data is gathered and evaluated from more than 70 organizations, including universities, state agencies, commissions, and boards, to inform policymakers in their budgeting decisions. 

    • The Budgeting for Results tool comprises three features: the Illinois Performance Reporting System [IPRS], the Pew-MacArthur Results First cost-benefit analysis, and the State Program Assessment Rating Tool [SPART]. Through the IPRS, data from over 400 state organizations and programs is collected and outcomes are tracked. The Pew-MacArthur tool uses data from within the state to identify programs that use evidence-based practices and then calculates a return on investment in those programs. The SPART tool is used to evaluate a program’s performance and its level of adherence to the intended implementation of best practices. 

[2018] The Budgeting for Results Commission, in conjunction with the Corrections Department, implemented the system in Illinois’ criminal justice facilities for adults. Together, the two organizations compiled a comprehensive list of currently funded programs aimed at reducing the recidivism rate for people held in state prisons. Using that list with IPRS, performance measures were streamlined for each organization to emphasize the statewide desired outcomes. Additionally, national data on recidivism programs was obtained from the Results First Clearinghouse Database and compared to the in-state programs to determine which ones matched national programs with strong evidence of reducing recidivism. Once matching state programs were identified, a cost-benefit analysis was used to determine the rate of return for potential investments in them (in this case, the return on six programs that matched effective ones). Together, SPART, IPRS data, and the cost analysis were used to evaluate program effectiveness. The reports generated from the data and analysis suggested implementing a stronger outcome-based program design; in other words, precise goals with planned checkpoints.   
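The sketch below is a rough, hypothetical illustration of the matching-plus-return-on-investment step described above: state programs are matched against a list of nationally evidenced program models, and a simple benefit-to-cost ratio is computed for the matches. The program names and dollar figures are invented and are not taken from Illinois' actual analysis.

```python
# Hypothetical sketch: match state programs to nationally evidenced models,
# then compute a simple benefit-to-cost ratio for each match.
# All names and dollar figures are invented for illustration.

national_evidence_base = {"cognitive_behavioral_therapy", "vocational_training"}

state_programs = [
    {"name": "cognitive_behavioral_therapy", "annual_cost": 500_000, "estimated_benefit": 1_200_000},
    {"name": "vocational_training", "annual_cost": 300_000, "estimated_benefit": 540_000},
    {"name": "unevaluated_pilot", "annual_cost": 200_000, "estimated_benefit": None},
]

for program in state_programs:
    if program["name"] not in national_evidence_base or program["estimated_benefit"] is None:
        print(f"{program['name']}: no strong national evidence match; flag for evaluation")
        continue
    ratio = program["estimated_benefit"] / program["annual_cost"]
    print(f"{program['name']}: benefit-cost ratio {ratio:.2f} per dollar invested")
```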

d) Washington:  Washington Governor Jay Inslee 

  • Results Washington: [2013] A performance-monitoring system and a corresponding administrative team were instituted by Governor Inslee to create conditions for self-generating improvement across all government agencies. Results Washington was formatted to monitor five state priorities using more than 190 performance measures: a rejuvenated economy, a strong education system, healthier and safer communities, efficient and responsible government, and increased sustainable energy and environmental protection. Inslee created a system in which Results Washington serves as a centralized source of data shared between community leaders and government agencies, allowing interagency information to be constructively transferred and discussed.

    • Within each of the five priorities, target demographics are tracked and regular meetings between government agencies and community leaders occur to discuss the advancement of program initiatives and overall progress. Feedback from meetings is used for the improvement of services both from the government’s end and within the community. 

      • Community input integrates real-life context and a direct human element into the purpose of the data. At the same time, the data tracked for government agencies is constantly reviewed so that the state priorities remain the focus of every performance measure. While multiple agencies may be working on a single state priority, the distributed data also contains indicators (generated by the administrative team) of adherence to the priority, so agencies can monitor more specific demographic breakdowns within the data.

    • Each priority is divided into smaller sub-goals that specific government agencies and/or partnered community organizations can collaboratively monitor and manage. 

Each month, “results review” meetings are held for community stakeholders, government agencies and the governor to formulate holistic action plans for any given issue. The data from Results Washington informs the process by illuminating progress on a state priority as well as further issues to address, potential solutions, and the status of current initiatives. With the improvements to data sharing and interagency communication from 2013 to 2016, Washington State experienced an approximately 72% relative increase in vaccinations provided to toddlers (from roughly 35% to roughly 60% vaccinated) and a rise in the statewide high school graduation rate from 76% to 80%. Within fiscal year 2017, “37 percent [of state agencies] improved the quality of services and 28 percent reduced or avoided costs totaling $61 million.” 

e) Colorado:  Office of State Planning and Budgeting [OSPB]

  • Lean performance improvement program: [2011] The Lean performance improvement program uses nine activities to promote self-perpetuating improvements in the efficiency of state programs, the elimination of wasted resources, and increased returns on government investments. Since its implementation, Lean has refined the efficiency of state government systems, halving the time needed to process admissions to public mental health hospitals and to receive a teaching license from the Department of Education. 

  • State Measurement for Accountable, Responsive and Transparent Government Act [SMART]: [2013] This policy is a continued implementation of outcome-based practices, as it requires every department to outline measurable, attainable goals and an action plan as part of a larger objective. An online dashboard was also created to publicly monitor each department’s progress on improving the state’s five priorities: “health, economic and infrastructure development, environment and energy, workforce development and education, and quality government services.” In establishing this online dashboard, policymakers are able to see the exact uses of each department’s budget, creating transparency and accountability. 

  • Results First: [2014] Legislators strengthened the outcome-based focus of the Lean and SMART programs by implementing the Results First approach. In following Results First, the government can find and invest in cost-effective programs or policies through thorough research and cost-benefit analysis. A team consisting of the heads of Colorado’s legislative and executive branches (and two full-time staff in the OSPB paid for by the branch administrators) was created to oversee the implementation of Results First. To supplement Results First, OSPB created an evidence-based policy initiatives team [EBPI] to carry out research and data-based projects that advance state priorities.  

    • The EBPI team created a set of inventories to track “program description, location, cost, and number of clients served, and assesses it against the national evidence base” for state programs. The Results First cost-analysis model is used to predict and publish the government’s possible return on investments for evidence-based programs of interest. Through these tools, Colorado has been able to invest in many evidence-based programs that provide desired outcomes. 

    • Office of Community Corrections [OCC]: The OCC’s former program director used Results First to analyze the programs running at the time and received a report that the programs were virtually ineffective. This prompted a change in the OCC’s focus: cognitive behavioral therapy (CBT), a suggestion from the Colorado Commission on Criminal and Juvenile Justice, was considered as a replacement. Results First showed CBT to have a high return on investment, which gained the support of legislators, who approved funding for an additional measurement tool to monitor the CBT program’s alignment with other evidence-based practices in community correctional settings. 

    • For programs to receive funding, though, Results First is used to gather information on the evidence behind any particular model, while cost analysis is used to discern the projected long-term earnings or savings for the government. Later on, the OSPB added an evidence requirement (e.g., effects on outcomes or estimated return on investment) for program approval; this meant that data and evidence would have to be integrated into any new proposal in order to secure funding for new investments. 

f) Santa Barbara:  Santa Barbara County Probation Department

  • Program Funding Redesign: [2016] The Santa Barbara County Probation Department [SBCPD] wanted to overhaul the funding process for its programs so that stronger evidence of likely program success would be required before funding is granted. To start the overhaul, the Community Corrections Partnership [CCP], which evaluates all funding requests, and the SBCPD would have to coordinate on integrating more data-driven models; to create accountability for providing evidence-based proposals, the SBCPD formulated a tool that the CCP can use to screen for evidence-based validity. 

    • SBCPD created the tool as an extension of the partnership with the Pew-MacArthur Results First Initiative. It feeds into the entire process the Probation Department established through its creation and maintenance of county-wide inventories of current adult criminal justice programs. The inventories are routinely subject to cost-analysis assessments and evaluations for evidence of effectiveness. The documented evaluations help to create a continual set of data to inform future budgeting decisions. 

Funding requests for initiatives are processed through a criminal justice funding opportunity form comprising these elements: “project description, target population, individual needs to be addressed to reduce the risk of criminal behavior, anticipated measurable outcomes, evidence of program effectiveness, [and] the program’s cost-benefit ratio, where possible” [format altered]. Once the CCP receives the form, it is reviewed at a monthly meeting in which the CCP suggests improvements to the provided information and assesses whether the proposal qualifies as evidence-based. Aside from constructive criticism, the CCP also suggests grant or funding applications to the board. This process ultimately helps the county submit stronger applications for federal funding due to the evidence-backed nature of its arguments.
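As a loose illustration of the kind of screening step such a form implies, the sketch below checks a hypothetical funding request for the elements listed above. The field names and the sample request are invented for illustration and do not reproduce the county's actual tool.

```python
# Hypothetical sketch: screen a funding request for the elements a
# criminal justice funding opportunity form might require.
# Field names and the sample request are invented for illustration.

REQUIRED_ELEMENTS = [
    "project_description",
    "target_population",
    "individual_needs_addressed",
    "anticipated_measurable_outcomes",
    "evidence_of_effectiveness",
]
OPTIONAL_ELEMENTS = ["cost_benefit_ratio"]  # "where possible" in the form

def screen_request(request: dict) -> list[str]:
    """Return a list of missing required elements (empty means complete)."""
    return [field for field in REQUIRED_ELEMENTS if not request.get(field)]

sample_request = {
    "project_description": "Reentry employment support",
    "target_population": "Adults on probation",
    "individual_needs_addressed": "Job readiness, housing stability",
    "anticipated_measurable_outcomes": "",  # left blank by the applicant
    "evidence_of_effectiveness": "Matches a nationally evaluated model",
}

missing = screen_request(sample_request)
print("Missing elements:", missing or "none; forward to monthly CCP review")
```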
