Data Project Background

Timeline and Board Actions

July 2013 (Tab 11)

Board updates data manual development timeline and processes.

March 2013 (Tab 3)

Presentation to the Board on programs’ reports from data collected during the 2011-2012 academic year and on a draft tool for assessing the data quality of report submissions.

Board receives a Data Project status update, conducts a work session around Goal 3, and approves moving forward with the annual data cycle plan.
Board provides an update on the progress of the plan outlined in the working paper, “Data Systems in Support of Program Accountability and Continuous Improvement”.
Board approves the working paper, “Data Systems in Support of Program Accountability and Continuous Improvement”.
Board examines states that have developed frameworks or limited state reports / report cards of key indicators of program quality.

2010

PESB launches the preparation program data site and adopts changes to preparation program Standard 2 requiring programs to comply with requirements of the data memorandum of understanding.

March 2009

Board moves to begin the process of redesigning the current preparation program accreditation system.


Data Systems in Support of Accountability and Continuous Improvement 

Excerpts (pp. 2-7) from Data Systems in Support of Accountability and Continuous Improvement (aka 'Metrics White Paper - Draft') describing the intent of the Data Project. The full draft was presented to the PESB at its November 2011 regular meeting.

PESB Directive for Change

In 2010, the Board directed staff to launch a work plan toward the redesign of our system of preparation program approval, review, and support. In addition, the PESB adopted revisions to Standard II, reflecting a new expectation that programs participate in the collection of data requested by the PESB. This expectation includes the requirement that all institutions sign a data-sharing Memorandum of Understanding (MOU) with the PESB, the purpose of which is to:
  • Evaluate and determine, pursuant to RCW, whether each institution’s program is in compliance with the program approval standards of PESB WAC;
  • Develop guidance for the PESB and programs related to planning and offering of educator preparation programs; and
  • Compile aggregate non-personally identifiable information for the general public.

That same year, the PESB launched the preparation program data project, engaging program leaders as well as staff within programs with key responsibilities related to program level data collection, organization, and review.

This commitment to redesign is further reflected in the PESB’s 2011-2015 strategic plan, which states as goal and outcome:

“Establish transparency and public accountability for preparation program quality and program approval that is clearly linked to the success of program completers, as measured by student-based evidence.”

“By 2015, PESB teacher and principal preparation program oversight and approval will incorporate measures of educator effectiveness, including aggregate results from the statewide evaluation system and the Washington Teacher Performance Assessment”...

What’s the solution?

A system that relies on data collection, submission, and review as a basis for regulatory decisions will depend on well-kept and timely records.

Such a system contains records that must:
  • be accurate;
  • be consistent across institutions (compare apples to apples);
  • be quickly accessible by programs (reports);
  • be countable; and
  • reflect the practices of the program.

New System – Our Vision 

Data systems suggested by the literature and emerging in other states are a significant departure from the “reporting” frameworks of the past. A common question reflecting the past model is how each data element being collected will be used; essentially, will it “count”? Our goal is to develop an accountability system based on a common framework of evidence used across programs. Policy supporting this system would distinguish between the outputs we will collect for program approval and those that support meaningful dialogue and action resulting in program improvement.

Our current system of program review uses the same questions and requests the same range of data from every program at the same set points in time, regardless of indicators of a program’s current or past performance. This design weakens any opportunity to guide systemic inquiry or to provide assistance to programs that are struggling. The purpose of building a framework of evidence is to support both program-level and state-level inquiry. For example, a program may collect information about recruiting practices that increase the diversity of the program’s field staff. This information could be used to determine how this change influences the program completion of candidates with diverse backgrounds. At the same time, the PESB may look to research outcomes to discuss changes to the effectiveness measures being reviewed for approval. Such a system is intended to be responsive to the questions that arise as programs continuously review and inquire about their practice, and it will mean that the answers to program review questions change over time.

The PESB does want to eliminate unnecessary reporting, but a data system useful for ongoing program improvement as well as accountability cannot just be about streamlining reporting. It must move beyond the current system, where the orientation focuses on “how will this data be used / reported?” and “how will it ‘count’?” rather than “how will we use this data to improve our program?” and “how can we demonstrate to PESB our program’s successes?” As a result, more data will be collected by and from programs over time. The transition to a new system is significant, but over the long term programs and PESB will use the key indicators of this system to: drive deeper inquiry into programs that are demonstrating strong candidate outcomes; allow greater regulatory flexibility for programs demonstrating solid performance on key indicators; and provide information to the general public.

Program standards will continue to serve as expectations by which programs are evaluated, but over time the evaluation tool will be increasingly populated with new metrics of evidence related to the standards, with more data available on an ongoing basis - annually or as requested - rather than presented only as part of program site visits. The diagram below depicts the new inquiry cycles, both in relation to providing the PESB ongoing insight into program quality and how to support its continuous improvement, and in the manner in which we envision the PESB’s cycle of inquiry and that of individual programs naturally intersecting.

How Do We Get There?

Achieving this vision means a deliberate focus on both:

1. Deciding on the data elements in the new framework and establishing a system for incorporating newly defined measures over time

PESB has expectations for programs and data, but many of these expectations are implied rather than explicit. For instance, PESB has implied that data surrounding the WEST tests are important, but we have not explicitly outlined what elements should be captured or what role that data plays in the review or approval process. We have implied expectations for other information, such as entrance requirements, “signature” assessments, elements about the clinical experience, and elements to be collected and maintained about a candidate’s success after completion. This data is largely unstructured and as such does not lend itself well to research or to the evaluation of programs or systems.

2. Establishing new data systems to produce the data

Explicitly defined and structured data - i.e., a clear specification of what to collect and how it will be used to inform the approval process - can be more easily collected, stored, and accessed, and can support research to define best practices as well as evaluation based on predictable indicators and clearly defined outcomes. But data can only be collected, stored, and accessed when appropriately structured and properly maintained.
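
As a minimal, purely illustrative sketch of what “explicitly defined and structured data” could look like in practice, the Python example below expresses a single candidate-level record as a shared schema that every program would populate in the same way. The field names, WEST-related elements, and completion indicator are hypothetical examples, not PESB-defined data elements.

from dataclasses import dataclass
from typing import Optional

# Hypothetical, illustrative schema for one candidate-level record.
# The actual data elements would be those defined in the PESB framework
# of evidence; the names and fields here are examples only.
@dataclass
class CandidateRecord:
    institution: str                 # reporting institution
    academic_year: str               # e.g. "2011-2012"
    endorsement_area: str            # program endorsement area
    west_b_passed: Optional[bool]    # basic skills test result, if taken
    west_e_passed: Optional[bool]    # endorsement test result, if taken
    completed_program: bool          # whether the candidate completed

def completion_rate(records: list[CandidateRecord]) -> float:
    """A countable, comparable indicator: share of candidates who completed."""
    if not records:
        return 0.0
    return sum(r.completed_program for r in records) / len(records)

Because every program would report against the same structure, an indicator such as completion rate can be computed identically across institutions and compared apples to apples.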

Moving from protocols to a framework of evidence

The move to a framework of evidence resets the program approval process. It replaces infrequent program reviews with annual data submissions, and it replaces examples of evidence with common data points for all programs. These data points will describe indicators to assess program effectiveness in selection, recruitment, formalized learning opportunities, field experience, and placement. Program effectiveness will continue to be based on the standards.

Programs need to develop the capacity to collect, access, and retain this evidence over multiple years. The metrics for meeting each standard will be clearly specified, and programs will be able to determine, at or before each data submission, the degree to which they have made progress toward or met each standard. Any improvements or technical support deemed necessary will be data driven.

While the PESB program reapproval cycle may continue to occur on a five- or seven-year cycle, annual review of data points and trends in results can serve to flag areas earlier in that cycle, at which time PESB program staff can contact those programs for further inquiry into these indications. Such inquiry may result in: a) no action by PESB, i.e., program leadership has established internal responses to address the flag; b) technical assistance from PESB to help the program determine actions to address the flag (such a request could be program initiated or PESB initiated); or c) Board action according to WAC 181-78A-110(3).
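
For illustration only, the kind of annual flag described above could be as simple as comparing a program’s current value for an indicator against its own recent baseline. The indicator, baseline window, and tolerance in this sketch are hypothetical and do not represent PESB thresholds or policy.

# Hypothetical illustration of flagging an annual data point; the indicator,
# baseline, and tolerance are examples only, not PESB-defined rules.
def flag_indicator(history: list[float], current: float, tolerance: float = 0.10) -> bool:
    """Flag when the current value falls more than `tolerance` below the
    average of the program's prior annual submissions."""
    if not history:
        return False  # nothing to compare against in a first submission
    baseline = sum(history) / len(history)
    return current < baseline - tolerance

# Example: completion rates from three prior annual submissions vs. this year.
prior_years = [0.88, 0.90, 0.86]
flagged = flag_indicator(prior_years, current=0.72)  # True -> staff follow-up

A real flag would be defined per indicator and reviewed by staff rather than acted on automatically, consistent with the inquiry-first responses described above.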

Use of the framework of evidence model standardizes the metrics gathered from each program and provides clear comparison of information across programs for consumers (e.g., potential candidates and funders). The model also customizes the regulatory oversight function of the PESB to the needs of each institution. Examples of this standardized customization include:

● A program’s data submissions do not trigger any flags over the course of its approval period (five or seven years). In such a case, PESB’s reapproval review will look different from the review needed for a program for which a pattern of assistance has been indicated.

● A program makes a change that results in a flag. PESB queries the program about the flag.

○ The program’s responses could include, among others:

a) “we were expecting the data outcome variance due to the programmatic change and this is our plan for response”;

b) “we were expecting a change, but not this one, and can you help us identify assistance about next steps”;

c) “we weren’t expecting any change; now what?”; or

d) “change? what change?”.

○ Each of these program responses to the PESB query could result in different PESB action depending on the program’s status in the review cycle.

● PESB anticipates a number of requests for assistance to support programs’ data collection, review, and submission capacities. Should the reapproval review for a program needing this assistance occur while that program is still building capacity in this area, the reapproval review team may need to include a member with specific data systems experience. By contrast, a program on the same approval timeline but without flags around this indicator would not necessarily require a similarly composed review team.

Such customization of PESB responses to the needs of the program reduces the costs of the reapproval review for both the agency and the program and increases the efficiency and effectiveness of the reapproval process.