
Standard 2 (2010) - Accountability and Program Improvement

Approval: November 2010

Implementation: January 2012

In keeping with the mission to prepare educators who demonstrate a positive impact on student learning, the following evidence shall be evaluated to determine whether each preparation program meets the program approval standards of WAC 181-78A-220(2).

The site visit team will determine a finding of “met,” “unmet,” or “exemplary” for each of the two components of Standard 2:

            2.A.  Assessment system

            2.B.  Participation in memorandum of understanding

Each component contains one or more criteria that collectively constitute the expectations for that component. Descriptive language for “met,” “unmet,” and “exemplary” on each criterion has been provided for guidance; however, the team will report findings only for the overall component.

Ratings in Standard 2:

Met: In judging a standard to be “met,” the site visit team is indicating that there is clear and convincing evidence. “Clear and convincing” means that:

  • The evidence is credible; i.e., it bears a clear relationship to the standards being assessed
  • The evidence is representative of the program (e.g., evidence from an elective course taken by a small minority of candidates would not, by itself, be persuasive)
  • The evidence comes from multiple sources
  • Where appropriate, the evidence includes examples of candidate-based and student-based evidence
  • The evidence, taken as a whole, would persuade a reasonable person that the standards are being met

These criteria do not assume that every element of the standards is present to an equal extent. There may be areas of weakness within a standard that do not preclude an overall rating of “met.” However, those areas of weakness should be identified by the team in the narrative and may also lead to a recommendation.

Unmet: In judging a standard to be “unmet,” the site visit team is indicating that there is significant doubt that the program meets the specified criteria.

The evidence may fall short for a number of reasons:

  • It is not credible; i.e., it does not seem closely related to the standards
  • It is sporadic or fragmentary, or may come from a single source 
  • It shows no connection to a positive impact on candidates
  • Taken as a whole, it would leave significant doubt that the standards are being met 

These criteria do not assume that every element of the standards is absent. There may be isolated “islands of excellence” within a standard that deserve commendation, but do not preclude an overall rating of “unmet.” However, those areas of strength should be identified by the team in the narrative and may also lead to an accolade.

Exemplary: In judging a standard to be “exemplary,” the site visit team is indicating that the evidence meets a higher standard than it does for “met.” The evidence is:

  • Pervasive and consistent, showing that the standards are deeply embedded within the culture of the program
  • Free of discernible areas of weakness within the standard, and may include examples of innovative practices

STANDARD 2: ACCOUNTABILITY AND PROGRAM IMPROVEMENT

STANDARD 2.A. Assessment System

Each approved educator preparation program shall maintain an assessment system that:

1. Assesses outcomes in alignment with the program’s conceptual framework and state standards

Unmet: The assessment system does not consistently generate data that allow reasonable inferences about the degree to which program goals are being achieved.

Met: Data collected and utilized by the assessment system are clearly aligned with the program’s conceptual framework and with the appropriate state standards, allowing reasonable inferences about the degree to which program goals are being achieved.

Exemplary: Data in the assessment system can be easily disaggregated by program goals, and such disaggregated data are readily accessible to faculty and administrators. Faculty can speak knowledgeably about the degree to which program goals are supported by data from the assessment system.

Examples of evidence:

--A written assessment plan that clearly articulates key program outcomes and the associated assessments

--Rubrics and other assessment tools that clearly provide data related to key program goals

--Interviews with faculty demonstrating understanding of the links between key assessments and program goals

 

2. Systematically and comprehensively gathers evidence on (i) candidate learning and (ii) program operations, including placement rates, clinical experiences, and candidate characteristics.  

Unmet: Data may be collected, but not systematically or comprehensively enough to allow reasonable inferences about program effectiveness or to support decisions about program changes. Data collection may be sporadic or narrowly focused.

Met: Data gathering by the assessment system is systematic (deliberate and consistent for candidates in all programs) and comprehensive (providing evidence of key candidate outcomes and program operations). Data are collected from multiple instruments sampling candidate knowledge, skills, and dispositions. Collection of candidate data is tied to major transition points such as admission to the program, entry to student teaching, and program completion.

Exemplary: The program continually reviews the effectiveness of the data collection system and adds or deletes assessments as needed.

Examples of evidence:

--A written assessment plan that clearly articulates what key data will be collected, how, and by whom

--Interviews with faculty demonstrating understanding of and involvement in the data system

--Data on program operations, such as clinical placements in high-needs schools, effectiveness of advising services, and qualifications of admitted candidates  

 

3. Collects candidate work samples that document positive impact on student learning

Unmet: The program has not consistently collected candidate work samples that demonstrate candidates’ ability to assess and document positive impact on student learning.

Met: The program has collected representative candidate work samples that demonstrate candidates’ ability to assess, document, and reflect on positive impact on student learning.

Exemplary: Candidate work samples documenting positive impact are pervasive, demonstrating the ability to assess positive impact in multiple contexts.

Examples of evidence:

--Candidate work samples

--Interviews with faculty, candidates, and P-12 partners

 

4. Aggregates key data over time

Unmet: Data are collected but not aggregated in a way that shows results over time or that allows analysis of subgroups.

Met: Data are aggregated in a way that shows results over time and allows analysis of results for subgroups. The program has developed an effective and efficient means of electronically entering aggregated data into its system and tracking them over time. Aggregated data are regularly communicated or easily accessible to faculty.

Exemplary: The program’s data system is sufficiently powerful and flexible to allow faculty to conduct their own analyses of trends, patterns, and questions.

Examples of evidence:

--Tables, charts, and other displays of aggregated key data (as identified by the program in 2.1(a) above)

--Description and demonstration of how data are aggregated and accessed

--Interviews with faculty

 

5. Incorporates perspectives of faculty, candidates, and P-12 partners

Unmet: Data are based solely or mostly on faculty perspectives and/or judgments.

Met: The program regularly gathers data reflecting the perspectives of faculty, candidates, and P-12 partners.

Exemplary: Faculty, candidates, and P-12 partners engage in collaborative review and reflective analysis of data.

Examples of evidence:

--Examples of assessment data incorporating judgments of faculty, candidates, and P-12 partners

--Interviews with faculty, candidates, and P-12 partners

 

6. Includes processes and safeguards that ensure fair and unbiased assessment of candidates

Unmet: There is little or no evidence that the program has examined key assessments to ensure fairness and lack of bias.

Met: The program has taken steps to ensure that key candidate assessments are fair and unbiased. Faculty have reviewed rubrics for clarity, checked assessments for inter-rater reliability, and provided common training for evaluators. There is evidence that candidates have been provided opportunities to learn the knowledge and skills being assessed.

Exemplary: Review of assessments for fairness and lack of bias is systematic and recurring, and leads to continuous improvement of assessment instruments as well as the assessment system itself.

Examples of evidence:

--Description of how assessments are reviewed for fairness and lack of bias

--Interviews with faculty and candidates

 

7. Provides for regular analysis of assessment results

Unmet: The program does not systematically engage in focused discussion of key assessment results.

Met: The program systematically engages faculty and others (including the PEAB) in focused discussion and analysis of key assessment results.

Exemplary: The program regularly engages candidates and school partners in discussion and analysis of selected data.

Examples of evidence:

--Record of data retreats or other meetings at which assessment data are analyzed

--Interviews with faculty and other stakeholders

 

8. Is systematically linked to program decision-making processes 

Unmet: There is little or no evidence that the program has used data from key assessments in making decisions about program changes.

Met: The program systematically uses data from key assessments in making decisions about program changes.

Exemplary: The program regularly engages candidates and school partners in identifying implications of key assessment data.

Examples of evidence:

--Interviews with faculty and other stakeholders

--Documentation of program changes based on assessment data

 

STANDARD 2.B. Participation in Memorandum of Understanding

1. Each approved educator preparation program shall engage in data collection and reporting as specified in the memorandum of understanding approved by the Professional Educator Standards Board.

Unmet: The program has not fully provided data as outlined in the annual memorandum of understanding with the Professional Educator Standards Board.

Met: The program has provided data as outlined in the annual memorandum of understanding with the Professional Educator Standards Board.

Exemplary: The program has used the data partnership and the memorandum of understanding to expand its own capabilities for data collection, analysis, and use.

Examples of evidence:

--Copy of MOU

--Evidence that data have been reported per the MOU