
PAR Training

Peer Review of PARs

Boise State’s process for program review, in the tradition of academic scholarship, includes peer review of Program Assessment Reports (PARs). When reports are completed and submitted, teams of faculty conduct the document reviews. Reviewers, working in interdisciplinary teams, apply a standard rubric to each report and generate a response for each department. The intent is to give our peers an accurate picture of the strengths and weaknesses of their assessment reports and processes. Ultimately, programs will gain a clearer vision of the improvements needed in teaching and learning to provide an ever-better degree for our students. See the application form to become a program reviewer.

Peer Review Training

Peer review training takes place each year in May and is designed to introduce peer reviewers to the review protocol and to conduct evaluation norming.


Program Learning Outcomes Primer

Peer evaluators are required to understand how to write Program Learning Outcomes for program assessment. If you haven’t attended the CTL workshops on Program Learning Outcomes, you can do so through the CTL event calendar and registration website.


Curriculum Mapping Introduction

PARs include a curriculum map and a discussion of that map. Familiarize yourself with approaches to curriculum mapping, as well as its uses and benefits, by enrolling in the CTL workshop.


Direct and Indirect Assessment Measures

The PAR evaluation rubric references the inclusion of direct and indirect assessment measures.


Evaluating Key Findings Criteria

Evidence for the key findings criteria is included on report templates 1 and 2. This module walks peer reviewers through the criteria and shows where to look for evidence that each criterion has been met.


Actions Taken or Planned based on Findings

Evaluating the action plans included in assessment reports focuses on the presence of well-documented, logical plans. The emphasis in the rubric is on highlighting evidence that departments are acting on the evidence they gather. As you look closely at the criteria, you will note a deliberate bias toward this kind of connected action: programs with brilliantly designed mechanisms for gathering evidence that is never used score no higher than programs with a very rudimentary evidence-gathering process.

Developing reports can be distinguished from Proficient reports in that they act on some, but not all, of the assessment findings.
Beginning reports can be distinguished from the Developing level because their plans are not actionable (perhaps they are too detailed, or there is not enough evidence). Regardless of the cause, the outcome is that no actions have been taken. Programs with new assessment plans may also lack evidence, which would lead you to evaluate these criteria as Beginning.

Note that No Evidence and Beginning are not “bad” evaluations; they simply describe the current state of a program’s assessment plans.