The following resources offer suggestions and guidance for designing effective evaluations of service-learning (SL) student learning.
Evaluating the Impact of Informal STEM Education (K-12 and Public Outreach)
- InformalScience.org: A resource and online community for informal learning projects, research and evaluation
- Center for Advancement of Informal Science Education: Designed for NSF principal investigators, ISE professionals, and STEM researchers, this site offers evaluation/assessment resources based on NSF guidelines
- Evaluation Framework for Informal Science (pdf): Published by NSF, offers recommendations and steps for developing effective summative evaluations, including definitions of key terms, examples, and planning worksheets.
- Broader Impacts Criterion: Published by NSF, provides examples of considerations used in assessing the broader impacts of the proposed activity.
Evaluating the Impact of Service-Learning on College Student Learning
- Assignments and Rubrics for Evaluating Service-Learning
- SoTL (Scholarship of Teaching and Learning) resources (from Boise State CTL)
Getting Started
(The following is adapted from the “Evaluation Framework for Informal Science: Report from a National Science Foundation Workshop”)
Designing Summative Evaluations (assessing the impact of STEM education)
Impact Category | Public Audiences | Professional Audiences
---|---|---
Awareness, knowledge or understanding (of) | STEM concepts, processes, or careers | Informal STEM education/outreach research or practice
Engagement or interest (in) | STEM concepts, processes, or careers | Advancing the informal STEM education/outreach field
Attitude (towards) | STEM-related topic or capabilities | Informal STEM education/outreach research or practice
Behavior (related to) | STEM concepts, processes, or careers | Informal STEM education/outreach research or practice
Skills (based on) | STEM concepts, processes, or careers | Informal STEM education/outreach research or practice
Other | Project specific | Project specific
When using a backward research design approach (Wiggins and McTighe, 2001), it is critical to be able to clearly describe what one is actually attempting to accomplish. At the outset of a project, questions a team taking this approach should be able to answer include:
- What audience impacts will this project facilitate?
- What approach/type of project will best enable us to accomplish these goals and why do we feel that this is the best approach to take?
- How will we know whether the activities of the project accomplished these intended goals and objectives, and with what evidence will we support the assertion that they did?
- How will we ensure that unanticipated outcomes are also documented?
All forms of evaluation play an important role in planning, enabling “reflective practice,” and facilitating project team and institutional learning. Because evaluation contributes to decision-making at key points of project development and implementation, and can be used to ensure success throughout that process, it is important to include a comprehensive evaluation plan. At a minimum, such a plan includes front-end, formative, and summative evaluation; ideally, it also includes remedial evaluation to tweak and improve projects as they are initially implemented. Utilizing all forms of evaluation helps to ensure the progress and success of your efforts.
Issues of Particular Interest to Those Evaluating Youth and Community Programs
While the following issues can apply to a variety of formal and informal science programs, they arise especially often in, and are of particular concern to, evaluations of youth and community programs.
Maturation
Maturation, or simply getting older, is a key issue for evaluations of youth programs. As children age, they learn and change independent of any programs in which they participate, and they do so more rapidly than they will as adults. Because of this, a design in which young people in a program are tested twice (pre/post design) or more than twice over time (time series design) is not an adequate measure of change for most evaluations. Any changes in youth participating in a program need to be compared with changes in young people of similar ages and in similar environments to better determine whether changes are due to the program rather than to maturation.
Real vs. Ideal
Many curriculum development projects funded under youth and community programs provide those who pilot the curriculum with benefits such as training, materials, and other resources that are not part of the final curriculum as marketed. Evaluations of the curriculum and its impact are most often done under these more ideal circumstances, with people who have been trained and given other resources. However, most informal science education curricula will be used primarily by people with no special training who provide their own materials. The results of curriculum evaluations done under the more ideal conditions may not hold when the curriculum is used in more realistic environments. Evaluators may therefore want to include a component that tests the usability and impact of the curriculum in more realistic situations.
Informal Science Education vs. Formal Science Education
There is often interest in determining the impact of informal science education on formal science education, especially student achievement. If this is done, it is important to examine the content covered by any formal education measures or tests used: the question is whether the content of the measure reflects the content of the informal science education program. Another concern is the risk of alienating young people new to an informal science education program by making a formal science test one of their initial program activities. Care can be taken to design the assessment tool so it feels like part of the program itself. For example, the evaluator can use a typical project activity at the end of the program to see whether participants spontaneously use skills practiced earlier.
Sustainability
In youth and community programs, sustainability (that is, the continuation of a program and its impact) can pertain to individual or to institutional change. Without studies conducted over a period of years, it is very difficult to assess the sustainability of individual change, particularly in geographic areas with a great deal of mobility. Sustained change is easier to track for institutions, including community-based organizations, science centers, museums, colleges, and universities. Indications of institutional change may include:
- Reallocation of resources;
- Continuation of program activities;
- Changes in professional development;
- Changes in mission;
- Continued changes in institutional practices and policies.
– Adapted from the “Evaluation Framework for Informal Science: Report from a National Science Foundation Workshop”