Did outreach really work? Cornell team will develop tools to evaluate science and technology education

Almost every research grant these days includes an "outreach" component: As a condition of their federal government funding, researchers are expected to inform the public about their findings and support science and technology education in general. Most researchers enthusiastically embrace this part of their job.

Down in the fine print, though, is a requirement to evaluate the results of this outreach, which most researchers consider an onerous chore.

A new project at Cornell University aims to make that task a little easier and more fruitful. Backed by a two-year, $605,000 grant from the National Science Foundation (NSF), the project will create a set of tools for evaluating NSF's science, technology, engineering and mathematics (STEM) education programs. The research team is led by principal investigator William Trochim, Cornell professor of policy analysis and management and director of extension evaluation for the Colleges of Agriculture and Life Sciences and of Human Ecology, and by Stephen Hamilton, professor of human development and associate provost for outreach. Laura Colosi, extension associate in policy analysis and management, directs the project, officially named "Systems Evaluation."

"Evaluation is merely feedback and organizational learning," said postdoctoral associate Derek Cabrera, a co-investigator . "We want to shift people's thinking from seeing evaluation as just a grant requirement to seeing it as feedback that will assist other researchers."

Evaluation is a social science field with its own specialized methods and language, but Cabrera said the goal is to produce a "paint-by-numbers" system that anyone can use. "We also hope to make significant contributions to the field of evaluation," he added. "In particular, we hope to develop theoretical foundations that relate the field of evaluation to the ideas of systems thinking. For example, when you have a large project with multiple stakeholders, multiple programs and multiple evaluations, how do you coordinate them into a system that works in a coherent way?"

Along with a protocol, or set of procedures, the project will create Web-based networking tools that will allow researchers in similar fields to report their results and compare notes, he said.

Scientists tend to think of evaluation in terms of difficult and time-consuming controlled experiments, Cabrera explained, but it can operate on much simpler levels. It begins with what the field calls "explorative" or "descriptive" evaluation, which might consist of a simple process such as asking participants, "What did you learn?" A "correlational" stage might then look for relationships between an educational program and various behaviors; an "experimental" stage compares program groups with control groups; and a final "translational" or "implementational" stage moves pilot programs into wider use, at which point the evaluation process begins all over again.
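The staged progression Cabrera describes can be pictured as a simple cycle. The sketch below is purely illustrative; the class and stage names are shorthand for the stages named above, not part of the Cornell team's actual "Systems Evaluation" protocol.

```python
from enum import Enum

class EvaluationStage(Enum):
    # Stage names follow the article's description of the evaluation cycle;
    # this is an illustrative model, not the project's own tooling.
    DESCRIPTIVE = 1    # explorative/descriptive: e.g., ask "What did you learn?"
    CORRELATIONAL = 2  # look for relationships between the program and behaviors
    EXPERIMENTAL = 3   # compare program groups with control groups
    TRANSLATIONAL = 4  # move pilot programs into wider use

def next_stage(stage: EvaluationStage) -> EvaluationStage:
    """Advance one step; after translation the cycle starts over."""
    members = list(EvaluationStage)
    return members[(members.index(stage) + 1) % len(members)]

# The cycle wraps around: once a program is translated into wider use,
# evaluation begins again at the descriptive stage.
assert next_stage(EvaluationStage.TRANSLATIONAL) is EvaluationStage.DESCRIPTIVE
```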

The research team will spend its first year developing tools, and the second year testing them with three existing STEM programs: the Cornell Center for Materials Research outreach program; the Paleontological Research Institution's Museum of the Earth in Ithaca; and the Complex Systems Summer School, a program for graduate students at the Santa Fe Institute in Santa Fe, N.M.

The research team also includes Jennifer Brown and Claire Lobdell, graduate research assistants in policy analysis and management, and undergraduate researchers Sabrina Rahman and Anna Hays.
