Educational Assessment Tools

Using second-chance testing to improve student learning

NSF funded: DUE 1915257.

Exams remain the de facto standard for assessing students in STEM, yet there are few evidence-based practices for making them effective for student learning. In this project, we are exploring how best to use second-chance testing to improve student learning, increase student motivation, and improve retention in STEM disciplines.

Investigation Team

PI: Craig Zilles (Computer Science)
Co-PI: Geoffrey Herman (Computer Science)
Co-PI: Matthew West (Mechanical Science and Engineering)
Co-PI: Tim Bretl (Aerospace Engineering)

Cybersecurity Assessment Tools

Department of Defense funded: BAA-00-15, NSA H98230-15-1-0273, and CNAP-CAE Grant # H98230-17-1-03473.
NSF funded: DGE 1820531.

In the coming years, America will need to educate an increasing number of cybersecurity professionals. But how will we know whether the preparation courses are effective? Presently, there is no rigorous, research-based method for measuring the quality of instruction. Using a panel of experts, this project first developed quantitative rankings for two sets of fundamental concepts and topics: one appropriate for students in a first course in cybersecurity, the other for students graduating from college and headed for careers in cybersecurity. Next, the project will develop assessment tools for these two student populations to measure how well students understand these basic concepts and how well prepared they are to begin work in the field.

This project will provide infrastructure for rigorous, evidence-based improvement of cybersecurity education by developing the first Cybersecurity Assessment Tools (CATs) targeted at measuring the quality of instruction. The first CAT will be a Cybersecurity Concept Inventory (CCI) that measures how well students understand basic concepts in cybersecurity (especially adversarial thinking) after a first course in the field. The second CAT will be a Cybersecurity Curriculum Assessment (CCA) that measures how well curricula prepare graduating college students in the fundamentals needed for careers in cybersecurity. Each CAT will be a multiple-choice test with approximately thirty questions. Finally, the project will perform psychometric evaluations of these two CATs to demonstrate their quality and value.
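For context, psychometric evaluation of a multiple-choice instrument typically examines statistics such as item difficulty, item discrimination, and internal-consistency reliability. The sketch below is a minimal illustration of that kind of classical item analysis, not code from the project; the response matrix, variable names, and sample data are hypothetical, and it assumes binary-scored (correct/incorrect) responses.

# Minimal sketch of classical item analysis for a multiple-choice test.
# Assumes `responses` is binary-scored: one row per examinee, one column
# per item (1 = correct, 0 = incorrect). The data below are made up.
import numpy as np

responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
])

n_examinees, n_items = responses.shape
total_scores = responses.sum(axis=1)

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: correlation between an item's score and the total
# score on the remaining items (corrected point-biserial).
discrimination = np.array([
    np.corrcoef(responses[:, i], total_scores - responses[:, i])[0, 1]
    for i in range(n_items)
])

# Cronbach's alpha: internal-consistency reliability of the whole test.
item_variances = responses.var(axis=0, ddof=1)
total_variance = total_scores.var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

print("difficulty:", difficulty)
print("discrimination:", np.round(discrimination, 3))
print("alpha:", round(alpha, 3))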

Investigation Team

Illinois PI: Geoffrey L. Herman
UMBC PI: Alan T. Sherman
UMBC Co-PI: Linda Oliva
UMBC Co-PI: Dhananjay Phatak
UMBC Grad Student: Travis Scheponik

Development of Concept Inventories for Computer Science

NSF funded: CCLI 0618589.

Project Overview

This project focused primarily on creating three assessment tools for computer science courses. These assessment tools took the form of concept inventories: multiple-choice tests that force examinees to choose between common misconceptions and accepted conceptions. The Digital Logic Concept Inventory has been developed, deployed, and validated on a national scale.

Development of the concept inventories began by interviewing students as they solved traditional homework problems. We analyzed these interviews to document students’ misconceptions and conceptual difficulties, and then used those documented misconceptions to create multiple-choice questions that assess students’ conceptual understanding.

Investigation Team

Craig Zilles (University of Illinois)
Michael Loui (University of Illinois)
Cinda Heeren (University of Illinois)
Geoffrey Herman (University of Illinois)
Lisa Kaczmarczyk (Independent Researcher)

Projects

Innovation in Engineering and STEM Education (SIIP_WIDER network)
How Students Learn Engineering and Computer Science (ConceptualChange)
Designing Educational Assessment Tools (cybersecurity)
Intrinsic Motivation Course Conversions (IMCC)