Nils McGee
Inquiry plan outline
Evaluation of the effect of an audience response system on student performance
1. The title of the paper I am using as a model is “The impact of the use of response pad system [sic] on the learning of secondary school physics concepts: A Singapore quasi-experiment study” (Mun, Hew, & Cheung, 2009, p. 848). The theoretical framework for this study is that learning will improve when students participate, when they receive frequent feedback, and when instructors can accurately assess their students’ progress. An audience response system (ARS) has the potential to increase the frequency and quality of all three factors when it is used to pose questions during class, collect answers from students, and provide visual feedback for discussion.
In this study, a quasi-experimental pre-test/post-test design was used to test students on the recall and application of physics concepts. The pre-test used 10 written questions to determine students’ knowledge of a specific set of physics concepts and their ability to apply them. The pre-test was used to determine whether there was a significant difference between the treatment group and the control group. Four one-hour lessons were given; the treatment group used an audience response system, while the control group did not. Both groups were asked the same questions during the lessons. In the treatment group, the questions were projected on a screen and students responded using their response pads; after all students had responded, the teacher gave feedback to correct errors and misconceptions. The teacher posed the questions verbally to the control group and chose seven students to provide answers, then gave feedback to correct errors and misconceptions. The post-test consisted of 10 questions similar in type to those on the pre-test, assessing the students’ recall of the concepts and their ability to apply them.
The researchers used several techniques to evaluate the construct validity of the pre-test and post-test. Scores from the pre-test and post-test were analyzed using independent-samples t-tests, and the effect size was calculated using Cohen’s d.
The researchers found no significant difference between the two groups on the pre-test. On the post-test, the treatment group performed significantly better than the control group, and the effect size was large.
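This type of analysis — an independent-samples t-test followed by Cohen’s d computed with a pooled standard deviation — can be sketched in Python. The scores below are hypothetical placeholders for illustration only, not data from the study, and scipy is assumed to be available:

```python
# Illustration only: hypothetical post-test scores standing in for the study's data.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation across the two groups."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

treatment = [78, 85, 90, 72, 88, 81, 95, 79, 84, 91]  # hypothetical ARS group
control = [70, 75, 68, 80, 72, 77, 65, 74, 71, 69]    # hypothetical non-ARS group

t_stat, p_value = stats.ttest_ind(treatment, control)  # independent-samples t-test
d = cohens_d(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {d:.2f}")
```

By the conventional benchmarks, a d of 0.2 is considered small, 0.5 medium, and 0.8 large, which is why the study’s reported effect counts as large.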
2. Several recent studies have attempted to describe the relationship between the use of an audience response system and student performance. The theoretical framework states that more learning will occur when there is more participation, when feedback is given frequently, and when instructors can accurately judge the progress of their students.
Recent studies reported that the use of an ARS in the classroom increased the percentage of students responding to each question. One of these studies used a true-experimental design in which participation rates under four different treatments were measured, either by the ARS or by trained observers who tallied participation (Stowell & Nelson, 2007). A second study used a quasi-experimental design; surveys completed by instructors and tutors of students who either did or did not use an ARS indicated that the ARS students had greater participation rates (Mula & Kavanagh, 2009). A third study used a quasi-experimental design to test the attitudes of undergraduate psychology students toward the use of an ARS in the classroom; participants in the ARS group rated the lecture as more effective, involving, and intellectually stimulating than participants in the non-ARS group (Shaffer & Collura, 2009). Several recent studies have also shown that the use of an ARS improves student attitudes toward participation (Barnes, 2008; Graham, Tripp, Seawright, & Joeckel, 2007; Mun et al., 2009; Shaffer & Collura, 2009; Sprague & Dahl, 2010), and two of these studies reported that an ARS helped improve the attitudes of reluctant participants relative to their peers (Graham et al., 2007; Sprague & Dahl, 2010).
One study examined the reliability of student answers for formative assessment as a function of the method used to collect responses to in-class questions. Four methods of questioning were used: the first group experienced a traditional lecture in which the instructor asked whether students had any questions; the second group raised hands to indicate their responses; the third group held up answer cards; and the fourth group used an ARS. Post-lecture quiz scores for students in all four classes were not significantly different, but scores on the in-class questions were. Scores on in-class questions for the hand-raising and response-card groups were both significantly higher than those for the ARS group, and, notably, much higher than those same groups’ scores on the post-lecture quiz. The ARS group’s scores on the in-class questions were therefore a more reliable indicator of student learning. The authors hypothesized that this was due to the ability of students in the hand-raising and response-card groups to view other students’ choices before committing to their own (Stowell & Nelson, 2007).
Several studies have also described the relationship between the use of an ARS and student learning. The most compelling of these used a non-equivalent-groups model with a pre-test/post-test design to test the effect of an ARS on learning secondary school physics concepts in Singapore. The results showed a significant improvement at the 0.05 level of significance, with an effect size (Cohen’s d) of 0.851 (Mun et al., 2009), which is considered large. A second study compared the post-test results of students in an ARS class to students in a non-ARS class; both groups were presented with the same questions in class. The post-test showed no difference in scores between the ARS group and the non-ARS group; however, the ARS group scored significantly higher on different, yet related, questions (Mayer et al., 2009).
3. The problem is that in a class of 24 students, only a minority participates when responding to in-class questions. Various techniques have been used to solicit responses from all students, including hand-raising to indicate a choice and having students write answers while the instructor circulates to give individual feedback. The hand-raising technique is biased toward correct answers when used for formative assessment, and formal collection of responses by this method would be time consuming. The written-answer-and-circulation method is also time consuming.
4. A TurningPoint ARS will be introduced into the two high school honors biology classes and used throughout the year. Each student will be assigned a response pad registered to his or her student ID number. The first quarter (Units One, Two, and Three) will not be used for the study, except to familiarize students and the instructor with the technology. During the second quarter (Units Four and Five), the TurningPoint system will be used in one section, which will be the treatment group, but not in the other, which will act as the control group. The section serving as the control group will switch for each unit in order to reduce the selection-history threat to internal validity.
Test scores from the first three units and demographic data derived from Pearson PowerTeacher and Pearson Inform will be analyzed to determine whether there is a significant difference between the two sections in student performance and characteristics.
During each non-lab class, four or five questions related to the content will be displayed within the PowerPoint presentation. These questions will be the same for all classes. In the treatment group, students will respond using their response pads; after all students have had an opportunity to respond, the answers will be displayed in bar-graph format within the PowerPoint presentation. The different answers will be discussed to determine why the class thinks they were chosen. After this initial discussion, the correct answer will be revealed and further discussion will occur. In the control group, the technique will be the same, except that responses will be solicited through hand-raising and the instructor will record a quick approximation of the number of students who answered correctly.
Test scores from the two units of the second quarter will be analyzed to determine whether there is a significant difference between the treatment group and the control group.
5a. The audience will be honors-level biology students in three sections taught by Mr. McGee. The students in this class are primarily freshmen; some are sophomores who were placed into honors biology after performing well in their freshman physical science class. These students are generally comfortable with technology, although their use of applications and hardware is usually limited to video games, general word processing, presentation applications, and web browsers. Approximately 20% of the students in this course do not meet the prerequisites for the course but have been placed into it either through an error on the part of their guidance counselor, an override by their parent(s), or an override by the guidance department.
5b. The null hypothesis is that the polling and feedback technique will not have an effect on student performance on the unit tests.
6. The computer used will be a Hewlett-Packard laptop running the Windows XP Professional OS with Microsoft Office 2010 Professional. TurningPoint 4.2.2 will be used to integrate the questions and feedback into PowerPoint presentations. A set of 30 Turning Technologies ResponseCard RF keypads will be used for polling students, and a Turning Technologies RF USB receiver will be the interface between the response cards and the computer. A Panasonic ceiling-mounted projector will project the presentations onto a whiteboard at the front of the classroom. Pearson PowerTeacher and Pearson Inform will be used to collect demographic and past-performance data.
7. Data will be collected from the unit tests.
8. Independent-samples t-tests will be used to analyze the unit-test scores.
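As a minimal sketch of how this analysis plan might be carried out in Python (all scores below are placeholder values, not real student data, and scipy is assumed to be available):

```python
# Sketch of the planned analysis; placeholder scores, not real student data.
from scipy import stats

# Baseline equivalence check: first-quarter test scores for the two sections.
section_a_q1 = [82, 75, 88, 70, 91, 79, 84, 77, 86, 73, 80, 85]
section_b_q1 = [80, 78, 85, 72, 89, 76, 83, 79, 87, 74, 81, 84]
t0, p0 = stats.ttest_ind(section_a_q1, section_b_q1)
# A non-significant p0 (> 0.05) would support treating the sections as
# comparable before the treatment begins.

# Unit-test comparison for Units 4 and 5: treatment (ARS) vs. control scores.
treatment_scores = [89, 93, 82, 87, 91, 85, 88, 95, 83, 90, 87, 92]
control_scores = [84, 88, 77, 82, 86, 80, 83, 90, 78, 85, 82, 87]
t1, p1 = stats.ttest_ind(treatment_scores, control_scores)

print(f"baseline: t = {t0:.2f}, p = {p0:.3f}")
print(f"unit tests: t = {t1:.2f}, p = {p1:.3f}")
```

A significant result on the unit-test comparison (p below the 0.05 level used in the model study) would justify rejecting the null hypothesis stated in item 5b.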
9. Unit 4, Principles of Genetics, begins on day 50 of the school year and lasts 10 school days. Unit 5, The Molecular Basis of Inheritance, begins on day 61 and lasts 10 school days.
10. An article may be written and submitted for publication. The proposal and results will be shared with my supervisors (department chairperson, principal, and assistant superintendent), and the results will be shared with the faculty during a faculty meeting.
Bibliography
Barnes, L. J. (2008). Lecture-free high school biology using an audience response system. The American Biology Teacher, 70(9), 531-536.
Graham, C. R., Tripp, T. R., Seawright, L., & Joeckel, G. (2007). Empowering or compelling reluctant participators using audience response systems. Active Learning in Higher Education, 8(3), 233-258.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., . . . Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51-57.
Mula, J. M., & Kavanagh, M. (2009). Click go the students, click-click-click: the efficacy of a student response system for engaging students to improve feedback and performance. e-Journal of Business Education & Scholarship of Teaching, 3(1), 1-17.
Mun, W. K., Hew, K. F., & Cheung, W. S. (2009). The impact of the use of response pad system on the learning of secondary school physics concepts: A Singapore quasi-experiment study. British Journal of Educational Technology, 40(5), 848-860.
Shaffer, D. M., & Collura, M. J. (2009). Evaluating the effectiveness of a personal response system in the classroom. Teaching of Psychology, 36(4), 273-277.
Sprague, E. W., & Dahl, D. W. (2010). Learning to click: An evaluation of the personal response system clicker technology in introductory marketing courses. Journal of Marketing Education, 32(1), 93-103.
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems on student participation, learning, and emotion. Teaching of Psychology, 34(4), 253-258.