Amanda Burke Aaronson*
Assistant Professor, University of San Francisco, USA
*Corresponding author: Amanda Burke Aaronson, Assistant Professor, University of San Francisco, USA, Tel: 650-270-0252; Email: firstname.lastname@example.org
Submission: June 21, 2018; Published: July 02, 2018
ISSN: 2577-2007; Volume 3, Issue 4
Review sessions for final exams can be beneficial to student preparation, but little research has been done on how to structure these sessions optimally. Using a common nursing standardized test as a final exam, two semesters with two different review session designs are compared. In the first semester, a general review session with student-led topics was used. In the second, a targeted review session, using practice tests to pre-assess gaps in knowledge, was used. Final exam scores were significantly higher in the second semester than in the first, suggesting that targeted review sessions may play a role in student success.
Keywords: Nursing education; Review session; Final exam; HESI
Students' preparation for a final exam is often based solely on their own study practices. On some occasions, formal review sessions are made available. How these sessions are structured is likely as variable as the educators running them, and little literature exists on the optimal way to structure a review session. In required general nursing courses, review sessions can be particularly helpful because of the vast amount of information students are expected to absorb. I have attempted a few methods to optimize these sessions.
Only three articles relating to the design of pre-exam review sessions were located in the ERIC database. The first involved having students participate in generating topics and encouraging them to come up with questions, with the goal of identifying gaps in study strategies and content. However, this write-up did not include any outcome data, so it is difficult to assess the efficacy of the intervention. A second article described using problem-based review, along with online modules and answer keys, to help engineering students recognize what they did not know. That study noted a trend in grade success related to attendance at such review sessions, but quantitative data were not available. An older study evaluated psychology students' final exam performance after review sessions in which students simply used exam keys as review items, compared with students who actually took a practice exam and were able to review it immediately after another student had scored it. These latter students were significantly more successful on the final exam than those who only reviewed answer keys, and they also rated the exercise as significantly more helpful.
The purpose of this study is to evaluate the effectiveness of a targeted final exam review session compared with a general one among college students, with final exam performance as the measured outcome. This is an observational cohort study.
Students in the first semester of their senior year take Women's Health Nursing. In this class, the standardized Health Education Systems Incorporated (HESI) Maternity specialty exam is used as the final exam. Class means are available for comparison between the Spring 2015 semester, when a general review session was held, and the Fall 2015 semester, when a targeted review session was held.
The general review session in Spring 2015 was an opportunity to go over material from the semester as requested by students and to answer specific questions prior to the exam. The topics were student-led, and there was no problem-based review.
The targeted review session in Fall 2015 involved students taking an ungraded multiple-choice assessment the week before the review session; students self-corrected their answers and could review them immediately. I then collected the assessments and analyzed which questions were most frequently missed within each section. From there, I was able to identify the general areas of weakness for each of the two course sections and target the review session to those areas.
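The item analysis described above can be sketched in a few lines of code. This is a minimal illustration, not the actual analysis used in the study; the response format (one dictionary of answers per student) and the function name are hypothetical.

```python
from collections import Counter

def most_missed(responses, answer_key, top_n=3):
    """Tally how often each question was missed and return the
    most frequently missed question IDs with their miss counts.

    responses  -- list of dicts mapping question ID -> chosen answer,
                  one dict per student (hypothetical format)
    answer_key -- dict mapping question ID -> correct answer
    """
    misses = Counter()
    for student in responses:
        for qid, correct in answer_key.items():
            if student.get(qid) != correct:
                misses[qid] += 1
    return misses.most_common(top_n)

# Toy example: three students, three questions
key = {"Q1": "A", "Q2": "C", "Q3": "B"}
students = [
    {"Q1": "A", "Q2": "B", "Q3": "B"},
    {"Q1": "A", "Q2": "D", "Q3": "B"},
    {"Q1": "C", "Q2": "C", "Q3": "B"},
]
print(most_missed(students, key))  # [('Q2', 2), ('Q1', 1)]
```

Sorting questions by miss frequency is what allows the review session to be "targeted": the instructor spends class time only on the topics the pre-assessment identified as weak.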
The average HESI score (out of 1500) for Spring 2015 (n=87) was 840.70 (SD 151.11), compared with 977.69 (SD 121.94) for Fall 2015 (n=88). An unpaired t-test of the two means yields t=6.598, which is significant at p < 0.0001.
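The reported t statistic can be reproduced from the summary statistics alone. A minimal sketch, assuming a Welch (unequal-variance) unpaired t-test; the original analysis may have pooled the variances, which would account for a small third-decimal difference from the reported 6.598.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's unpaired t statistic computed from summary
    statistics: means, standard deviations, and sample sizes."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m2 - m1) / se

# Summary statistics reported in the text
t = welch_t(840.70, 151.11, 87, 977.69, 121.94, 88)
print(round(t, 2))  # ~6.60
```

With roughly 173 degrees of freedom, a t statistic near 6.6 corresponds to a p-value far below 0.0001, consistent with the significance reported above.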
This increase in HESI exam scores is both statistically and clinically significant: the Fall mean is above 850, the standard for matriculation that students must meet at the end of their education, and this minimum score is also strongly correlated with success on the National Council Licensure Examination for Registered Nurses. However, the intervention in this case was twofold: practice exams and targeted review. It is therefore difficult to know whether one aspect of the intervention was more effective than the other, whether the effect came from the interaction of the two, or whether an additional variable contributed. King states that students are poor judges of what they do or do not know, which could also have contributed to the lack of success of the Spring 2015 review, where students initiated the topics. Limitations of this study include the different cohorts of students, a relatively small sample size, and the unavailability of student demographics. On that point, we do know that while the Fall cohorts are admitted to the school of nursing as freshmen, the Spring cohorts are transfer students; some are older students entering a second career, while others attended other universities after high school and transferred during their studies. In conclusion, consistent with Balch's work, practice exams are likely an effective way to help students self-identify gaps in knowledge, and targeted review sessions can help validate students' efforts at studying those weaker areas.
I declare no conflict of interest regarding the publication of this paper. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Institutional Review Board (IRB) approval was not obtained, as this is classroom-based research and quality improvement. Per the institutional IRB: "Classroom-based research … and program evaluation or quality improvement projects may not require reviews".
© 2018 Amanda Burke Aaronson. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.