THE EFFECT OF GROUP COUNSELING INTERVENTION ON THE PERFORMANCE OF
RURAL STUDENTS ON THE GEORGIA HIGH SCHOOL GRADUATION TESTS
by
Donna A. Caudell
Liberty University
A Dissertation Presented in Partial Fulfillment
Of the Requirements for the Degree
Doctor of Education
Liberty University
2016
THE EFFECT OF GROUP COUNSELING INTERVENTION ON THE PERFORMANCE OF
RURAL STUDENTS ON THE GEORGIA HIGH SCHOOL GRADUATION TESTS
by Donna A. Caudell
A Dissertation Presented in Partial Fulfillment
Of the Requirements for the Degree
Doctor of Education
Liberty University, Lynchburg, VA
2016
APPROVED BY:
Casey Reason, Ph.D., Committee Chair
Sam C. Smith, Ph.D., Committee Member
Steven McDonald, Ed.D., Committee Member
Scott Watson, Ph.D., Associate Dean, Advanced Programs
ABSTRACT
With an increase in high-stakes testing, educators continue to search for the best
methodologies for assisting students in maximizing academic achievement and successful
completion of graduation requirements, including mandatory tests for high school graduation. At
the time of this study, students graduating from Georgia high schools were required to pass five
academic subject area tests in order to receive a high school diploma. The Georgia High School
Graduation Tests (GHSGT) tested English language arts, math, science, and social studies while
the Georgia High School Writing Test (GHSWT) assessed writing. Psychometric theory, social
cognitive theory, and cognitive behavioral theory provided the theoretical framework for this
study. Students from a rural high school in Georgia comprised the sample. This quantitative
study employed a posttest-only control group design with randomization. Students who had
failed at least one of the GHSGT were randomly placed into control and treatment groups.
Students in the treatment group participated in an 8-session group guidance program, Student
Success Skills. Control and treatment groups were compared using Mann-Whitney U tests for
math, social studies, and English language arts due to non-normally distributed data and small sample sizes.
Results indicated no statistically significant difference between the groups’ test scores.
Keywords: exit exams, graduation tests, GHSGT, certificate of attendance, Student
Success Skills, group guidance
Dedication
This dissertation is dedicated to my husband and children. My husband, Doug, is the
most intelligent and highly practical person I have ever had the privilege to know. Without his
unwavering faith and incredible support throughout this process, this dissertation might not have
seen completion. My children, Derrick, Dianna, and Dillon, have been my faithful encouragers
and wonderful examples of perseverance throughout my dissertation journey.
Acknowledgements
I am thankful to God for leading me through this experience. Proverbs 3:6 has been
made real to me as I have completed this document. When I started this process, I was unsure of
why God was leading me in this direction. As the time has unfolded, He has steadfastly opened
and shut doors leading me to knowledge of His perfect plan for me and allowing me to
understand why He chose this path.
I am extremely grateful to Dr. Casey Reason for his patient leadership throughout this
process. His willingness to answer each question and to keep me on a logical track has been
invaluable. Also, to Dr. Steven McDonald to whom I owe many thanks for his help with the
methodology part of this dissertation. There were many times I would have faltered without his
patient encouragement and explanations of statistical nuances. Additionally, I owe a debt of
gratitude to Dr. Sam Smith for his help and willingness to read the many installments of this
document and for his intricate attention to detail. They have all provided both support and a
sounding board that I have truly valued.
Table of Contents
ABSTRACT .................................................................................................................................... 3
Dedication ........................................................................................................................... 4
Acknowledgements ............................................................................................................. 5
List of Tables ..................................................................................................................... 10
List of Abbreviations ......................................................................................................... 11
CHAPTER ONE: INTRODUCTION ........................................................................................... 12
Background ........................................................................................................................ 12
Problem Statement ............................................................................................................. 16
Purpose Statement ............................................................................................................. 17
Significance of the Study ................................................................................................... 17
Research Questions ........................................................................................................... 19
Null Hypotheses ................................................................................................................ 20
Definitions ......................................................................................................................... 20
Summary ............................................................................................................................ 23
CHAPTER TWO: LITERATURE REVIEW ............................................................................... 24
Introduction ....................................................................................................................... 24
Theoretical Background .................................................................................................... 24
Cognitive Development Theory ............................................................................ 25
Psychometric Theory ............................................................................................. 26
Social Cognitive Theory ........................................................................................ 26
Cognitive Behavioral Theory ................................................................ 27
Review of the Related Literature ....................................................................................... 29
Historical Background ........................................................................................... 29
Accountability Measures ....................................................................................... 30
Perspectives on Standardized Testing ................................................................... 32
High School Exit Exams ....................................................................................... 39
High School Dropouts ........................................................................................... 46
Factors that Influence Student Achievement ......................................................... 47
Predicting Student Success .................................................................................... 50
Student Success Skills ........................................................................................... 57
Summary ............................................................................................................................ 59
CHAPTER THREE: METHODS ................................................................................................. 61
Introduction ....................................................................................................................... 61
Design ................................................................................................................................ 61
Research Questions ........................................................................................................... 62
Null Hypotheses ................................................................................................................ 63
Setting ................................................................................................................................ 63
Participants ........................................................................................................................ 65
Control Group ........................................................................................................ 66
Treatment Group ................................................................................................... 68
Instrumentation .................................................................................................................. 70
Georgia High School Graduation Tests (GHSGT) ................................................ 70
Validity and Reliability for GHSGT ..................................................................... 71
Procedures ......................................................................................................................... 72
Data Analysis ..................................................................................................................... 75
Summary ............................................................................................................................ 78
CHAPTER FOUR: FINDINGS .................................................................................................... 79
Introduction ....................................................................................................................... 79
Research Questions ........................................................................................................... 79
Null Hypotheses ................................................................................................................ 80
Descriptive Statistics ......................................................................................................... 80
GHSGT Math Group ............................................................................................. 80
GHSGT Social Studies Group ............................................................................... 81
GHSGT ELA Group .............................................................................................. 82
Results ............................................................................................................................... 82
Null Hypothesis One ............................................................................................. 82
Null Hypothesis Two ............................................................................................. 83
Null Hypothesis Three ........................................................................................... 86
Summary ............................................................................................................................ 87
CHAPTER FIVE: DISCUSSION, CONCLUSIONS, and RECOMMENDATIONS .................. 89
Discussion .......................................................................................................................... 89
Conclusions ....................................................................................................................... 93
Implications ....................................................................................................................... 94
Limitations ......................................................................................................................... 98
Recommendations for Further Research ......................................................................... 100
REFERENCES ............................................................................................................................ 103
APPENDIX A: Student Success Skills Program ........................................................................ 125
APPENDIX B: Approval by District School Superintendent to Conduct Research ................... 126
APPENDIX C: Approval by School Principal to Conduct Research .......................................... 128
APPENDIX D: Institutional Review Board Approvals .............................................................. 129
APPENDIX E: Parent or Guardian Permission Letter ................................................................ 131
APPENDIX F: Student Participation Consent Form ................................................................... 133
List of Tables
Table 1. Demographics for Study School ...................................................................................... 64
Table 2. Demographics for the Control Group .............................................................................. 67
Table 3. Demographics for the Control Group by GHSGT Content Area .................................... 68
Table 4. Demographics for the Treatment Group ......................................................................... 68
Table 5. Demographics for the Treatment Group by GHSGT Content Area ................................ 70
Table 6. Tests of Normality for the GHSGT by Group and Content Area .................................. 76
Table 7. Descriptive Statistics for the GHSGT Math Pretest and Posttest .................................... 81
Table 8. Descriptive Statistics for the GHSGT Social Studies Pretest and Posttest .................... 81
Table 9. Descriptive Statistics for the GHSGT ELA Pretest and Posttest .................................... 82
Table 10. Descriptive Statistics for Mann-Whitney U Test for Math Posttest Scores ................. 84
Table 11. Descriptive Statistics for Mann-Whitney U Test for Social Studies Posttest Scores ... 85
Table 12. Descriptive Statistics for Mann-Whitney U Test for ELA Posttest Scores .................. 87
List of Abbreviations
Adequate Yearly Progress (AYP)
Center on Education Policy (CEP)
Cognitive Behavioral Theory (CBT)
College and Career Ready Performance Index (CCRPI)
End of Course (EOC)
End-of-Course Test (EOCT)
English Learner (EL)
English Language Arts (ELA)
Every Student Succeeds Act (ESSA)
Florida Comprehensive Assessment Test (FCAT)
Georgia Department of Education (GADOE)
Georgia High School Graduation Tests (GHSGT)
Georgia High School Writing Test (GHSWT)
Governor’s Office of Student Achievement (GOSA)
Grade Point Average (GPA)
No Child Left Behind (NCLB)
Organisation for Economic Co-operation and Development (OECD)
Programme for International Student Assessment (PISA)
Social-Emotional Learning (SEL)
Statistical Package for the Social Sciences (SPSS)
Student Success Skills (SSS)
CHAPTER ONE: INTRODUCTION
Background
While educators have always been concerned with helping students succeed, the
importance of increasing a school’s graduation rate has become more urgent with the
implementation of the No Child Left Behind Act (NCLB) beginning in 2002 (Swanson, 2004;
U.S. Government Accountability Office, 2005). In a nation with an overall dropout rate of
almost 25%, schools are under increased scrutiny to improve graduation rates and produce
students who are ready for the job market upon completion of high school (Berger, 2000; Bush,
2001; Goertz & Massell, 2005; Stillwell, Sable, & Plotts, 2011; U.S. Department of Education,
2010). One focus of NCLB was to evaluate student ability, achievement, and performance
through the use of high-stakes standardized testing.
Standardized testing is currently used as a measure of achievement and intelligence
utilized for entrance into college, employment, college athletics, and graduation from high
school (Mathews, 2006; Noble & Sawyer, 2002). NCLB required the use of reading and math
scores as part of the formula for determining a school’s adequate yearly progress (AYP) score. The Every Student Succeeds
Act (ESSA), signed by President Obama, alleviates many of the restrictions of NCLB but still
requires the use of standardized test scores as part of the formula for assessing student and school
achievement (U.S. Department of Education, n.d.).
Standardized testing is a foundational premise in education that has deep roots in our
society. The use of tests began in Imperial China when applicants for government jobs were
required to write essays on Confucianism (Mathews, 2006). The use of testing continued, and in
the early 1900s educational leaders encouraged school standardization (Sherman & Theobald,
2001; Wiebe, 1967), and began to make attempts at statewide educational standards (Sherman &
Theobald, 2001). The Stanford-Binet intelligence test, which assesses mental ability, was
developed to determine appropriate educational placement for children and initiated the current
emphasis on standardized testing. In 1914, Frederick Kelly developed multiple-choice questions,
which fostered an increase in the use of standardized testing. Standardized testing became so
widely used that even immigrants processing through Ellis Island completed a standardized test
as part of the process for being allowed into America (Barton, 2010; Jaffe, 1998; Mathews, 2006;
Schlenoff, 2015).
Psychologists impacted educational thought as they developed theories that supported the
expanded use of testing. Piaget’s theories highlighted the development of competencies or
benchmarks that could be expected at specific stages in children’s lives. Testing for expected
age-related competencies became an acceptable part of a student’s educational experience (Gray,
1978). Psychometric theories emphasized the importance of measurable skills and intelligence
at specific stages of development and further highlighted the expectations that students should
meet certain benchmarks to signify normal physical, emotional, and conceptual behavior (Cattell,
1971; Jaffe, 1998).
Current methods of standardized testing can be traced to the theoretical work of Alfred
Binet and Raymond Cattell. Their work laid the foundation for measuring mental abilities, as
well as academic achievement (Jaffe, 1998; Plucker & Esping, 2014). Cattell used the scientific
method to develop a methodology for identifying personality and motivation (“Raymond
Bernard Cattell,” n.d.).
As educators and psychologists continue to pursue testing as a tool for understanding
human development, politicians, parents, institutions of higher learning, and communities have
chosen to use testing as a method for measuring academic achievement in schools and students.
Exit exams have become a common approach for measuring academic achievement in high
school students.
The use of exit exams became common as part of a state’s accountability progress report
to measure the AYP of individual schools, school systems, and state
public education departments as required by NCLB (Georgia Department of Education
[GADOE], n.d.-a; GeorgiaStandards.org, n.d.; Kornhaber, 2004; Simpson, 2009). NCLB
required states to close achievement gaps by providing every child with a high quality education.
The law mandated that states use standardized testing to increase student performance in reading
and math (U.S. Department of Education, n.d.). Students who failed to pass the exit exams were
classified as dropouts and were not awarded a high school diploma, regardless of the successful
completion of the required high school courses (Downey, 2010; Stillwell et al., 2011). Failure to
complete these exams left students with postsecondary status equivalent to that of a high school
dropout (Pedraza-Vidamour, 2008; Stillwell et al., 2011; Technical College System of Georgia,
2015). Additionally, students who received a GED or a certificate of attendance were classified
as dropouts in a state’s national reporting data with regard to AYP status (Downey, 2010;
Stillwell et al., 2011). Further research indicates that high school dropouts are at increased risk
for poverty, social hardships, unemployment, and even incarceration (Sum, Khatiwada,
McLaughlin, & Palma, 2009). Little research is available that discusses long-term consequences
for students who complete high school but are classified as dropouts because they have only
received a certificate of attendance rather than a high school diploma.
From 1995 until 2015, seniors in the state of Georgia were required to pass the Georgia
High School Graduation Tests (GHSGT) in each of four content areas of mathematics, English
language arts (ELA), social studies, and science, in addition to the Georgia High School Writing
Test (GHSWT) to obtain a valid high school diploma (GADOE, n.d.-e). Due to this requirement,
thousands of Georgia students were denied diplomas and received certificates of attendance
instead because they were unable to pass one or more areas of the GHSGT. Longitudinal data
collected by the Governor’s Office of Student Achievement (GOSA) show that 2,894 Georgia
high school completers in 2009 were awarded certificates of attendance instead of high school
diplomas, followed by 2,602 in 2010 and 3,902 in 2011. Even as the state began to relax
the criteria for the GHSGT and the End-of-Course Tests (EOCT), the number of students
receiving certificates of attendance remained high. The numbers of certificates of attendance
issued in 2012, 2013, and 2014 were 3,461, 4,536, and 4,120, respectively. These numbers do
not include students who received special education diplomas (GOSA, n.d.-a).
In 2011, state departments of education were invited by the U.S. Department of
Education to submit waivers requesting flexibility in the method used for meeting NCLB goals
(U.S. Department of Education, n.d.). Georgia requested a waiver and received approval to
implement the College and Career Ready Performance Index (CCRPI) as evidence of its
accountability (GADOE, n.d.-b).
On December 10, 2015, President Obama signed the ESSA, replacing NCLB. ESSA was
designed to give states more flexibility in assessing student achievement. Standardized testing is
still required by ESSA, but is used as one of a battery of measures to assess each state’s progress
in closing academic gaps (U.S. Department of Education, n.d.). With the continuing reliance on
standardized testing as a measure of academic achievement, educators must search for successful
programs and methods for helping students increase their success rate on high stakes graduation
tests (exit exams).
Problem Statement
The problem is a lack of research on interventions designed to help students perform well
on standardized tests such as high school exit exams. Much research is available on predictive
factors that contribute to low student achievement, including poor attendance and behavior
problems (Balfanz, Herzog, & MacIver, 2007; Garriott, 2007; Jerald, 2007). Researchers have
also studied student disengagement (Balfanz et al., 2007b), intrinsic motivation (Organisation for
Economic Co-operation and Development [OECD], 2007), and early reading problems (Goffreda, Diperna, &
Pedersen, 2009; National Reading Panel, 2000; Scarborough, 1998; Scarborough, 2001; Snow,
Burns, & Griffin, 1998; Storch & Whitehurst, 2002; Whitehurst & Lonigan, 2001). There is
little research, however, that addresses methodologies for helping students successfully complete
their high stakes exit exams. Students who fail their exit exams are classified as dropouts and
face the same risk factors as students who drop out of school before graduation (Atkinson &
Geiser, 2009; Bruce, Getch, & Ziomek-Daigle, 2009).
Between 70% and 90% of students pass their exit exam on the first attempt, leaving a
substantial share who must retest (McIntosh, 2012). While some states are transitioning away from exit
exams, others are newly implementing such tests (McIntosh, 2012). Some states, such as Georgia, have replaced their
exit exams with other forms of standardized testing within their school accountability
systems (GADOE, n.d.-e). The gaps in the literature become quite evident for
those searching for empirically supported interventions for students who have failed standardized
tests. Whatever the form, standardized testing remains an essential part of the national academic
and accountability landscape.
Results from recent research, however, indicate that short-term, empirically-based
interventions, in the form of group counseling, academic support, and social frameworks, can
have a significant positive impact on student achievement (Ascher & Maguire, 2007; Brigman &
Campbell, 2003; Brigman, Webb, & Campbell, 2007; Savitz-Romer & Jager-Hyman, 2009).
Brigman et al. (2007) found statistically significant positive results in the implementation of the Student
Success Skills (SSS) program for students taking the Florida Comprehensive Assessment Test
(FCAT). In the present study, the researcher utilized the SSS curriculum (see Appendix A) to provide intervention,
via group guidance, for students who had previously failed one or more content areas of the
GHSGT.
Purpose Statement
The purpose of this experimental posttest-only control group design with randomization
was to determine if the 8-week guidance portion of the Student Success Skills (SSS) program
could have a statistically significant impact on the Georgia High School Graduation Test (GHSGT) scores of students
who had previously failed at least one portion of the GHSGT. The independent variable was
participation in the SSS program. The dependent variable was student achievement as measured
by GHSGT scores.
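To make the randomized assignment concrete, here is a minimal sketch in Python; the roster size, identifiers, and seed are invented for illustration and do not represent the study’s actual procedure or data.

```python
import random

# Hypothetical roster of students who failed at least one GHSGT content area;
# identifiers are invented for illustration only.
eligible_students = [f"student_{i:02d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so the example is reproducible
random.shuffle(eligible_students)

# Split the shuffled roster in half: the treatment group receives the
# 8-session SSS group guidance, the control group receives no intervention.
midpoint = len(eligible_students) // 2
treatment_group = eligible_students[:midpoint]
control_group = eligible_students[midpoint:]

print(len(treatment_group), len(control_group))  # 10 10
```

In a posttest-only design such as this, random assignment is what supports the comparability of the two groups before the intervention.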
It was not known whether students’ participation in the SSS program would affect their ability to
pass the GHSGT. Randomly selected students were guided through the SSS curriculum in
activities that facilitated cognitive and meta-cognitive skills, social skills, and self-management
skills. The scores of these treatment group students were compared to the scores of the control
group students to determine if the SSS intervention was statistically effective. Due to small
sample sizes and non-normally distributed data, the study employed Mann-Whitney U tests to determine
statistical significance for the null hypotheses.
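As a hedged illustration of this analysis, the sketch below runs a normality check followed by a Mann-Whitney U test in Python with SciPy; the score vectors are invented stand-ins for the actual GHSGT posttest data, and the study’s own analyses may instead have been run in SPSS (listed among the abbreviations).

```python
from scipy.stats import shapiro, mannwhitneyu

# Invented GHSGT posttest scale scores; placeholders, not the study's data.
control = [488, 492, 475, 501, 469, 483, 490, 478]
treatment = [495, 487, 502, 480, 491, 499, 485, 493]

# Shapiro-Wilk test of normality: a small p-value (< .05) suggests the
# scores are not normally distributed, one reason to prefer a
# nonparametric comparison with samples this small.
for label, scores in (("control", control), ("treatment", treatment)):
    w_stat, p = shapiro(scores)
    print(f"{label}: Shapiro-Wilk W = {w_stat:.3f}, p = {p:.3f}")

# The Mann-Whitney U test compares the two independent groups by ranks,
# without assuming normal distributions.
u_stat, p_value = mannwhitneyu(control, treatment, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
# The null hypothesis of no difference is retained when p >= .05,
# which mirrors the result this study reports.
```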
Significance of the Study
Early intervention has been studied as a strategy for students who struggle with taking
standardized tests. Research indicates that identifying students who exhibit academic risk factors
and establishing intervention plans for them produces statistically significant gains in academic
achievement (Allensworth & Easton, 2005; Garriott, 2007;
Kurleander, Reardon, & Jackson, 2008; Montes & Lehmann, 2004). Jerald (2007) indicates that
80%–85% of students who are at-risk for failing to successfully complete high school exhibit
observable signs of educational difficulty and school disengagement prior to entering high
school.
With a current national dropout rate of almost 25%, which includes students who
received certificates of attendance, educators are charged with raising the graduation rate and
exploring multiple options for helping students succeed academically (Stillwell et al., 2011; U.S.
Department of Education, 2010). This research, utilizing the SSS curriculum, sought to
determine whether a standardized group intervention process had a statistically significant
impact on students’ ability to pass the GHSGT. Specifically, this experiment studied the
implementation of a group intervention strategy for the purpose of helping students who had
failed to successfully complete one or more content areas of the GHSGT. This study contributes
to the research body regarding successful strategies that are available for school personnel to
provide intervention for students who have difficulty passing high-stakes exams.
While the GHSGT are no longer required for students to receive a diploma in Georgia,
other states have implemented and are still using exit exams for their seniors (McIntosh, 2012).
Additionally, Georgia schools still administer standardized exams that impact student scores and
school performance accountability measures such as CCRPI (GADOE, n.d.-e). This research
may be useful for K–12 educators seeking effective methods for improving scores on high-stakes
standardized tests, including high school exit exams. Also, policymakers may find
this study helpful as they develop accountability measures for students and schools.
The rural Georgia school that hosted this study failed to meet AYP for the 2009–2010
and 2010–2011 school years due to the low test scores of the students with disabilities subgroup
for both years and a low graduation rate in 2011 (GADOE, 2011). On February 9, 2012, the
state of Georgia was granted a waiver from the AYP requirements of NCLB. Under the auspices
of this waiver and during the time of this study, Georgia schools were required to meet state
standards via a comprehensive rubric called the CCRPI, including evaluation of students’ scores
on the Georgia EOCTs, GHSWT, SAT, ACT, Compass, Advanced Placement exams, and
International Baccalaureate exams (GADOE, n.d.-a). The CCRPI rubric is currently in use and
is updated yearly to reflect changes implemented by the Georgia Board of Education. The state
of Georgia implemented a new testing system, Georgia Milestones, in the spring of 2015;
it is a blended criterion-referenced and norm-referenced assessment (GADOE, n.d.-e). The
CCRPI has been updated to reflect these changes including the use of End of Course (EOC)
measures. The school in this study, as well as other schools, can draw on these results when
deciding how best to implement the SSS program to help students who struggle to pass
high-stakes exams.
Research Questions
RQ1: Is there a difference in Georgia High School Graduation Test scores in math for
students who participated in the group guidance portion of the Student Success Skills curriculum
as compared to students who did not participate in the group guidance portion of the Student
Success Skills curriculum?
RQ2: Is there a difference in Georgia High School Graduation Test scores in social
studies for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum?
RQ3: Is there a difference in Georgia High School Graduation Test scores in English
language arts for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum?
Null Hypotheses
H01: There is no significant difference in Georgia High School Graduation Test scores in
math for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum.
H02: There is no significant difference in Georgia High School Graduation Test scores in
social studies for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum.
H03: There is no significant difference in Georgia High School Graduation Test scores in
English language arts for students who participated in the group guidance portion of the Student
Success Skills curriculum as compared to students who did not participate in the group guidance
portion of the Student Success Skills curriculum.
Definitions
1. Certificate of Attendance - Students failing to pass the GHSWT and all four of the
GHSGT received a certificate of attendance rather than a high school diploma.
Students were allowed to retake any failed exam after their graduation date, but were
not awarded a diploma until they successfully completed all exams. Students were
allowed unlimited attempts to replace their certificate of attendance with a diploma
(GADOE, n.d.-d).
2. College and Career Ready Performance Index (CCRPI) - Georgia’s school
accountability instrument, first utilized in 2014, seeks to promote college and
career readiness in students by assessing their schools on student achievement,
student progress, and achievement gap. Schools are rated on a scale of 0–100, with a
possibility of earning up to an additional 10 challenge points. The CCRPI is scored
on a complex formula accounting for Georgia Milestones EOC scores, participation
in a career pathway, number of graduates, SAT and ACT scores, as well as the
number of students completing college level classes while still in high school.
Additionally, schools can earn points for the number of high school completers, the
percentage of underclassmen on track for graduation, and the number of teachers who
utilize the state’s data system (GADOE, n.d.-a).
3. End of Course (EOC) - As part of the Georgia Milestones Assessment System, high
school students are required to take EOC exams upon completion of Ninth Grade
Literature and Composition, Analytic Geometry, United States History, American
Literature and Composition, Geometry, Coordinate Algebra, Physical Science,
Economics, Algebra I, and Biology. The EOC grade counts as 20% of the student’s
final score in the course (GADOE, n.d.-e); a worked example of this weighting
appears in the sketch following this list.
4. End-of-Course Tests (EOCT) - Students were required to take EOCTs in the academic
areas of Physical Science, Biology, Ninth Grade Literature, American Literature,
Math I, Math II, United States History, and Economics as the final exam for these
courses. The EOCT scores counted as 15% of the student’s overall grade point
average (GPA) in the respective course. Beginning in 2013, students who passed the
EOCT in a content area could exempt the GHSGT in that content area (GADOE,
n.d.-c).
5. Exit Exams - Exit exams are minimum competency tests that must be passed before a
student can be awarded a high school diploma. The exams are designed to encourage
school systems and students to attain a level of academic achievement that allows
employers to be confident of the student’s level of achievement (Holme, Richards,
Jimmerson, & Cohen, 2010). In the state of Georgia, students who entered grade 9
from July 1, 1991, to June 30, 2011, had to pass both the GHSGT and the GHSWT in
order to earn a high school diploma (GADOE, n.d.-d).
6. Georgia High School Graduation Tests (GHSGT) – The GHSGT were required for
students who entered grade 9 from July 1, 1991, to June 30, 2011, in order to
demonstrate proficiency in the four content areas of ELA, math, science, and social
studies. These exams served as Georgia’s exit exam, provided insight into the
proficiency of high school students, and helped identify areas in which students
needed additional instruction (GADOE, n.d.-d).
7. Georgia High School Writing Test (GHSWT) – The GHSWT was an exit exam used
to assess writing for students who entered grade 9 from July 1, 1991, to June 30, 2011
(GADOE, n.d.-d).
8. High School Dropout - In Georgia at the time of this study, a high school dropout was
a student who failed to earn a diploma from an accredited high school or who earned
a GED. Students who completed their high school course work but were unable to
pass one or more of the GHSGT were also classified as dropouts (Downey, 2010; Stillwell et
al., 2011).
9. Student Success Skills (SSS) – The SSS program is a K–12 program designed as a
school counselor-led program that helps students develop cognitive, social, and self-
management skills leading to improved academic student performance. The program
has classroom, parent, and group guidance components. The group guidance
component of this program was used for this experiment (Brigman &
Campbell, 2003).
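To make the exam weights in definitions 3 and 4 concrete, here is a minimal arithmetic sketch; the student scores are invented, and only the 20% (EOC) and 15% (EOCT) weights come from the definitions above.

```python
def final_course_score(coursework_avg: float, exam_score: float,
                       exam_weight: float) -> float:
    """Blend a coursework average with an end-of-course exam score.

    exam_weight is the exam's share of the final grade: 0.20 for the
    Georgia Milestones EOC, 0.15 for the older EOCT (per the definitions above).
    """
    return (1 - exam_weight) * coursework_avg + exam_weight * exam_score

# Hypothetical student with an 84 coursework average and a 70 on the exam:
print(round(final_course_score(84, 70, 0.20), 2))  # EOC:  0.80 * 84 + 0.20 * 70 = 81.2
print(round(final_course_score(84, 70, 0.15), 2))  # EOCT: 0.85 * 84 + 0.15 * 70 = 81.9
```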
Summary
Until 2015, Georgia high school students were required to pass an exit exam, the
GHSGT, in order to receive a diploma. Many students across the state failed to pass this
high-stakes standardized test. Consequently, they were awarded certificates of attendance and
considered dropouts. A review of the literature revealed that there is a lack of research on
interventions designed to help students perform well on standardized tests such as high school
exit exams. The purpose of this experimental posttest-only control group design with
randomization was to determine if the 8-week guidance portion of the Student Success Skills
(SSS) program could have a statistically significant impact on the Georgia High School Graduation Test (GHSGT)
scores of students who had previously failed at least one portion of the GHSGT. This study will
be useful for K–12 educators in search of ways to improve test scores of students who struggle to
pass high-stakes standardized tests, as well as for policymakers as they endeavor to develop
accountability measures for students and schools. Chapter 2 presents the theoretical framework
underpinning the study and a review of the relevant literature.
CHAPTER TWO: LITERATURE REVIEW
Introduction
As educators in the United States continue to work to prepare the next generation for
participation in the global market, they face increasing pressure from the public to improve the
nation’s current graduation rate of just over 75% (Stillwell et al., 2011). NCLB, which was
enacted in 2001, required school systems to work toward having 100% of their students,
regardless of subgroup, meet or exceed state minimum proficiency requirements. Student scores for
schools and states were measured annually to track progress on ELA scores, math scores, and
high school graduation rate. Schools meeting the year’s set criteria were identified as having met
AYP. Schools not meeting AYP were classified as “needs improvement” and were required to
implement measures to increase student test scores and graduation rate (Bush, 2001).
The increasing dependence on testing as a measure of academic rigor and achievement
has had a profound effect on the teaching methodologies used by educators. Research results
have shown that intervention strategies focused on cognitive abilities, self-management, school
climate, promotion of extracurricular activities, and social skills can have positive influence on a
student’s successful completion of standardized tests (Brigman & Campbell, 2003; Brigman et
al., 2007; Bruce et al., 2009; Campbell & Brigman, 2005; Dennis, 2010; Miranda, Webb,
Brigman, & Peluso, 2007; Nichols, 2003). Further research by Campbell and Brigman (2005)
found that short-term group intervention that focused on cognitive, social, and self-management
skills yielded significant improvement in student performance on the FCAT.
Theoretical Background
The theoretical basis for standardized testing can be traced to various philosophies
including Piaget’s work in cognitive development (Gray, 1978; Jaffe, 1998), the psychometric
theories of Alfred Binet and Raymond Cattell (Cattell, 1971; Jaffe, 1998), and the social
cognitive theory espoused by Bandura (1991) and Zimmerman (1989). All of these theories
define calculable and quantifiable expectations and identify developmental tasks based on a
student’s age and educational level.
Cognitive Behavioral Theory (CBT), established by Aaron Beck in the 1960s, borrows
from both psychoanalytic theory and behavioral theory to propose that one’s thoughts and
illogical assumptions can cause one’s behavior to be positive or negative. The SSS program
used in this experiment is based on CBT as it works to help students establish goals and change
thought processes while working to increase academic knowledge as they prepare for exit exams
(Brigman et al., 2007).
Cognitive Development Theory
Jean Piaget, whose work preceded psychometric theory, defined stages in cognitive
development at which children and adolescents are able to comprehend information and acquire
knowledge. Piaget identified age-defined stages and patterns for proper development and stages
of cognition (Jaffe, 1998). His work further emphasized the continuous and changing
development of a child’s expected age-related competencies that could easily be identified as
benchmarks in today’s educational terminology (Gray, 1978).
Similarly, psychometric theories deal with the statistical measurement of differences in
individual’s cognitive abilities. According to these theorists, intelligence is quantifiable and
measureable. Their work provides the foundational thought that verbal, spatial, and
mathematical skills should be relatively standard at certain stages of development and are able to
be calculated via standardized means of measurement (Cattell, 1971; Jaffe, 1998).
Psychometric Theory
Alfred Binet and Raymond Cattell further advanced the importance of statistical
measurement in psychology and human development. However, Binet, in conjunction with
Theodore Simon, also laid the foundation for various versions of standardized testing with the
development of a battery of tests designed to measure mental abilities. The Binet-Simon
Intelligence Scale was unique in that it measured mental abilities, such as memory and attention,
rather than simply testing for specific academic achievement (Jaffe, 1998; Plucker & Esping,
2014). Cattell, influenced by his background in chemistry, further worked to identify the spheres
of personality and motivation through the use of the scientific method (Cattell, 1971). The work
of these researchers is easily identified as a precursor to today’s standards-based assessments.
Social Cognitive Theory
The tenets of the social cognitive theory can be found in the continuing reliance on
standardized testing to elicit positive change in academic processes within state and federal
departments governing curriculum advancement. The concept that successful learners
demonstrate self-regulated learning based on internal and external motivations and choices of
behavior can be found in the work of Paul Pintrich, a leading social cognitive theorist, when
viewing his three generalizations about the relationship between motivation and self-regulated
learning:
1. Students must feel confident they can accomplish what needs to be done.
2. Students must value and be interested in the required classroom assignments.
3. Students who are focused on self-improvement and the goals of learning and
understanding are more likely to be self-regulating (Pintrich & DeGroot, 1990).
In work that complements Pintrich’s generalizations, Zimmerman (1989) suggested a triadic view of self-
regulation with regard to students’ learning, environment, and behavior, indicating that these
factors influence an individual’s self-monitoring and affect change in self-esteem and personal
competence (Bandura, 1991; Zimmerman, 1989). Learners work toward self-regulation as they
set personal goals and monitor their own progress toward those objectives. A learner’s
accomplishments increase his or her feelings of self-efficacy, while focusing on failures
undermines personal value (Zimmerman, 1989).
Through the use of external stimuli such as AYP, monitoring of student scores on exit
exams, and distribution of monetary incentives for schools whose students perform well on tests,
governing agencies hope to elicit self-regulating processes within the local school system that
will demonstrate increased student performance. Colleges and universities exhibit the same
confidence in students’ abilities to self-regulate as they work toward their college admissions.
Students who continue to take college admissions tests, monitor their scores, work to improve
their scores, and retake the tests are seen as exhibiting self-regulating qualities that will make
them successful college students.
Cognitive Behavioral Theory
CBT proposes that one’s thoughts and illogical assumptions can cause one’s behavior and
beliefs to be either positive or negative. Following training in Freudian therapies, Aaron Beck
and his associates wanted to understand how they could reverse negative thinking in their clients
seeking treatment for depression. In his treatment of patients with unipolar depression, Beck felt
that maladaptive attitudes and illogical thinking led people to repeatedly view themselves and
their circumstances in a destructive light. He felt that these consistently harmful thought patterns
led to automatic and unrelenting negative thoughts that would flood the client’s mind leading to
depression.
Beck’s treatment initially utilized methods for changing cognitive processes and later
incorporated behavior-changing interventions. CBT therapists employ four phases to assist
clients in revising their thought patterns. First, the therapist introduces techniques for the patient
to increase his activities, working toward improving his mood. Time is spent each week creating
a schedule detailing activities for the week to follow.
The second phase of treatment teaches clients to recognize and record their automatic
thoughts. They are assigned to bring their list of automatic thoughts to the therapy session and
discuss them with the therapist. The job of the therapist is to help the client discover the reality
behind their automatic thoughts. The third phase of treatment continues as the therapist helps
clients discover the flaws and lack of logic in their negative automatic thoughts and begins to
help the person test and challenge their harmful attitudes. The fourth phase of CBT is when the
therapist helps clients begin to change their harmful attitudes, replacing them with more positive
approaches and behaviors. Overall treatment time for CBT is relatively short in comparison to
other types of therapy. CBT treatment ranges from nine to 25 sessions dependent on the
patient’s diagnosis and the severity of his symptoms (Comer, 2015).
The SSS program utilized in this research experiment is based on the premises of CBT.
In their study of CBT interventions appropriate for schools, Zyromski and Edwards (2015)
described SSS as “the only empirically supported school-based cognitive behavioral intervention
study to directly impact academic achievement” (p. 8). They describe SSS as a
program that utilizes a CBT approach to help students improve their educational goals,
behaviors, and strategies while assisting students in reducing their academic anxiety.
Review of the Related Literature
Historical Background
The history of standardized exams can be traced to 7th century A.D. Imperial China when
applicants for government jobs were required to complete essays that included writing poetry
and discussing Confucianism. The improvement and expansion of technology and modern
inventions, such as the printing press, improved paper manufacturing, personal computers, and
the Internet, have fueled the increased use of tests and remain key factors in the
growth of standardized testing (Mathews, 2006).
As the American focus in areas such as law, medicine, and manufacturing trended toward
standardization, educational leaders also promoted school homogeneity during the first two decades
of the 1900s, which was part of the Progressive Era (Sherman & Theobald, 2001; Wiebe, 1967).
Iowa established the first statewide school-improvement program, marking the first attempt at
developing a set of statewide educational standards (Sherman & Theobald, 2001). Atlanta
schools standardized teacher hiring and tenure, centralized purchasing
methods, and employed curricular practices designed to align schools to the best national
practices (“Progressive Era,” n.d.).
Horace Mann first advocated standardized testing for the public schools in the form of
essays, but the Stanford-Binet intelligence test is credited with initiating the trend
toward today’s focus on standardized testing due to its easily definable method for
assessing an individual’s mental abilities. With the inception of multiple-choice questions by
Frederick Kelly in 1914, standardized testing became increasingly prominent as an assessment
method. Ellis Island was a foundational testing center where immigrants were given a form of
standardized test as part of the processing and approval procedure for entering America (Barton,
2010; Jaffe, 1998; Mathews, 2006; Schlenoff, 2015).
Today’s use of standardized testing has morphed into a measure of learning rather than a
measure of intelligence and is utilized, with increasing frequency, for varying educational and
noneducational purposes. Admission to college, for the average student, relies heavily on SAT
or ACT scores (Mathews, 2006; Noble & Sawyer, 2002). With the implementation of
Proposition 48 in 1985, standardized testing has become more critical for students wishing to
participate in college athletics, with Division I colleges requiring a minimum of 700 on the SAT
combined verbal and math score or a composite ACT score of 17 as well as a minimum 2.0 GPA
in academic subjects (Klein & Bell, 1995; Wainer, 2006). NCLB utilized a school’s test scores,
attendance, and graduation rate in determining a school’s AYP status (Berger, 2000; Bush, 2001;
Goertz & Massell, 2005; U.S. Department of Education, 2010).
Test scores have also had far reaching effects outside of the educational setting. An IQ
cut score was the determining factor in the execution of Jerome Bowden in the 1986 case of
Bowden v. Georgia. Jerome Bowden was convicted of murdering two women and sentenced to
execution. Eight hours before he was to die on June 18, 1986, Bowden was issued a 90-day stay
of execution pending the results of an IQ exam. Bowden’s previous IQ score was 59. On the
second exam, however, Bowden scored a 65. The State Board of Pardons and Paroles ruled that
Bowden’s score was high enough to warrant his execution. Bowden was executed on June 24,
1986 (Human Rights Watch, 2001). The resulting public outcry questioned the validity of the
Board’s decision, which was seen as having been singularly based on a standardized test
(Amnesty International, 1996).
Accountability Measures
The overarching dilemma facing both politicians and educators is the need to establish
criteria that put student achievement in a measurable, yet understandable, form for the
stakeholders and taxpayers of the American public educational system. With increasing calls for
accountability in the public school arena, educators cannot ignore the fact that taxpayers and
politicians rely on tangible data and sets of numbers to make judgments on the progress their
schools are making. The utilization of prescriptive testing requirements to identify successful
schools, as well as schools that are failing to adequately prepare their students, has been ongoing
since the early 1900s (Hartel & Hermon, 2005).
When the Soviet Union successfully launched Sputnik in 1957, Americans began to
question the U.S. educational system’s ability to produce students who could lead the world in
math and sciences. The public began to put increased pressure on the public school system to
demonstrate and produce data on student performance (Barton, 2010; Mathews, 2006). The
1954 Brown v. Board of Education decision arguably began the process of providing equal educational
opportunities for all students in the American public school system, giving hope to many that the
achievement gap between students of varying races and socioeconomic levels would be
eliminated (Slavin & Madden, 2006). The U.S. educational system still struggles in its efforts to
narrow that achievement gap, and the debate continues over the use of testing as an adequate
measure of accountability for student academic success.
Brooks and Miles (2006) stated, “In the United States, 2001’s NCLB signaled the
beginning of an educational policy era marked by accountability and an emphasis on increasing
student achievement” (p. 26). Specifically, NCLB was intended to improve student achievement
by delineating the basics of what would be taught, establishing an expected level of performance,
constructing an equality of opportunity by coordinating the operations of a district, refocusing
the efforts of education on student learning, and alleviating variability by ensuring more
consistency from school system to school system and from state to state. NCLB additionally
sought to provide feedback on performance to students and parents, act as a benchmark for
expectations, create high expectations, and align instruction to the curriculum (Berger, 2000;
Goertz & Massell, 2005). In short, NCLB was intended to raise the standards of academic
achievement of students (Bush, 2001).
The premise of NCLB was to establish accountability for educators by focusing on the
performance of student cohorts as opposed to evaluating the scores of individual students.
Schools were evaluated on the proportion of students in each grade level achieving a defined
proficiency. The mandate indicated that the level of competency of various student subgroups
within the school was expected to increase each year until all cohorts reached 100% proficiency
by the year 2014 (Wiliam, 2010a). Hanushek and Raymond (2005) viewed NCLB as simply a
formalization of the move toward standardized testing that has been sweeping from state to state
since the early 1990s. ESSA is currently the most widely recognized accountability measure in
the public view, having replaced NCLB when it was signed into law on December 10, 2015.
Perspectives on Standardized Testing
As can be expected with almost any issue within the educational arena, one can find both
proponents and critics of the current trend toward testing as an indicator of student achievement.
An area of pervasive dispute between advocates and opponents of increased testing for
accountability purposes is whether a standardized test actually measures improved instruction
and increased acquisition of knowledge as well as the degree to which test results are affected by
other influences that might impact a student’s scores.
Some argue that a myriad of external factors that cannot be controlled in the educational
environment, such as the racial composition of a school (Hanushek & Raymond, 2005), student
grouping within the school (Figazzolo, 2009), general intelligence, work drive, self-discipline,
perseverance, and motivation (Geiser, 2009; Ridgell & Lounsbury, 2004), and even state-specific
educational policies (Hanushek & Raymond, 2005) may influence student achievement on
standardized tests. Opponents of standardized testing also claim that both the rewards and
consequences imposed on schools due to student test scores force schools to structure their
curriculum toward teaching to the test, rather than meeting the individual needs of students.
They argue that forcing schools to spend so much time preparing students for tests prevents
educators from helping students develop proficiency in necessary skills such as time-
management and short-range planning, which are essential competencies for success in the
post-secondary job market (Kitsantas, Winsler, & Huie, 2008; Nodding, 2004). Kornhaber
(2004) asserts that the current focus that policymakers have on using standardized testing for
attempting to solve many educational problems should be balanced by creating assessments that
enable students to successfully function as citizens in varying capacities outside the educational
setting.
Heilig (2011) found that high-stakes exit testing among English learners (ELs) in Texas
not only impacted their academic focus but also added stress, causing some students
to drop out of high school because they felt that they would not be able to successfully pass the
Texas Assessment of Knowledge and Skills. The study further examined the impact of exit
testing on the trust between parents and the school. Heilig found that parents of ELs tended to
implicitly trust that teachers and administrators would do what is best for their children, but the
students did not share that trust. Many of the students in this study indicated that focus on
preparing for the Texas Assessment of Knowledge and Skills negatively impacted their
education. So, while test scores of these EL students were improving, their dropout rates had
increased, with 60% of ELs not graduating in 2007 (Heilig, 2011).
While the majority of research is critical of standardized testing, Wang, Beckett,
and Brown (2006) chose to study both sides of the standardized assessment controversy. Their
investigation found that high-stakes assessment can drive and has driven improvements in
student achievement in the United States. U.S. students are still lagging behind in skills that are
necessary for success in the global market, but Wang et al. (2006) asserted that the changes
generated by standardized assessment have produced positive modifications at the school level in
learning models, curriculum, and staff development opportunities.
Some researchers argue that the educational system is too reliant on testing as a measure
of knowledge achievement (Geiser, 2009; Kornhaber, 2004; Nodding, 2004; Vogler, 2008;
Wiliam, 2010b) rather than implementing a variety of assessment types. Further questions have
been raised regarding the high-stakes testing required by NCLB and whether it has benefitted or
harmed students, including whether this type of testing moves educators toward teaching to the
test and away from teaching students to think critically (Nodding, 2004). Some argued that
NCLB was the equivalent of declaring academic martial law, forcing schools to focus on raising
student test scores while neglecting
other important school factors that were more difficult to measure (“Interview: Beyond ‘the
stone age’ of testing,” 2004). Even the widely accepted SAT is charged with having a negative
impact on poor and minority college applicants when used as a criterion for college admissions
(Geiser, 2009; Zwick & Himelfarb, 2011). Other researchers maintain that high-stakes testing is
punitive in nature and fails to adequately test students’ in-depth knowledge and ability to
function in the world. The argument is made that more focus must be placed on helping students
make the transition from high school to the workplace or to postsecondary education, rather than
requiring schools to expend so much time and energy on raising test scores (Conely, 2001;
Kornhaber, 2004; Nodding, 2004).
Those who disagree with the current focus on test scores argue that standardized
testing is heavily utilized because policymakers see it as a simple way to ensure that students are
taught a standard curriculum and view numerical test results as a way to prove to their
constituents that students are achieving at acceptable levels. It can be argued, however, that the
impact of testing on student learning has yet to be adequately established. Vogler (2008), along
with Firestone and Martinez (2007), reported significant uncertainty regarding the impact that
standardized testing has on actual classroom experience and instructional practice, while Pintrich
(1988) cautioned that any assessment program designed to improve instruction should be based
on strong theoretical models of student learning, motivation, and instruction rather than political
expediency.
Conversely, Phelps (2005) provided data promoting testing as a motivator for both
students and educators. He indicated that using test scores as stimuli for receiving awards,
whether monetary for school systems and schools, or more tangible prizes, as in the awarding of
a high school diploma to seniors, worked as inspiration for harder and more focused work for
teachers and students alike. Wiliam (2010a) concluded that “it is only through assessment that
we can find out whether instruction has had its intended effect” (p. 107). The fundamental
thought behind the entirety of this reform is that “schools will have to show through testing that
achievement is going up for all students and for those across lines of color, disability, income,
and English proficiency” (Kornhaber, 2004).
Both opponents and promoters of high-stakes testing cite teaching to the test as a negative
effect of the standardized assessment process. In their research on the effects of testing in both
Maine and Maryland school systems, Firestone, Mayrowetz, and Fairman (1998) found that “the
effects of state testing on teaching may be overrated by both advocates and opponents of such
policies” (p. 111). Their research indicated that state assessment programs had more impact on
reorganizing learning opportunities for educators than impacting specific teaching behaviors and
curriculum changes within the classroom. Vogler (2008) found that 83.2% of responding
Mississippi educators reported spending some portion of their instructional hours preparing
students for graduation exams. Additionally, 61.9% of the studied educators from Mississippi
indicated that they spent at least 20% of their school year prepping their students for
standardized tests.
Other research studies view achievement from a differing perspective and indicate that
factors such as general intelligence, work drive, self-discipline, perseverance, and motivation are
the strongest predictors of student success (Geiser, 2009; Ridgell & Lounsbury, 2004). Kitsantas
et al. (2008) found that students exhibiting strong time-management skills and short-range
planning ability were more successful in the postsecondary setting. Focusing so much of the
school’s effort on testing, and on achieving a certain level of scores in order to meet state and
federal standards, is viewed by many as preventing schools from helping students develop the
life management skills necessary for success in postsecondary education and careers.
Kornhaber (2004) argued that the creation of assessments that enable students to successfully
function outside of the school setting may be more beneficial to students than the current
political focus on using standardized testing to attempt to solve educational problems.
Even at the federal level, opinions on the best methods for measuring student
achievement continue to evolve. On January 31, 2010, The New York Times reported that the
Obama administration recommended massive changes to the NCLB legislation. While pledging
to maintain the spirit of the law, the White House sought to implement changes in the funding
formulas based on a school’s academic progress via as-yet-unnamed assessments rather than
assessing AYP through standardized testing. The plan included recognizing schools that were
succeeding and utilizing funds to improve or close schools that were unsuccessful in closing the
academic gap (Dillon, 2010). The goal of the proposed reform was for students to be college or
career ready upon earning a high school diploma (Dillon, 2010). Funding for the accompanying
assessments would have been provided through Race to the Top, a grant program for states
working to provide innovative reforms to increase student achievement and work readiness (U.S.
Department of Education, 2009).
While the 2010 reform proposal was never passed into law, continuing efforts were
made to revise NCLB. One revision offered in the 2015 legislative session sought to
reduce some of the federal control over student achievement. Those who opposed this measure
indicated that the proposed legislation also decreased federal funding for educational programs
and argued that states might not continue their efforts for student achievement without federal
oversight (Executive Office of the President, 2015).
Regardless of arguments on both sides of the political debate, it is clear that student
achievement and the increasing dropout rate are educational concerns that warrant the attention of
educators, parents, and politicians alike. As the debate continues regarding the best way to
provide quality, measurable educational services to all children, many educators feel caught in a
vortex of negative public opinion and increasingly stringent controls and expectations from the
state and federal levels.
Success on standardized tests. While many researchers have conducted studies on the
overall reliability and broad educational effects of standardized testing, little research targets
specific interventions for student success on high school exit exams. Studies in California,
Indiana, Tennessee, Georgia, and Florida have focused on high school exit exams (Brigman et
al., 2007; Bruce et al., 2009; Dennis, 2010; Kurleander et al., 2008; Nichols, 2003;
Pedraza-Vidamour, 2008). Three of these research projects focused on finding predictive
factors for student success on standardized tests.
In searching for predictive factors for successful completion of the California High
School Exit Examination (CAHSEE), Kurleander et al. (2008) found that students' eighth grade
Algebra I grades, in combination with their eleventh grade GPAs, were a strong predictor of
potential failure on the CAHSEE for first-time test takers, especially when
viewed in light of students’ socioeconomic status. Additionally, their research revealed that
students scoring significantly below average on other standardized California assessments had
very low rates of passing the CAHSEE on their first attempt. This research corroborated the
results of a study conducted by Silver, Saunders, and Zarate (2008) who found that test scores at
every middle school level were predictive of successful high school graduation.
Similarly, Nichols (2003), in an effort to describe students who failed to pass Indiana’s
ELA and mathematics exit exams, examined students’ ninth, eighth, sixth, and third grade
standardized test results, along with their attendance and GPAs. This study found that English
and math achievement on earlier standardized tests, high absenteeism (indicative of a lack of
student engagement), and elementary and middle school grades were strong predictors of failure
on high school exit exams.
Dennis (2010) utilized testing data in conjunction with other factors to predict overall
student academic success in the classroom. She studied criterion-referenced test scores from the
Tennessee Comprehensive Assessment Program, along with additional assessments that
measured “phonemic awareness, phonics, fluency, vocabulary, and comprehension skills” (p.
285) to determine core reading weaknesses for her sixth grade students. Through the use of the
combined data from these appraisals, she was able to target her students’ unique areas of
weakness in relation to their reading achievement so that she could create interventions within
her classroom for helping her students increase their academic achievement.
Standards-based assessments. Testing in the 1980s and 1990s utilized the concept of
assessment of authentic student work. Student portfolios were often the chosen method for
assessment and proved to have a positive impact on student learning. This type of appraisal,
however, was costly, and its reliability was difficult to establish in traditional psychometric
terms (Wiliam, 2010a). The pendulum therefore swung back toward more easily administered
forms of assessment.
In recent decades, criterion-referenced testing, which is designed to measure specific
knowledge of a particular set of information and skills, has become the standard for measuring
student success on multiple educational fronts and for determining progression to the next grade
or educational level in many school systems nationwide (Atkinson & Geiser,
2009; FairTest, 2007). Twenty-six states require some form of exit exam as a criterion for
issuing diplomas to high school graduates (GreatSchools Staff, n.d.; Pytel, 2007). College
admissions testing has grown exponentially (Atkinson & Geiser, 2009) with most colleges
requiring scores from the SAT or ACT for admission to their institutions. Even technical
colleges now require students to attain set scores on the ASSET or COMPASS tests for
admission to various programs in the curriculum (Technical College System of Georgia, 2015).
High School Exit Exams
National trends for exit exams. The significance of researching methods for helping
students successfully complete standardized tests becomes evident when viewing the ever-
changing landscape of exit exam testing in the public school system. The most recent report
from the Center on Education Policy (CEP) highlights the current status and expected changes in
testing in American schools (McIntosh, 2012). The report found that almost 70% of students
across the U.S. continue to be impacted by exit testing. While four states, including Georgia, are
phasing out comprehensive exit exams, many states are transitioning to EOC exams, which are
typically more closely aligned with the Common Core State Standards.
Many states are now using, or plan to use, exit exams as evidence of college and career
readiness. In the CEP report, Georgia indicated that its high school exit exam was intended to
assess college and career readiness for its high school graduates. However, it is interesting to
note that 17 of the states responding to this report indicated that scores from their high school
exit exams are not used in college decisions about student admission, while other states either did
not know that information or did not confirm it for this report (McIntosh, 2012).
The report also indicated that 25 states require students to pass an exit exam in order to
receive a high school diploma, with Rhode Island planning to implement an exit exam for 2014
graduates (McIntosh, 2012). Most states use either a comprehensive exam or EOC exams. A
comprehensive exam is typically standards-based and administered in the student’s 10th or 11th
grade year. The tests vary from state to state in the number of subject area tests that comprise the
overall exit exam. States typically test from one to four academic areas. All states test students
in some form of ELA (reading, writing, etc.) and most states assess students’ math skills on their
exit exam. Some states include science and social studies as part of their test battery.
EOC exams are content area tests given on completion of a specified academic course.
Some states require that students pass the EOC in specified content areas to earn a diploma,
while other states do not require a passing grade but count the EOC as a percentage of a student’s
course average. Two states, Georgia and South Carolina, were using both types of exams
simultaneously at the time of the CEP report (McIntosh, 2012).
Nationally, initial pass rates on high school exit exams range between 70% and 90%.
Students who initially fail their exit exam(s) are typically provided multiple
opportunities to retake the test. States offer varied retake opportunities before a student’s date of
graduation. Some states offer unlimited opportunities to reattempt the test, while others allow
only two additional attempts. Most states offer additional testing opportunities through the
summer and during the school year until the time of the student’s graduation (McIntosh, 2012).
The state of Georgia allowed students to continue to take the test after the date of graduation,
with the possibility of unlimited attempts (GADOE, n.d.-d). Twenty-two states offer alternate
paths to graduation for students who cannot pass the high school exit exam, and several offer
alternative paths specifically for students with disabilities or limited English proficiency.
Some states are moving away from their exit exam policies. Tennessee, North Carolina,
Alabama, and most recently Georgia have transitioned from exit exams to EOCs as their form of
student assessment (McIntosh, 2012). Over the course of this research, Georgia allowed the
EOCT in any academic area to substitute for the exit exam in the same academic area as an
alternate path for graduation (GADOE, n.d.-d). Twelve states allow students to substitute scores
from an alternative assessment, such as the SAT or ACT, for failing exit exam scores (McIntosh,
2012). Georgia and six other states allowed students to appeal their failing test scores after
meeting specific criteria (McIntosh, 2012). In 2015, legislation was passed that allowed Georgia
students who had failed any portion of their exit exam, the GHSGT, to submit an appeal to their
local school board. Upon verification that the student had completed all coursework and had
been denied a diploma solely due to failing GHSGT scores, the student was awarded a high
school diploma (GADOE, n.d.-d). Similarly, Texas Senate Bill 149, enacted in May 2015,
allows students in the graduating classes of 2015, 2016, and 2017 to receive their diplomas if
they fail only one of their three exit exams. As expected, these retroactive measures have
diploma. Much of the dispute comes from politicians who feel that allowing students to have a
diploma without having passed all of the graduation tests essentially reduces the significance of
having earned a high school diploma (Gewertz, 2016).
Another area of concern with the current changes in exit exam practices is the unsettled
nature of the literature surrounding this subject. Baker and Lang (2013) stated that, “The
existing literature reaches inconsistent conclusions about the consequences of exit exams” (p. 7).
They found that exit exams had very little impact on graduation rates, employment, or wages,
but did have a small, implied impact on future incarceration. Hemelt and Marcotte (2013) found
that high school exit exams increased the dropout rates for seniors, especially among African-
American students. Their research also indicated an increase in female dropouts for states with
high school exit exams. Papay, Murnane, and Willett (2014) found that students who failed
their high school exit exam the first time they attempted it were less likely to attend a post-
secondary institution.
It is evident that the landscape of high school exit exams is ever-changing, with varying
opinions on what is right or wrong with exit exam policies. McIntosh (2012) addressed the
frequent changes in state testing policy as she described her data collection methodology of
having states verify and respond to surveys and state profiles for the report. In her caveat she
stated, “However, because events in this field move quickly some policies will
undoubtedly have changed soon after publication of this report” (McIntosh, 2012, p. 5). An
example of the frequent change in testing policy as indicated by McIntosh can be found in the
timeline of modifications to Georgia’s testing program from the beginning of the exit exam
program to the present.
History of exit exams in Georgia. Exit testing in Georgia has undergone many changes
in the last three decades. In the 1980s, the Basic Skills Test was implemented as a form of exit
exam. For students entering ninth grade in 1991, however, eligibility to graduate was
based on successful completion of the newly implemented GHSGT and GHSWT, a battery of
tests in math, ELA, science, social studies, and writing. The tests were based on the state’s
Quality Core Curriculum (GADOE, n.d-d).
The A+ Educational Reform Act of 2000 mandated the additional implementation of
EOCTs in Algebra I, Algebra II, Physical Science, Biology, U.S. History, Economics, Ninth
Grade Literature, and American Literature. Students were required to take these tests, but their
scores did not greatly impact their course grade until 2003, when the EOCT score was counted
for the first time as 15% of the student’s final grade in the specified course (GADOE, n.d-c). In
2004, the state of Georgia began transitioning the GHSGT to reflect the newly implemented
Georgia Performance Standards. The requirement for passing all five tests remained the same
under the new performance standards (GeorgiaStandards.org, n.d.).
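As a simple illustration of the EOCT weighting described above (a hypothetical example,
assuming the remaining 85% of the final grade is the student’s course average), a student with a
course average of 80 and an EOCT score of 70 would receive:

\text{final grade} = (0.85 \times 80) + (0.15 \times 70) = 68.0 + 10.5 = 78.5

Under the 20% weighting later applied to students entering ninth grade after July 1, 2011, the
same scores would yield (0.80 \times 80) + (0.20 \times 70) = 78.0.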
High school math courses in the state of Georgia were changed with the incoming ninth
graders of 2008 to Math I, Math II, Math III, and Math IV to reflect a combined curriculum of
Algebra, Geometry, and Statistics in sequential levels for each course. The EOCTs for math
were changed to reflect the new math curriculum (GADOE, n.d.-b).
In April of 2011, the Georgia State Board of Education ruled that students entering ninth
grade that year would no longer be required to pass all four academic GHSGTs if they
were able to demonstrate proficiency in an equivalent academic area by passing one of the two
EOCTs in that academic group. This ruling allowed students to meet their exit exam
requirement by successfully completing the GHSGT or an EOCT in math, ELA, science, or
social studies. Under the new ruling, EOCT scores counted 15% of the student’s final grade in
that academic course. All students, however, were still required to pass the GHSWT. This
change was made retroactive so that all students at that time could use either the GHSGT or the
EOCT to satisfy their exit exam requirement (GADOE, n.d.-e). An additional change was made
requiring the EOCT grade to count 20% for those students who entered ninth grade after July 1,
2011 (GADOE, n.d.-b).
With the incoming ninth graders in 2012, the state of Georgia transitioned to the
Common Core Curriculum. The math courses were changed to reflect Common Core standards,
and students began taking the EOCTs in Coordinate Algebra and Analytic Geometry. The
EOCT continued to count as 20% of a student’s final grade in academic classes requiring an
EOCT (GADOE, n.d.-a; Common Core Standards Initiative, 2009).
With the incoming sixth graders of 2014, the state of Georgia is implementing another
change in its testing landscape, the Georgia Milestones Assessment System. According to the
GADOE website, Georgia Milestones currently consists of both end-of-grade (grades 3–8) and
end-of-course (grades 9–12) measures. The EOC assessments include Ninth Grade Literature
and Composition, American Literature and Composition, Coordinate Algebra, Analytic
Geometry, Physical Science, Biology, U.S. History, and Economics (GADOE, n.d-e).
Even as the state of Georgia is implementing on-going modifications to its current
assessment program, it joins other states in simultaneously making ex post facto changes to
previous assessments. Georgia House Bill 91, enacted in March 2015, allows students to petition
their school district to receive their diplomas. Students are eligible to petition for their diplomas
if they have passed all required coursework to graduate and were previously denied a diploma
solely on the basis of having failed one or more graduation tests (GADOE, n.d.-d). Since the
passage of this bill, Georgia has granted more than 17,000 diplomas to former nongraduates, and
the number of students receiving diplomas years after completing high school continues to rise
(Gewertz, 2016).
It is evident that the face of testing has changed multiple times in the state of Georgia and
continues to evolve throughout the nation. What has remained static, however, is the fact that
standardized testing continues to be a critical assessment factor for student and school
accountability, even as states transition from the NCLB accountability measure known as AYP
to new accountability methods.
In Georgia’s current accountability instrument, the CCRPI, the importance of test scores
continues to be evident. The new assessment tool comprises a rubric with multiple factors.
A significant area of the rubric evaluates students’ scores on the Georgia EOCs, SAT, ACT,
Compass, Advanced Placement exams, and International Baccalaureate exams (GADOE, n.d-a).
Implementing methodologies that help students successfully complete these standardized exams
will benefit students as they progress toward postsecondary education and will help the school
as it seeks to meet state standards.
Georgia’s graduation assessments. At the time of this study, the state of Georgia
utilized two forms of criterion-referenced testing, the GHSGT and the EOCT, to assess student
progress and determine eligibility for graduation from high school (GADOE, n.d.-g; GADOE,
2011; Simpson, 2009). Seniors were ineligible for a diploma until they were able to pass either
an EOCT or a GHSGT in the academic areas of math, ELA, science, and social studies.
Students’ scores on the GHSGT functioned as the dependent variable for this experimental study.
More information regarding the GHSGT and EOCT is located in Chapter 3.
High School Dropouts
In the current global society, high school students are expected to emerge from their late
teen years educated and ready to successfully enter post-secondary education or the job market
(Balfanz, 2008). The minimum expectation is that each high school student will earn a high
school diploma. However, the national high school graduation rate is currently 81%, indicating
that about one fifth of U.S. students fail to earn a diploma during their high school years (U.S.
Department of Education, 2015).
Minorities were disproportionately represented in these statistics, with Blacks and
Hispanics having graduation rates of only 61.5% and 26.35%, respectively. In the same time
period, the state of Georgia had 20,135 dropouts, with only 55.4% of Hispanic students
graduating. Students who received certificates of attendance or a GED were not classified as
graduates (Downey, 2010; Stillwell et al., 2011). According to the 2009 State of Georgia Report
Card, students receiving certificates of attendance instead of high school diplomas in 2009
numbered 2,894 out of 93,790 high school completers (GOSA, n.d.-a).
According to Swanson (2004), there is a continued downturn in graduation rates
nationwide. In 2010, one in three seniors failed to earn a high school diploma, with minorities
and disadvantaged students comprising a large proportion of that group. Swanson found that
despite some positive increases in graduation rates between the late 1990s and 2005, the national
graduation rate stood at the same level it was in the 1960s.
Surprisingly, 25% of the 14,000 school districts in the U.S. account for more than
250,000 nongraduates. These school districts are typically large city districts and countywide
systems. The nation’s largest school system, the New York City Public School System, reports
the largest number of dropouts: over 44,000 students each year (Swanson, 2004). While much
research has been done on predictive factors for high school dropouts, little research is available
that investigates intervention methodologies for students who are at risk for failing to
successfully complete their high school exit exams.
Factors That Influence Student Achievement
Much research can be found, with differing results, on the best methodology for
increasing student achievement. In his research on the validity of using standardized testing to
determine the quality of a school’s educational program, Wiliam (2010a) found that variation in
educational setting and practices yielded little difference in overall test scores. He purported that
understanding the extent to which differences in test scores represent differences in the quality of
a student's education (construct-relevant variance) rather than varying factors, such as the amount
of parental support and differences in achievement levels of students prior to entering the school
system (construct-irrelevant variance) is critical in evaluating a school's overall effectiveness.
Isaacson (2009), however, argued that raising the standards in American schools must be
accomplished through a set of unambiguous, definable standards that are accompanied by
assessments that determine whether those standards have been reached. He asserted that the
current public school system is encumbered by confused and disjointed standards that are
dependent on one’s state or local interpretation. Many states actually lowered their standards in
order to comply with federal requirements (Isaacson, 2009). This line of thought has led to the
increased involvement of various organizations voicing their opinions on the best methods for
determining appropriate national standards for academics in the public school setting.
Regardless of one’s philosophical viewpoint on national standards and high-stakes testing
or on the superior methodology for increasing student academic and work readiness, the fact that
American student achievement has declined in the last forty years is undeniable. In a 2006
international assessment of 30 countries, American 15-year-olds plummeted to a rank of 25th in
mathematics, 15th in reading, and 21st in science. Swanson (2004) reported that, nationally, one
third (3.1 million) of the members of the 2010 high school graduating class failed to earn a
diploma. Statistics also revealed that college and university graduation rates dropped to 14th in
2006, with the United States holding the second highest college dropout rate of 27 countries in
that same year (Jerald, 2008). Questions remain, however, as to the best methods for
implementing educational reform and the most accurate means of assessing student achievement.
Whether the emphasis will remain on standardized testing or shift to another type of assessment,
it is most likely that students will continue to be tested in some way to ascertain their level of
achievement.
Varying federal and international agencies have conducted research, written reports, and
promoted reform measures—all aimed at identifying causes and cures for low student
achievement and work readiness. In 2000, the OECD investigated the relationship between
student learning and other factors that may have an influence on achievement by researching the
preparedness of young adults for the work force. The OECD authorized a long-term study,
Programme for International Student Assessment (PISA), to investigate students’ (a) ability to
analyze, reason, and communicate, and (b) capacity for lifetime learning (OECD, 2000; Wiliam,
2010a).
Utilizing a standardized test administered in three-year cycles, PISA tested students from
57 countries in reading, math, and science. Correlating student performance indicators with
nonacademic factors such as motivation, self-efficacy, personal value, enjoyment of science, and
student optimism, the data from this assessment suggested that nations with higher national
incomes performed better in science and that parental attitudes toward education were positively
significant for student learning (Figazzolo, 2009; MacGaw, 2008; OECD, n.d.).
Similarly, Hanushek and Raymond (2005) suggested that the racial composition of a
school may influence student achievement. Other research indicated that a complex combination
of factors such as a student’s family and school experiences, race, socioeconomic status, and
GPA, along with individual circumstances, have been found to be indicative of a student’s
overall success (Atkinson & Geiser, 2009; Bruce et al., 2009; Dryfoos, 1990; Franklin, 1992;
Geiser, 2009; Montes & Lehmann, 2004). Other researchers found that failing classes in middle
school and poor attendance are strong predictors for poor student achievement as well as
indicators of the lack of student engagement in school (Balfanz et al., 2007a, 2007b; Bridgeland,
Diluilio, & Morison, 2006).
Other research has focused on whether or not increasing state and school accountability
factors has a positive impact on student achievement. Hanushek and Raymond (2005) found that
the Black–White gap actually increased from 0.39 to 0.52 with the implementation of state
accountability measures, while the Hispanic–White achievement gap decreased from 0.63 to
0.44. Wiliam (2010a) found that variation in educational setting and practices yielded little
difference in overall student achievement.
Some agencies argue that the implementation of rewards and sanctions yields a positive
effect on student achievement. In a 2002 report, the National Alliance of Business stated that “it
is increasingly clear to business leaders that the public education system is simply not going to
respond sufficiently to reformers without incentives to perform at high levels” (p. 8). The
implementation of rewards and incentives, such as merit pay, administrative bonuses,
scholarships based on achievement, and graduation tests is advocated by the National Alliance
of Business as a viable method for encouraging higher student achievement (National Alliance
of Business, 2000).
Hanushek and Raymond (2005) concluded that federal and state accountability measures
“lead to overall improvements in student performance on National Assessment of Educational
Progress mathematics and reading tests, but they do not uniformly meet the objective of closing
achievement gaps” (p. 314). They suggested conducting more research to determine the causal
effects of rewards and sanctions for states and educators with regard to the results of student
achievement via standardized testing.
Predicting Student Success
High school dropouts are less likely to find employment and currently have jobless rates
exceeding 50%. Their earnings are significantly lower than those of their peers who graduated
with high school diplomas. They face lower earning potential, higher unemployment, a higher
risk of poverty, and an increased risk of incarceration. Female dropouts are 6 times more likely
than their college-educated peers to become mothers at a young age, 9 times more likely to be
single mothers, and are more likely to experience social hardships (Sum, Khatiwada,
McLaughlin, & Palma, 2009). Students who complete all of their high school coursework, but
fail to complete their exit exams face the same consequences as students who drop out of high
school (National Center for Education Statistics, 2011; Pedraza-Vidamour, 2008; Technical
College System of Georgia, 2015).
Research indicates that failing to finish high school can be associated with a complex
combination of risk factors involving a student’s family and school experiences, along with
individual circumstances and potential risk factors (Dryfoos, 1990; Franklin, 1992; Montes &
Lehmann, 2004). Factors such as race, socioeconomic status, gender, and GPA have been
found to be indicative of a student’s overall success (Atkinson & Geiser, 2009; Bruce et al.,
2009; Geiser, 2009; Silverstein, 2000).
NCLB charges educators with the task of raising the graduation rate for schools in order
to meet the mandated AYP requirements (U. S. Department of Education, n.d.). Research
suggests that utilizing predictive factors and group intervention strategies for students who are
at risk for not graduating from high school is effective in reducing the dropout rate
(Allensworth & Easton, 2005; Brigman & Campbell, 2003; Garriott, 2007; Miranda et al., 2007;
Montes & Lehmann, 2004). Factors such as behavior problems, failing grades in math, failing
grades in English, and poor attendance have been found to be strong predictors of future dropout
risk, as well as indicators of the lack of student engagement in school (Balfanz et al., 2007a,
2007b; Bridgeland et al., 2006; Garriott, 2007).
The primary purpose of identifying students at risk for dropping out prematurely or not
meeting graduation requirements is to target students for early intervention strategies
(Kurleander et al., 2008). Jerald (2007) indicated that dropping out of school is easily
predictable for 80–85% of students who leave high school without graduating. Identifying
students who are disengaged from the school environment and providing intervention strategies
can assist them with completing the myriad of requirements for graduation (Campbell &
Brigman, 2005; Jerald, 2007; Spears, 2008).
While a student's GPA remains the most reliable statistical indicator of students’ success
in postsecondary education (Atkinson & Geiser, 2009; Burton & Ramist, 2001; Geiser, 2009;
Gwynne, Lesnick, Hart, & Allensworth, 2009), the literature abounds with searches for other
predictive risk factors to determine students’ potential for academic failure. Predictive factors
for student achievement have been studied in
physical education (Lacy & LaMaster, 1996), reading (Morris, Bloodgood, & Perney, 2003;
Snow et al., 1998), disengaged and maladjusted students (Balfanz et al., 2007b; Janosz, LeBlanc,
Boulerice, & Tremblay, 2000; Simons-Morton & Chen, 2009), behavior problems (Balfanz et al.,
2007b; Garriott, 2007), failing classes (Balfanz et al., 2007b; Garriott, 2007), and poor
attendance (Jerald, 2007). Researchers have also found a significant predictive correlation
between early academic factors, such as achievement in middle school math and English classes,
standardized testing skills, and student performance on state exit exams (National Center for
Education Statistics, 1992; Nichols, 2003).
At the elementary level, predictor variables that determine early reading difficulties have
been investigated (Goffreda et al., 2009; National Reading Panel, 2000; Scarborough, 1998;
Scarborough, 2001; Snow et al., 1998; Storch & Whitehurst, 2002; Whitehurst & Lonigan, 2001)
for the purpose of providing early intervention to help students get their reading skills up to
grade level. Morris et al. (2003) found that first and second grade reading achievement could be
effectively predicted in the middle of students’ kindergarten year. Balfanz et al. (2007b)
concluded that 60% of students who would fail to graduate could be identified in the sixth grade
by observing their attendance, behavior, and grades in English and mathematics.
After academic achievement, school attendance has proven to be the factor most highly
correlated with failure to graduate from high school. Poor attendance is an
indicator of a student’s disengagement from school and has been found to be strongly predictive
of a student’s probability for becoming a dropout (Allensworth & Easton, 2007; Balfanz, 2009;
Christle, Jolivette, & Nelson, 2007; Neild & Balfanz, 2006; Silver et al., 2008).
Research studies indicate that it is important to consider motivational factors, although
they are more difficult to assess quantifiably, when attempting to find variables for student
success in academics. Intrinsic motivation, self-regulated learning, and students’ ability to value
their learning goals are significant predictive variables in their ability and desire to successfully
perform academic tasks (Amrein & Berliner, 2002; Barr & Dreeban, 1991; Hamilton & Akhter,
2009; Pintrich & DeGroot, 1990). Intrinsic motivation has been found to correlate (0.88) with a
student’s achievement in science (OECD, 2007).
Students’ scores on high-stakes tests are intended to be used as motivators for both
students and educators. Students who fail to adequately perform on tests may be denied a high
school diploma or admission to the postsecondary institution of their choice (Pedraza-Vidamour,
2008; Technical College System of Georgia, 2015). However, some research studies have
indicated that test scores fail to provide positive motivation for student achievement. Instead,
some students view high-stakes testing as an insurmountable task. Many of these students
assume that they will never be able to pass their exit exams and subsequently choose to drop out
of school rather than face failure (Heilig, 2011).
Educators and schools are held accountable for test scores with financial consequences
and loss of autonomy in directing their school’s educational practices resulting from poor
performance (GADOE, n.d-a; Phelps, 2005). Both intrinsic and extrinsic motivation, along with
factors such as perseverance, self-discipline, and work drive, should be considered by educators
and researchers looking for ways to help students successfully complete their high school
graduation goals and, at the same time, raise graduation rates and attain accountability standards.
Cognitive Skills. The integral relationship between cognitive skills and academic
achievement is generally accepted by those who work with children and adolescents. The
literature identifies working memory capacity, processing speed, and spatial ability as specific
cognitive abilities that impact student achievement (Conway, Cowan, Bunting, Therriault, &
Minkoff, 2002; Rohde & Thompson, 2007). While the relationship is
acknowledged, the determination of specific cognitive factors that may have a causal effect on
academic achievement is still a topic open to research.
Luo, Thompson, and Detterman (2003), utilizing the Cognitive Abilities Test, the
Wechsler Intelligence Scale, and the Metropolitan Achievement Test, found that processing
speed was a causal factor in the positive correlation between intelligence and academic
performance in children and preteens. In a replication of Luo et al.’s 2003 research, Rohde and
Thompson
(2007) found that general cognitive ability, spatial ability, and perceptual speed positively
contributed to mathematical achievement in young adult males, as measured by SAT math
scores; however, their research further indicated that general cognitive ability measures are more
accurate predictors of academic achievement than individual measures of working memory,
spatial abilities, and processing speed.
Social Skills. The impact that a student's social skills have on his or her ability to feel
connected in the school setting has long been noted by those who work regularly with students.
Research, however, indicates that social skills can also considerably impact a student’s
overall academic achievement. Malecki and Elliott (2002) found that social skills were
statistically significant predictors of academic competence and academic achievement.
Similarly, Wentzel (1994) concluded that students with high social responsibility goals were
statistically more likely to post higher standardized test scores and GPAs.
Proponents of social-emotional learning (SEL) have identified four crucial areas that
relate to academic competence: self-awareness, self-management, relationship skills, and
responsible decision-making (Jones & Bouffard, 2012). Research studies indicate that SEL
positively impacts academic performance, as well as other personal/social aspects of life (Durlak
& Weissberg, 2011; Zins, Weissberg, Wang, & Walberg, 2004). In a meta-analysis of over 200
studies of SEL-type programs, Durlak and Weissberg (2007) found that academic performance is
positively affected by the implementation of programs that work to improve relational/emotional
aspects of student learning such as school climate, self-esteem, social-emotional skills, and
school bonding. Other meta-analytic research found that programs focusing on students’
personal/social skills positively impacted participants' standardized test scores, with a
standardized mean difference of 0.31 (Durlak, Weissberg, & Pachan, 2010). Similarly, in a
meta-analysis of 370
out-of-school programs, Lauer et al. (2006) found that programs with a combination of academic
and social components had a statistically positive effect on students’ mathematics and reading
scores. Other research studies indicated that programs with counseling components had
statistically positive effects on reading and mathematics achievement for high school students
(Brigman & Campbell, 2003; Bruce et al., 2009; Campbell & Brigman, 2005).
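For reference, a standardized mean difference of this kind is conventionally computed as
Cohen’s d. The following is a minimal sketch assuming the common pooled-standard-deviation
estimator; the exact estimator used in these meta-analyses is not specified here:

d = \frac{\bar{X}_{T} - \bar{X}_{C}}{s_{p}}, \qquad
s_{p} = \sqrt{\frac{(n_{T}-1)s_{T}^{2} + (n_{C}-1)s_{C}^{2}}{n_{T}+n_{C}-2}}

where \bar{X}_{T} and \bar{X}_{C} are the treatment and control group means. Under this
metric, the reported 0.31 indicates that participants scored roughly one third of a standard
deviation higher than nonparticipants.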
Self-Management and Motivation. The development of competent management skills
can also have a direct impact on a student’s overall performance in the academic arena. Students
who are better able to manage multiple aspects of their academic world and who exhibit intrinsic
self-efficacy skills are usually able to better perform in school. Research studies also indicate
that students with better self-management skills, self-efficacy, and motivation perform better on
standardized academic assessments (Abd-El-Fattah, 2010; Garrison, 1997; Pintrich & DeGroot,
1990).
Garrison (1997) defines self-management as one’s focus on task control issues. Self-
management is also related to the social and behavioral implementation of learning, as defined
by one’s external activities associated with the learning process. Self-management is evidenced
in the implementation of learning goals and the management of learning resources. Self-
management of the learning process has been found to strengthen meaningful, long-term
learning for students and to be positively linked to the concepts of self-regulation and motivation
(Garrison, 1997; Pintrich & DeGroot, 1990).
Self-management, motivation, and self-efficacy are viewed as integral concepts in much
of the research on student achievement (Abd-El-Fattah, 2010; Garrison, 1997; Pintrich &
DeGroot, 1990). Self-management was found to be predictive of self-monitoring and motivation
among college freshman education students, since an elevated sense of responsibility on the part of the
learner yields a greater responsibility in the learning process. Additionally, motivation was
found to positively influence student achievement by building a sense of responsibility for one’s
own learning process (Abd-El-Fattah, 2010).
Many researchers have further cited student motivation as a significant predictor of
student failure or success (Amrein & Berliner, 2002; Barr & Dreeban, 1991). Pintrich
(1988) found a significant relationship between students’ motivation and self-regulated learning,
citing the need for students to value and be interested in their learning goals. Motivation is an
essential factor in a student’s ability and desire to perform academic tasks (Hamilton & Akhter,
2009). In an international study of 57 countries assessing the capacity of 15-year-old students to
reflect on and use the skills they had developed in reading, mathematics, and science as related to
job readiness, the OECD and PISA found that intrinsic motivation and general interest in science
positively correlated (0.88) with student performance on science achievement (Figazzolo, 2009;
MacGaw, 2008; OECD, n.d.; OECD, 2007).
While more difficult to quantifiably assess, research indicates that it is important to
consider motivational factors when attempting to find predictor variables for student success in
academics. Some researchers purport that testing itself can act as an academic motivator for both
students and educators (Phelps, 2005). This viewpoint is also evident in the fact that many
educational dollars are tied to testing outcomes in today’s climate of academic accountability
(GADOE, n.d.-a). Other research studies view achievement from a differing perspective and
indicate that factors such as general intelligence, work drive, self-discipline, perseverance, and
motivation are the strongest predictors of student success (Geiser, 2009; Ridgell & Lounsbury,
2004). Kitsantas et al. (2008) found that students exhibiting strong time-management skills and
short-range planning ability were more successful in the post-secondary setting.
Student Success Skills (SSS)
Brigman and Campbell (2003) focused their research on an intervention methodology
that involved classroom and group guidance sessions entitled Student Success Skills (SSS). In a
comprehensive two-year research project utilizing the SSS program, which involved both
classroom and group guidance, Brigman and Campbell (2003) found that structured group
intervention for students taking the FCAT yielded significant differences in math (p < .001) and
reading (p = .003) between students in the control and experimental groups. A replication of this
study (Webb, Brigman, & Campbell, 2005) found that 85% of students in the treatment group
improved their FCAT math scores, compared with 73% of students in the control group.
Although the difference was not statistically significant, 75% of students in the reading
treatment group improved their scores, as compared to 73% in the control group.
Additional study of aggregate data from four studies involving the SSS program
(Brigman & Campbell, 2003; Brigman et al., 2007; Campbell & Brigman, 2005; Webb et al.,
2005) found that the program resulted in statistically significant increases in student test scores
on the FCAT across all involved ethnic groups. Analysis of scores from 1,123 students enrolled
in 36 schools found a statistically significant (p < .05) increase in both the reading and math scores
of the treatment group, regardless of ethnicity (Miranda et al., 2007).
The SSS program is based on skill sets that have been found in research to contribute to
improved academic achievement. A review of research indicates skill sets involving goal
setting, self-monitoring of academic progress, listening, team-work, motivation, and managing
one's attention and anger contribute to improved academic and behavioral performance in many
students (Hattie, Biggs, & Purdie, 1996; Masten & Coatsworth, 1998; Walberg & Paik, 2000;
Wang, Haertel, & Walberg, 1994; Zins et al., 2004). Because the SSS program focuses on
metacognitive, social, and self-management skills essential to learning, rather than simply
targeting tested academic skills, the program has been found to be effective in closing the
academic achievement gap for African-American and Latino students and improving academic
outcomes for all low-achieving students (Miranda et al., 2007).
In a study utilizing a format very similar to the SSS program, Bruce et al. (2009) found
that GHSGT scores of participating African-American students were significantly higher. All
students in the study who participated in an 8-week, counselor-led preparation program passed
math and ELA, while 87% passed science and social studies.
Mariani, Webb, Villares, and Brigman (2015) utilized the guidance portion of the SSS
program to study its potential impact on student behavior. The study used a quasi-experimental
pretest–posttest design to determine if the guidance portion of the SSS could statistically impact
prosocial and bullying behaviors in fifth graders. Additionally, the research examined school
engagement and student perceptions of the school. The study found statistically significant
evidence that the behavior and perceptions of students completing the SSS guidance program
were positively impacted.
The classroom portion of the SSS was translated into Spanish and used by Urbina (2011)
to study its impact on the academic achievement of Hispanic students. Guidance counselors
conducted the SSS standardized classroom guidance sessions for Hispanic 9th and 10th graders.
The study indicated statistically significant improvement in students’ math and reading scores on
the FCAT.
Conversely, Kane (2015) found that the classroom guidance portion of the SSS had no
significant impact on the motivation, social engagement, and self-regulation (identified as key
academic behaviors) or on the college/career readiness indicators of fifth grade students. However,
the National Panel for School Counseling Evidence-Based Practice found that the intervention
provided by the SSS program demonstrated positive effects on academic achievement, as
measured by FCAT scores. The panel evaluated the program in seven domains and found that
the program achieved "strong evidence" (p. 200) of success in measurement, implementation
fidelity, and ecological fidelity. The program demonstrated "promising evidence" (p. 200) in the
domains of comparison groups, statistical analysis of outcome variables, and replication.
However, SSS was found to present "weak evidence" (p. 200) in persistence of effect (Carey,
Dimmitt, Hatch, Lapan, & Whiston, 2008). The panel made strong recommendations for further
research utilizing the SSS program, especially with regard to the longitudinal effects of the
program on student achievement and behavior (Carey et al., 2008).
Summary
Given the reality that high-stakes testing remains a critical factor for successful
graduation from a Georgia high school, educators must continue to search for methods to assist
students in improving their skills in taking these exams. The SSS program has been found to
provide statistically significant results in helping students improve their scores on the high-stakes
FCAT. Identifying students who need additional help in testing skills can allow educators to
implement strategies that will help students pass tests similar to the GHSGT or other high-stakes
tests. This study adds to the body of research knowledge by studying the potential influence that
the SSS program could have on students' GHSGT scores.
CHAPTER THREE: METHODOLOGY
Introduction
High-stakes testing has become the standard by which the public evaluates the
effectiveness of the schools it supports with its tax dollars. While experts, educators,
students, and parents voice both positive and negative opinions toward high-stakes testing,
teachers and administrators understand the increasing importance of working toward improving
test scores as a means of proving their effectiveness in educating their students. Educators are
charged with the task of implementing programs and intervention methods that enable their
students to improve the test scores that are critical to their graduation success. This project
studied the potential impact of the SSS program on the GHSGT scores of students who had
previously failed one or more of the required subject areas of the GHSGT.
This chapter presents the research design used in this quantitative experimental study
along with the research questions and hypotheses, the setting, the participants, instrumentation,
the procedures for conducting the study, and data analysis for each of the hypotheses.
Design
This quantitative study employed a posttest-only control group design with
randomization (Ary, Jacobs, Razavieh, & Sorensen, 2006). For this design, all participants are
assigned to control and treatment groups by random assignment, after which the experimental
group is exposed to the treatment. Participants in each group are administered a posttest,
followed by a comparison of scores to determine the effect of the treatment. According to Ary et
al. (2006), this design contains two elements that are important for controlling threats to internal
validity: a control group and randomization. Ary et al. further explained that “randomization
controls for all possible extraneous variables and assures that any initial differences between the
groups are attributable only to chance and therefore will follow the laws of probability” (p. 329).
Participants in this study were compared after the treatment was administered using posttest
scores (GHSGT scores after the treatment was administered). The dependent variable was
student achievement as measured by posttest scores on the GHSGT. The independent variable
was participation in the SSS Program.
Participants had previously taken the GHSGT during one of the four 2012 testing
administrations, had failed at least one portion of the test, and were thus required to
retake the portions not passed. After the treatment was administered in January through early
March 2013, participants were compared based on posttest scores (i.e., the GHSGT retest score
that occurred closest to the implementation of the treatment between March 2013 and November
2013).
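To make the design concrete, the following is a minimal sketch in Python of how the random
assignment and posttest comparison could be carried out. It assumes a nonparametric
Mann-Whitney U test for the group comparison, which is appropriate for small samples with
non-normal score distributions; the student identifiers and scores below are invented
placeholders, and Python's random module stands in for randomizer.org, the tool used in the
actual study.

import random
from scipy.stats import mannwhitneyu  # nonparametric two-group comparison

# Hypothetical pool of eligible students (the study drew 68).
students = [f"S{i:02d}" for i in range(1, 69)]

# Random assignment to treatment and control groups of 34 each.
random.seed(2012)  # fixed seed so the illustration is reproducible
treatment_ids = set(random.sample(students, k=34))
control_ids = [s for s in students if s not in treatment_ids]

# After the treatment group completes the 8-session SSS program, both
# groups retake the failed GHSGT content area(s). The scale scores
# below are placeholders standing in for the posttest (retest) scores.
treatment_scores = [512, 498, 530, 505, 521]  # hypothetical
control_scores = [509, 495, 517, 501, 499]    # hypothetical

# Mann-Whitney U compares two independent groups without assuming
# normally distributed data.
u_stat, p_value = mannwhitneyu(treatment_scores, control_scores,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")

A p value of .05 or greater would lead to retention of the null hypotheses stated below.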
Research Questions
RQ1: Is there a difference in Georgia High School Graduation Test scores in math for
students who participated in the group guidance portion of the Student Success Skills curriculum
as compared to students who did not participate in the group guidance portion of the Student
Success Skills curriculum?
RQ2: Is there a difference in Georgia High School Graduation Test scores in social
studies for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum?
RQ3: Is there a difference in Georgia High School Graduation Test scores in English
language arts for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum?
Null Hypotheses
H₀1: There is no significant difference in Georgia High School Graduation Test scores in
math for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum.
H₀2: There is no significant difference in Georgia High School Graduation Test scores in
social studies for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum.
H₀3: There is no significant difference in Georgia High School Graduation Test scores in
English language arts for students who participated in the group guidance portion of the Student
Success Skills curriculum as compared to students who did not participate in the group guidance
portion of the Student Success Skills curriculum.
Setting
The setting for this study was a rural high school in northeast Georgia serving 1,300
students in grades 10–12 and 23 repeating ninth graders for a total of 1,323 students in its main
building. The remaining 536 ninth grade students in the district attended the Ninth Grade
Academy, which is housed in a separate building. Thus, there were a total of 1,859 students in
grades 9–12 in the school district at the time of the study. The public school district consisted of
eight elementary schools, three middle schools, one ninth-grade academy, and one high school
for a total of 13 schools. There were three private schools in the area. Since the participants for
this study were 10th, 11th, and 12th graders, the following demographic information in Table 1
includes students in grades 10–12 only.
Table 1

Demographics for Study School

                    Female    Male             Total
Ethnicity/Race      n         n        %       N        %
Hispanic            152       141      10.85   293      22.54
Asian               19        28       2.15    47       3.62
Black               14        13       1.00    27       2.08
White               433       469      36.08   902      69.38
Multiracial         13        18       1.38    31       2.38
Totals              631       669      51.46   1300     100.00

Note. Table reflects enrollment numbers based on the GADOE's Enrollment by Ethnicity/Race,
Gender and Grade Level (PK–12) - Fiscal Year 2013-1 Data Report for October 2, 2012.
During the 2012–2013 school year, the school served 2% of its students in the English to
Speakers of Other Languages program. The area of northeast Georgia in which this school is
located is home to a large poultry industry, making it attractive to a large transitory Hispanic
population. The special education population comprised 16% of the student body, and 53.51%
of the students at this school qualified for free and reduced lunches. The gifted program
served 11% of the student body (GOSA, n.d.-b).
The school chosen for participation in this study failed to meet AYP status for the 2009–
2010, the 2010–2011, and the 2011–2012 school years due to low graduation test scores in a
subgroup and low graduation rates. The school's graduation rate was impacted by the number of
students failing to receive a high school diploma due, in part, to the inability of some students to
pass all parts of the GHSGT (GADOE, 2011).
Participants
Each year, the GADOE required high school juniors to be tested for the first time during
the spring administration of the GHSGT. The search for potential participants for this study
began with a group of 141 students who had failed the GHSGT in at least one area
during the spring 2012 administration of the test. An additional six participants were added from
the remaining three testing sessions of 2012. Three potential participants were added to the list
from the summer 2012 administration, two from the fall 2012 administration, and one from the
winter 2012 administration. This brought the total number of potential participants to 147.
From this pool of 147 students, 25 students were removed before the treatment was
implemented because they were able to pass the content area test(s) they had previously failed.
Additionally, the Georgia State Board of Education ruled in April 2011 that a passing EOCT
score could replace a GHSGT score in the same content area resulting in the loss of 41 potential
participants. An additional 12 students either transferred or moved by the time IRB approval had
been granted in early November 2012. One student died in an automobile accident. The
remaining 68 students were randomly placed into either a control group (34 students) or a
treatment group (34 students) using randomizer.org as soon as IRB approval was granted in early
November 2012. Before the SSS program sessions began, one student from the control group
transferred to an alternative school setting in the district leaving 33 students in the control group.
Over the course of the SSS program sessions and before the April 2013 administration of the
GHSGT, one student from the control group was removed from enrollment for an unknown
reason and one student was removed from the treatment group for lack of attendance. Loss of
these two students resulted in 32 students in the control group and 33 students in the treatment
group for an overall total of 65 students in the study.
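For illustration, the random assignment could be reproduced in code; the study itself used randomizer.org, so the following is only a hypothetical Python equivalent, with made-up participant codes and an arbitrary seed.

```python
import random

# Hypothetical anonymized participant codes; the study assigned its own
# alphanumeric codes to the 68 eligible students.
participants = [f"S{i:03d}" for i in range(1, 69)]

rng = random.Random(2012)        # arbitrary fixed seed for reproducibility
shuffled = participants[:]
rng.shuffle(shuffled)

control_group = sorted(shuffled[:34])    # first 34 students
treatment_group = sorted(shuffled[34:])  # remaining 34 students

print(len(control_group), len(treatment_group))  # 34 34
```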
The SSS sessions began in January 2013 and ended in early March 2013. Of the 65
participants, there were 59 students (91%) who were tested after implementation of the SSS
program during the spring 2013 testing session. An additional two students (3%) were tested
during the summer 2013 session, three students (5%) during the fall 2013 session, and one
student (1%) during the winter 2013 session.
The GHSGT assessed ELA, math, science, and social studies as four separate tests with
individual test scores. After the treatment was implemented, the control and treatment groups
were subdivided into content area groups according to the tests students did not pass for the
purpose of data analysis. Georgia statewide test results from the spring 2011 administration of
the GHSGT revealed that more students passed the science (93%) and ELA (91%) portions than
the math (84%) and social studies (80%) portions (GADOE, n.d.-f). Test results for the study
school from the spring 2011 administration of the GHSGT were similar to those of the state with
92% passing science, 88% passing ELA, 79% passing math, and 78% passing social studies
(GADOE, n.d.-f). This held true for participants in this study who were tested during the four
2012 testing sessions. As a result, there were not enough participants who failed the science
portion to include them in the study as many of these students retested and passed prior to the
implementation of the treatment and had to be removed from the potential participant list.
Therefore, only the math, social studies, and ELA results were included in the inferential
statistical testing portion of this study. Demographic information for all four content area groups
is presented (see Tables 3 and 5).
Control Group
Table 2 shows gender and ethnicity information for the 32 students in the control group.
The largest group was white (75%) while Hispanic students comprised the second largest group
(25%). This was similar to the makeup of the school, where 69% were white and 23% were
Hispanic. There were four more males (18) than females (14) in this group. Special education
students comprised 28% of the group while English language learners comprised 22%.
Table 2

Demographics for the Control Group

                            Female           Male             Total
Demographic                 n      %         n      %         N      %
Ethnicity/Race
  Hispanic                  3      9.38      5      15.63     8      25.00
  Asian                     0      0.00      0      0.00      0      0.00
  Black                     0      0.00      0      0.00      0      0.00
  White                     11     34.38     13     40.63     24     75.00
  Total                     14     43.75     18     56.25     32     100.00
Special Education           5      15.63     4      12.50     9      28.13
English Language Learner    5      15.63     2      6.25      7      21.88
Students in the control group were placed into the four content areas of the GHSGT
according to the content areas that they did not pass (see Table 3). The largest group was social
studies with 22 students, followed by ELA with 11 students, math with 10 students, and science
with 3 students. Inferential statistics were not calculated for the science group due to the small
sample size. The math, social studies, and science groups were mostly white with 80%, 82%,
and 67%, respectively. The ELA group was about half Hispanic (55%) and half white (45%).
Special education students comprised from 40% to 67% of the four content area control groups.
There were no English language learners in any of the four content area control groups (see Table 3).
Table 3

Demographics for the Control Group by GHSGT Content Area

                          Math            Social Studies   ELA             Science
Demographic               n     %         n     %          n     %         n     %
Gender
  Female                  4     40.00     12    54.55      1     9.09      1     33.33
  Male                    6     60.00     10    45.45      10    90.91     2     66.67
  Total                   10    100.00    22    100.00     11    100.00    3     100.00
Ethnicity/Race
  Hispanic                2     20.00     4     18.18      6     54.55     1     33.33
  Asian                   0     0.00      0     0.00       0     0.00      0     0.00
  Black                   0     0.00      0     0.00       0     0.00      0     0.00
  White                   8     80.00     18    81.82      5     45.45     2     66.67
  Total                   10    100.00    22    100.00     11    100.00    3     100.00
Special Education         5     50.00     9     41.00      7     63.64     2     66.67
English Language Learner  0     0.00      0     0.00       0     0.00      0     0.00
Treatment Group
Table 4 shows gender and ethnicity information for the 33 students in the treatment
group. The largest group was Hispanic (70%) while white students comprised the second largest
group (21%). This was nearly the reverse of the control group, where 75% were white and 25%
were Hispanic. There was one more female (17) than male (16) in this group. Special
education students comprised 27% of the group while 21% of the group was English language
learners. This closely mirrored the control group where 28% were special education students
and 22% were English language learners.
Table 4

Demographics for the Treatment Group

                            Female           Male             Total
Demographic                 n      %         n      %         N      %
Ethnicity/Race
  Hispanic                  12     36.36     11     33.33     23     69.69
  Asian                     2      6.06      0      0.00      2      6.06
  Black                     1      3.03      0      0.00      1      3.03
  White                     2      6.06      5      15.15     7      21.21
  Total                     17     51.51     16     48.48     33     100.00
Special Education           5      15.15     4      12.12     9      27.27
English Language Learner    5      15.15     2      6.06      7      21.21
Students in the treatment group were placed into the four content areas of the GHSGT
according to the content areas that they did not pass as shown in Table 5. The math and social
studies groups each had 22 students while the ELA group had nine students and the science
group had eight. Inferential statistics were not calculated for the science group due to the small
sample size. The four treatment groups were mostly Hispanic, ranging from 73% to 82%. The
social studies treatment group had the only black student in the study. Special education students
comprised from 13% to 36% of the four content area treatment groups, a much lower percentage
overall than in the control group. The social studies group contained more special education
students than the other content area groups. English language learners ranged from 23% to 75%,
with science having the most.
Table 5

Demographics for the Treatment Group by GHSGT Content Area

                          Math            Social Studies   ELA             Science
Demographic               n     %         n     %          n     %         n     %
Gender
  Female                  11    50.00     11    50.00      4     44.44     5     62.50
  Male                    11    50.00     11    50.00      5     55.56     3     37.50
  Total                   22    100.00    22    100.00     9     100.00    8     100.00
Ethnicity/Race
  Hispanic                16    72.73     18    81.82      9     81.82     6     75.00
  Asian                   0     0.00      1     4.55       1     9.09      2     25.00
  Black                   0     0.00      1     4.55       0     0.00      0     0.00
  White                   6     27.27     2     9.09       1     9.09      0     0.00
  Total                   22    100.00    22    100.00     11    100.00    8     100.00
Special Education         6     27.27     8     36.36      2     22.22     1     12.50
English Language Learner  5     22.73     5     22.73      4     44.44     6     75.00
Instrumentation
Georgia High School Graduation Tests (GHSGT)
Beginning in 1995, the state of Georgia implemented a high school exit exam called the
GHSGT. The GHSGT assessed subject content in ELA, mathematics, science, and social
studies. This battery of four tests measured students’ mastery of essential core content of
Georgia’s curriculum. Scores ranged from below 200 (below proficiency) to 275 or above
(proficiency) for each of the four content area tests. A score below 200 was considered not
passing. A student was required to retest on any content area test that received a score below
200. A Georgia high school student’s first attempt at the GHSGT occurred during the spring
administration of the test in March of the student’s 11th grade year. Students not successfully
passing each test were given four additional opportunities to retake the unsuccessful parts of
the test before their expected graduation date (GADOE Assessment Research and Development
Department, 2010, 2011). These four opportunities included the summer administration in July
following their junior year and the fall, winter, and spring administrations in September,
November, and March of their senior year. However, students who did not pass all parts by
graduation at the end of the senior year were allowed to continue taking the test until each part
was passed. If all other graduation requirements had been met, these students were issued a
certificate of attendance at graduation instead of a diploma. Georgia utilized the GHSGT as its
exit exam until 2015 (GADOE, n.d.-e).
Validity and Reliability for the GHSGT
The GADOE document entitled An Assessment and Accountability Brief: Validity and
Reliability for the 2009–2010 Georgia High School Graduation Tests states that validity was
established for the GHSGT through a series of evidentiary steps throughout the test development
process (GADOE Assessment Research and Development Department, 2010).
The first step in establishing validity was to provide clear substantiation of the test's purpose.
The GADOE asserted that the purposes of Georgia's standardized testing program were to assess
students’ level of mastery of the state’s academic curriculum, to identify students who were
failing to adequately progress academically, to provide data for the purpose of making
instructional decisions, and to identify strengths and weaknesses in school systems (GADOE
Assessment Research and Development Department, 2010, 2011).
The second step in establishing validity was a multi-step test development process that
included curriculum alignment, as well as the identification of content that would be tested.
Committees of content specialists, test designers, and state educators were tasked with the
development of test items. Field-testing was accomplished through the embedding of specific
test questions in the GHSGT for review. The established committees approved or discarded test
items for future GHSGT testing sessions. Content specialists and psychometricians developed
multiple forms for student use (GADOE Assessment Research and Development Department,
2010, 2011). Reliability coefficients ranged from 0.85 to 0.93 for the 2009 and 2010
administrations of the GHSGT (GADOE Assessment Research and Development Department,
2010).
Procedures
Permission to conduct the study was obtained from the school district’s superintendent
(see Appendix B), the school principal (see Appendix C), and Liberty University’s IRB (see
Appendix D). After IRB approval was obtained and the November 2012 GHSGT administration
results were made available, a list of 147 students who had failed the GHSGT in at least one area
was identified from the four 2012 administrations of the test. Students who were no longer in
attendance at the school (13 students) or who had successfully retaken all tests (25 students) or
who were allowed to substitute an EOCT score for a failing GHSGT score (41 students) were
removed from the list leaving 68 students available to participate in the study.
Individual students were called to the guidance office and given the appropriate
informational letter and statement of consent (see Appendices E and F). The SSS program was
explained to the student including information on program content, expectations for attendance,
and how the program would be scheduled. Students were encouraged to ask questions and
discuss their questions and concerns with the researcher. Students were asked to give the
informational letter and statement of consent (see Appendices E and F) to their parents or
guardians and to return the signed permission form to the guidance office by the end of the
following week. Students who were at least 18 years old were allowed to sign their own
permission forms. Students on the list who had received a certificate of attendance were
contacted, via phone, and given the information by the researcher. The researcher reminded
students to turn in their forms, called parents who had questions, and provided information to
teachers who had students who might miss some class time by participating in this research. The
68 students were randomly placed into either a control group or a treatment group using
randomizer.org.
After randomization, the researcher met with the 34 students in the treatment group, both
individually and in small groups, to determine the best meeting times. Times were chosen that
provided the least impact on the students’ academic classes. Students participating in the
treatment group for this study took part in eight group-counseling sessions from the group
guidance portion of the SSS program. The GHSGT group sessions were held during January,
February, and early March 2013. Due to varying scheduling issues in the school year, the groups
were not always able to meet each week consecutively, but all eight sessions were completed as
designated in the SSS manual. Students attended SSS sessions during the school day, based on
the time that worked best with their academic schedules. This school was on a 4x4 block
schedule, with four classes meeting for 90 minutes each day. The group sessions were held in
different academic blocks in order to keep students from missing too much time in any one class.
The SSS program was designed to take place over an eight to ten week period before the
administration of a standardized test. The researcher, who also worked as a guidance counselor
in the school, led the treatment group sessions. Sessions focused on cognitive, self-management,
and goal-setting skills. Each group session consisted of three segments. The opening segment
of each session began with a student self-check of energy and mood, followed by an activity
focused on the student's progress toward academic and behavioral goals previously set by that
student, a progress report given to the group by the student on incorporating self-management
skills, and a preview of the day's meeting. The middle portion of the group meeting
introduced the day’s topic and provided activities for exploration and discussion of that day’s
subject matter. The closing segment of each meeting included activities to aid students in
reflecting on the material learned and making decisions on how to use this information to reach
their academic and behavioral goals. Students also set goals to work on for the next group
counseling session.
During the course of the SSS sessions, one student from the control group was removed
from enrollment for an unknown reason and one student was removed from the treatment group
for lack of attendance. Loss of these two students resulted in 32 students in the control group
and 33 students in the treatment group for an overall total of 65 students in the study. Most
students (59) were able to retest during the spring 2013 testing session at the end of March, while
the remaining six students were not retested until the later 2013 testing sessions. Two students
retested during the summer 2013 testing session, three during the fall session, and one during the
winter session.
A staff member in the school guidance office entered data for each group onto an Excel
spreadsheet and assigned each participant an alphanumeric code so that participants were
unidentifiable. Data entered for each participant included gender, race/ethnicity, participation in
special education, identification as an English language learner, and scores from the eight
administrations of the GHSGT in 2012 and 2013.
The researcher identified the scores that would be used in the data analysis. The pretest
scores were used to identify students who received a failing GHSGT score (below 200) from the
March 2012, July 2012, September 2012, or November 2012 testing sessions. Most of the
pretest scores came from the November 2012 administration of the GHSGT (100% of the
treatment group and 69% of the control group). The pretest scores were not used in any of the
inferential statistics; they were used only to identify students who had failed at least one portion
of the GHSGT and were therefore eligible to participate in the study. The posttest score used in
the analyses was the student’s score (passing or failing) from the first GHSGT testing session
following completion of the SSS group counseling sessions that each student was able to
participate in (March 2013, July 2013, September 2013, or November 2013). Posttest scores for
most students came from the March 2013 administration (97% of the treatment group and 84%
of the control group).
Data were never presented in a way that individual students could be identified. All
digital data pertaining to the study were stored on a single, password-protected computer, which
could be accessed only by the researcher. The paper documents were stored in a locked file
cabinet in the guidance office vault. All digital data and all paper documents containing student
data will be destroyed three years after the completion of the project.
The appropriate statistical tests were conducted using Statistical Package for the Social
Sciences (SPSS) version 22. These tests are described in the following section.
Data Analysis
The recommended statistical procedure for a posttest-only control group design is a t test
or ANOVA (Gall, Gall, & Borg, 2003). Prior to IRB approval, the number of potential
participants decreased for various reasons, as previously discussed in this chapter, which greatly
reduced the sample sizes; therefore, the Mann-Whitney U test was considered to determine if
there was a significant difference between the posttest scores of the control group and the
posttest scores of the treatment group for the various GHSGT content areas. The Mann-Whitney
U test is applicable for non-normal distributions regardless of sample size. At combined samples
smaller than 20, the U statistic itself is used; however, once the combined sample size surpasses
20, the distribution of the U statistic can be closely approximated by a normal distribution,
represented by the z statistic (Brase & Brase, 2006; Miller, Freund, & Johnson, 1990).
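The normal approximation referred to here follows the standard formula for the Mann-Whitney U statistic, as given in introductory statistics texts such as those cited above (ignoring the correction for tied ranks that statistical packages typically apply):

```latex
z = \frac{U - \mu_U}{\sigma_U},
\qquad \mu_U = \frac{n_1 n_2}{2},
\qquad \sigma_U = \sqrt{\frac{n_1 n_2 \left(n_1 + n_2 + 1\right)}{12}}
```

As a check against the values reported in Chapter 4, the math posttest comparison (n1 = 10, n2 = 22, U = 83.00) gives mu_U = 110 and sigma_U of roughly 24.60, so z = (83.00 - 110) / 24.60, or about -1.10, matching the reported value to two decimal places; any tie correction applied by SPSS would shift z only slightly.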
Additionally, it was essential to check the data for normality due to the small sample
sizes involved. Histograms were created and tests of normality including Kolmogorov-Smirnov
and Shapiro-Wilk were conducted to make the determination of normality. Table 6 shows the
normality results for the posttest scores. All of the math and ELA posttest data sets were normal
while both sets of the social studies posttest data were not normal. If the results were mixed
when looking at both the Kolmogorov-Smirnov and the Shapiro-Wilk results, the data set was
considered to be not normal (i.e., the social studies posttest for the treatment group).
Table 6

Tests of Normality for the GHSGT by Group and Content Area

                        Kolmogorov-Smirnov(a)       Shapiro-Wilk
                        Statistic   df    Sig.      Statistic   df    Sig.    Normal?
Math
  Posttest Control      .18         10    .299      .91         10    .256    Yes
  Posttest Treatment    .149        22    .200      .92         22    .092    Yes
Social Studies
  Posttest Control      .21         22    .015      .78         22    .000    No
  Posttest Treatment    .15         22    .200      .90         22    .036    No
ELA
  Posttest Control      .16         11    .200      .92         11    .289    Yes
  Posttest Treatment    .24         9     .145      .87         9     .113    Yes

a. Lilliefors Significance Correction
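The study conducted these normality checks in SPSS, but the same two tests can be reproduced in other statistical software. The following is a minimal Python sketch, with hypothetical scores standing in for the actual GHSGT data:

```python
import numpy as np
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical posttest scores for one group; the study's actual
# GHSGT scores are not reproduced here.
scores = np.array([166, 171, 158, 183, 190, 162, 175, 168, 154, 180])

# Shapiro-Wilk test, as reported in Table 6.
w_stat, w_p = shapiro(scores)

# Kolmogorov-Smirnov test with the Lilliefors significance correction,
# matching SPSS's "Lilliefors Significance Correction" footnote.
ks_stat, ks_p = lilliefors(scores, dist="norm")

# Treat the data as non-normal if either test is significant at .05,
# mirroring the decision rule described for Table 6.
normal = (w_p > .05) and (ks_p > .05)
print(f"Shapiro-Wilk W={w_stat:.2f} (p={w_p:.3f}), "
      f"K-S D={ks_stat:.2f} (p={ks_p:.3f}), normal={normal}")
```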
After examining normality and sample size, it was determined that the Mann-Whitney U
test would be best for testing each of the three hypotheses even when the data sets were normal,
in order to ensure consistency across the content areas and due to the extremely small sample
sizes for some of the groups.
The Mann-Whitney U test is a nonparametric test that is typically used when at least one
of the assumptions for the independent samples t test is found to be not tenable (Field, 2013).
This test was appropriate since some of the data violated the normality assumption and all of the
sample sizes were considered small. According to Green and Salkind (2008), this test is used to
evaluate whether the medians of the dependent variable of the two groups are significantly
different. Test scores are converted to ranks, ignoring group membership, after which the mean
ranks for the two groups are tested to see if the two groups differ significantly from each other.
An alpha level of .05 was used for all testing. Assumption tests for Mann-Whitney U included
checking for (a) independence within and between groups, (b) data that are continuous or ordinal
for the dependent variable, (c) two categorical, independent groups for the independent variable,
and (d) homogeneity of variances to determine if the shapes of the distributions for the data were
the same (DeCoster, 2006; Green & Salkind, 2008; Nachar, 2008). Effect sizes were not
calculated due to findings of no significant difference for all three hypotheses.
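The study ran these tests in SPSS (version 22); a two-tailed equivalent can be sketched in Python as follows, with hypothetical scores in place of the actual GHSGT data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical GHSGT posttest scores; the study used SPSS 22, so this
# is only an illustrative equivalent, not the study's actual data.
control   = np.array([179, 185, 160, 174, 181, 190, 155, 178, 183, 170])
treatment = np.array([166, 172, 158, 169, 175, 161, 180, 163, 157, 171])

# Two-tailed Mann-Whitney U test at alpha = .05, as in the study.
u_stat, p_value = mannwhitneyu(control, treatment, alternative="two-sided")

alpha = .05
print(f"U = {u_stat:.2f}, p = {p_value:.3f}, reject H0: {p_value < alpha}")
```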
Summary
A quantitative experimental methodology using a posttest-only control group design was
utilized to determine if participation in the guidance portion of the SSS program would produce a
significant difference in GHSGT scores for students who participated in the SSS intervention as
compared to those who did not participate. The study was conducted in a rural high school in
northeast Georgia. Mann-Whitney U tests were used to test the three hypotheses because the
data for some of the groups were not normal and all sample sizes were small. The results of the
study are presented in Chapter 4.
CHAPTER FOUR: FINDINGS
Introduction
The purpose of this experimental posttest-only control group design with randomization
was to determine if the 8-week guidance portion of the Student Success Skills (SSS) program
had a statistically significant impact on the Georgia High School Graduation Test (GHSGT)
scores of students who had previously failed at least one portion of the GHSGT. Students
randomly assigned to the treatment group were guided through the SSS curriculum in activities
that facilitated cognitive and meta-cognitive skills, social skills, and self-management skills. The
scores of these treatment group students were compared to the scores of the control group
students to determine if the SSS intervention was statistically effective. Due to small sample
sizes and non-normal data, the study employed Mann-Whitney U tests to determine statistical
significance for the null hypotheses. Chapter 4 includes the descriptive statistics, the
assumptions, and the results of the Mann-Whitney U tests for the three null hypotheses.
Research Questions
RQ1: Is there a difference in Georgia High School Graduation Test scores in math for
students who participated in the group guidance portion of the Student Success Skills curriculum
as compared to students who did not participate in the group guidance portion of the Student
Success Skills curriculum?
RQ2: Is there a difference in Georgia High School Graduation Test scores in social
studies for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum?
RQ3: Is there a difference in Georgia High School Graduation Test scores in English
language arts for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum?
Null Hypotheses
H₀1: There is no significant difference in Georgia High School Graduation Test scores in
math for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum.
H₀2: There is no significant difference in Georgia High School Graduation Test scores in
social studies for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum.
H₀3: There is no significant difference in Georgia High School Graduation Test scores in
English language arts for students who participated in the group guidance portion of the Student
Success Skills curriculum as compared to students who did not participate in the group guidance
portion of the Student Success Skills curriculum.
Descriptive Statistics
GHSGT Math Group
A total of 32 students comprised the math group as shown in Table 7. The treatment
group had 12 more students than the control group. Mean and standard deviation were computed
for both the pretest and the posttest for the GHSGT math participants (see Table 7). The
GHSGT control and treatment groups performed similarly on the pretest, but the control group
outperformed the treatment group on the posttest by scoring about 7 points higher on average
than the treatment group.
Table 7

Descriptive Statistics for the GHSGT Math Pretest and Posttest

                         Math Pretest        Math Posttest
            n     %      M        SD         M        SD
Control     10    31.25  178.40   13.09      173.50   21.95
Treatment   22    68.75  178.73   15.32      166.14   17.19
Total       32    100.00
GHSGT Social Studies Group
A total of 44 students comprised the social studies group as shown in Table 8. Both
groups had 22 students. Mean and standard deviation were computed for both the pretest and the
posttest for the GHSGT social studies participants (see Table 8). The GHSGT social studies
control and treatment groups performed similarly on both the pretest, each averaging about 179,
and the posttest, with the treatment group scoring only about one point higher than the control group.
Table 8

Descriptive Statistics for the GHSGT Social Studies Pretest and Posttest

                         Social Studies Pretest   Social Studies Posttest
            n     %      M        SD              M        SD
Control     22    50.00  179.00   12.40           186.68   19.51
Treatment   22    50.00  179.32   11.52           187.73   11.55
Total       44    100.00
GHSGT ELA Group
A total of 20 students comprised the ELA group as shown in Table 9. The control group
had 2 more students than the treatment group. Mean and standard deviation were computed for
both the pretest and the posttest for the GHSGT ELA participants (see Table 9). The GHSGT
control and treatment groups performed similarly on the pretest, but the control group
outperformed the treatment group on the posttest by scoring about 3 points higher on average
than the treatment group.
Table 9

Descriptive Statistics for the GHSGT ELA Pretest and Posttest

                         ELA Pretest         ELA Posttest
            n     %      M        SD         M        SD
Control     11    55.00  174.00   13.08      182.64   23.00
Treatment   9     45.00  174.67   11.99      180.00   18.19
Total       20    100.00
Results
Null Hypothesis One
H₀1: There is no significant difference in Georgia High School Graduation Test scores in
math for students who participated in the group guidance portion of the Student Success Skills
curriculum as compared to students who did not participate in the group guidance portion of the
Student Success Skills curriculum.
Mann-Whitney U test assumptions. Because the sample sizes were small, the GHSGT
math data were analyzed using a Mann-Whitney U test. Prior to conducting the Mann-Whitney
U test, the data were examined to verify that certain conditions were met (Nachar, 2008). The
examination determined that (a) there was independence within each group and mutual
independence between the two groups, (b) the data for the dependent variable (GHSGT math
scores) were continuous or ordinal, and (c) the data for the independent variable consisted of two
categorical, independent groups (those who participated in the SSS intervention and those who
did not participate).
A fourth assumption for the Mann-Whitney U test requires that the distribution of the
scores for both groups have the same shape. If the shapes are the same, this in turn implies that
the variance should be the same for the two groups (DeCoster, 2006). In order to determine
homogeneity of variances, Levene’s test is typically used; however, a nonparametric version of
this test must be used with the Mann-Whitney U test. Therefore, a modified Levene's test for
ranked data was run in order to establish that the shapes of the distributions were the same.
This modified Levene's test was a one-way ANOVA of the absolute
deviations of the ranked data. The control and treatment group data were ranked and aggregated
to obtain the mean rank score for the groups. The absolute deviation between the ranked score
and the mean rank score was determined. Based on the absolute deviations between the control
and treatment groups for math posttest scores, a one-way ANOVA was conducted to determine
homogeneity of variances. If the results are not significant, it can be assumed that the
homogeneity of variances assumption has been met. Results of the one-way ANOVA for the
math posttest revealed F(1,30) = 1.29, p = .266. Since the p value for the math posttest scores
was greater than the alpha level of .05, the assumption of homogeneity of variances was met for
the math posttest scores. Further, it could be implied that the distributions were similar. All
assumptions were met.
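The modified Levene procedure described above (ranking the pooled scores, computing each group's absolute deviations from its own mean rank, and running a one-way ANOVA on those deviations) can be expressed directly in code. The following is a minimal Python sketch with hypothetical scores; the study's reported F(1, 30) = 1.29 was computed from the actual data in SPSS.

```python
import numpy as np
from scipy.stats import rankdata, f_oneway

def modified_levene_on_ranks(group1, group2):
    """One-way ANOVA on absolute deviations of ranked scores: pool and
    rank the scores ignoring group membership, find each group's mean
    rank, then compare the groups' absolute deviations from their own
    mean ranks."""
    pooled = np.concatenate([group1, group2])
    ranks = rankdata(pooled)                       # rank, ignoring group
    r1, r2 = ranks[:len(group1)], ranks[len(group1):]
    dev1 = np.abs(r1 - r1.mean())                  # absolute deviations
    dev2 = np.abs(r2 - r2.mean())
    return f_oneway(dev1, dev2)                    # F statistic, p value

# Hypothetical posttest scores with the study's group sizes (10 and 22),
# matching the reported degrees of freedom, F(1, 30).
control   = np.array([179, 185, 160, 174, 181, 190, 155, 178, 183, 170])
treatment = np.array([166, 172, 158, 169, 175, 161, 180, 163, 157, 171,
                      168, 173, 159, 177, 164, 182, 156, 176, 165, 167,
                      162, 184])

f_stat, p_value = modified_levene_on_ranks(control, treatment)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # assumption met if p > .05
```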
Mann-Whitney U test results. In order to test the first hypothesis, a Mann-Whitney U
test was conducted using the math posttest scores for the control and treatment groups to
determine if there was a difference in the scores between the two groups after the SSS
intervention was administered. Table 10 shows the descriptive statistics for the math posttest for
the control and treatment groups.
Table 10

Descriptive Statistics for Mann-Whitney U Test for Math Posttest Scores

            n     Mdn       Mean Rank   Sum of Ranks
Control     10    179.50    19.20       192.00
Treatment   22    166.00    15.27       336.00
Total       32
The results of the Mann-Whitney U test for the math posttest scores were U = 83.00, z = -1.10,
p = .271 (2-tailed). Since the p value of .271 was greater than the alpha level of .05, the math
posttest scores for the control group and the treatment group were not significantly different.
There was insufficient evidence to support that students who participated in the SSS intervention
and were retested on the math portion of the GHSGT scored significantly differently than those
who did not participate in the SSS intervention. The null hypothesis was not rejected.
Null Hypothesis Two
H
0
2: There is no significant difference in Georgia High School Graduation Test scores in
social studies for students who participated in the group guidance portion of the Student Success
Skills curriculum as compared to students who did not participate in the group guidance portion
of the Student Success Skills curriculum.
Mann-Whitney U test assumptions. Because the GHSGT scores for the social studies
data sets failed to pass the normality assumption and the sample sizes were small, the GHSGT
social studies data were analyzed using a Mann-Whitney U test. Prior to conducting the Mann-
Whitney U test, it was determined that the assumption of independence was met and that the data
types were appropriate for conducting a Mann-Whitney U test. The fourth assumption for the
Mann-Whitney U test requiring the distribution of the scores for both groups to have the same
shape was tested as previously described. A one-way ANOVA was conducted to determine
homogeneity of variances. Results of the one-way ANOVA for the social studies posttest
revealed F(1,42) = 0.51, p = .481. Since the p value for the social studies posttest scores was
greater than the alpha level of .05, the assumption of homogeneity of variances was met for the
posttest scores. Further, it could be implied that the distributions were similar. All assumptions
were met.
Mann-Whitney U test results. In order to test the second hypothesis, a Mann-Whitney
U test was conducted using the social studies posttest scores for the control and treatment groups
to determine if there was a difference in the scores between the two groups after the SSS
intervention was administered. Table 11 shows the descriptive statistics for the social studies
posttest for the control and treatment groups.
Table 11

Descriptive Statistics for Mann-Whitney U Test for Social Studies Posttest Scores

            n     Mdn       Mean Rank   Sum of Ranks
Control     22    186.50    21.02       462.50
Treatment   22    186.00    23.98       527.50
Total       44
The results of the Mann-Whitney U test for the social studies posttest scores were U = 209.50,
z = -0.76, p = .445 (2-tailed). Since the p value of .445 was greater than the alpha level of .05,
the social studies posttest scores for the control group and the treatment group were not
significantly different. There was insufficient evidence to support that students who participated
in the SSS intervention and were retested on the social studies portion of the GHSGT scored
significantly differently than those who did not participate in the SSS intervention. The null
hypothesis was not rejected.
Null Hypothesis Three
H₀3: There is no significant difference in Georgia High School Graduation Test scores in
English language arts for students who participated in the group guidance portion of the Student
Success Skills curriculum as compared to students who did not participate in the group guidance
portion of the Student Success Skills curriculum.
Mann-Whitney U test assumptions. Because the sample sizes were small, the GHSGT
ELA data were analyzed using a Mann-Whitney U test. Prior to conducting the Mann-Whitney
U test, it was determined that the assumption of independence was met and that the data types
were appropriate for conducting a Mann-Whitney U test. The fourth assumption for the Mann-
Whitney U test requiring the distribution of the scores for both groups to have the same shape
was tested as previously described. A one-way ANOVA was conducted to determine
homogeneity of variances. Results of the one-way ANOVA for the ELA posttest revealed
F(1,18) = 0.18, p = .673. Since the p value for the ELA posttest scores was greater than the alpha
level of .05, the assumption of homogeneity of variances was met for the posttest scores.
Further, it can be implied that the distributions were similar. All assumptions were met.
Mann-Whitney U test results. In order to test the third hypothesis, a Mann-Whitney U
test was conducted using the ELA posttest scores for the control and treatment groups to
determine if there was a difference in the scores between the two groups after the SSS
intervention was administered. Table 12 shows the descriptive statistics for the ELA posttest for
the control and treatment groups.
Table 12

Descriptive Statistics for Mann-Whitney U Test for ELA Posttest Scores

            n     Mdn       Mean Rank   Sum of Ranks
Control     11    189.00    10.95       120.50
Treatment   9     180.00    9.94        89.50
Total       20
The results of the Mann-Whitney U test for the ELA posttest scores were U = 44.50, z = -0.38,
p = .704 (2-tailed). Since the p value of .704 was greater than the alpha level of .05, the ELA
posttest scores for the control group and the treatment group were not significantly different.
There was insufficient evidence to support that students who participated in the SSS
intervention and were retested on the ELA portion of the GHSGT scored significantly differently
than those who did not participate in the SSS intervention. Based on the results of the Mann-
Whitney U test, the null hypothesis was not rejected.
Summary
Using a posttest-only control group design, the researcher was able to determine if
participation in the guidance portion of the SSS program produced a significant difference in
GHSGT scores for students who participated in the program as compared to those who did not
participate. Students who did not pass the GHSGT on at least one of the four administrations of the
test in 2012 (pretest scores) and who retook the test at least once during the four administrations
of the test in 2013 (posttest scores) were included. Descriptive statistics were included for
pretest and posttest scores. The results of Mann-Whitney U tests were analyzed and the results
were presented for the math, social studies, and ELA portions of the GHSGT. There was no
significant difference found for the three hypotheses, indicating that the SSS program had no
significant effect on the GHSGT scores of those students who participated in the program when
compared to those students who did not participate. Chapter 5 presents a discussion of the
results along with the conclusions, implications, limitations, and recommendations for further
research.
CHAPTER FIVE: DISCUSSION, CONCLUSIONS, AND RECOMMENDATIONS
Chapter 5 presents a synopsis of this study and its conclusions. The purpose of this
experimental posttest-only control group design with randomization was to determine if the 8-
week guidance portion of the Student Success Skills (SSS) program had a statistically significant impact on the
Georgia High School Graduation Test (GHSGT) scores of students who had previously failed at
least one portion of the GHSGT. The conclusions include the results of the research on the
statistical effectiveness of the group guidance portion of the SSS program on GHSGT scores of
rural high school students.
Discussion
Increasing student achievement is a major focus of research in the field of education.
Available literature seeks to address a wide variety of factors that may impact a student’s ability
to learn and to demonstrate learning in a way that is measurable.
A review of the literature indicates that risk factors such as behavior issues (Kurlaender
et al., 2008), disengagement (Balfanz et al., 2007b; Simons-Morton & Chen, 2009) and poor
attendance (Jerald, 2007) can be predictive of poor achievement as a student progresses through
school. Additionally, research has identified significant predictive correlations between early
academic factors, such as achievement in middle school math and English classes, standardized
testing skills, and student performance on state exit exams (Nichols, 2003).
Student academic achievement has even found its way into the political spectrum with
the implementation of laws such as NCLB (GADOE, n.d-a) and the most recent ESSA of 2015
(U.S. Department of Education, n.d.). Schools are expected to show increasing student
achievement and academic progress. The accountability measures in these laws require
demonstrable student achievement in exchange for tax dollars received.
As the theories of Cattell, Piaget, and Binet demonstrate, testing as a measure of achievement is
not a new concept. Piaget was a stage theorist who defined stages (by age) during which
children should have specified competencies and developmental levels of cognition (Jaffe,
1998). His theory of age-related competencies is similar to the benchmarks expected of students
as they progress through their school years (Gray, 1978). Cattell and Binet, along with others,
laid the groundwork for psychometric testing, which served as a predecessor to today's standards-
based testing (Plucker & Esping, 2014; "Raymond Bernard Cattell," n.d.). More recently, Beck's
cognitive behavioral theory (CBT) teaches that a person’s cognition impacts his behavior.
Therefore, teaching a student to change his thoughts with regard to learning and test taking can
positively impact his ability to successfully complete a standardized test (Comer, 2015).
As the focus on student achievement has become more politicized, counselors, teachers,
and administrators are charged with determining methodologies that produce measurable results
in students’ academic achievement. While the pervasiveness of testing as an accountability
measure is evident in today’s educational assessments, there is a lack of research on
interventions to help students perform well on standardized tests (i.e., high school exit exams).
The American School Counselor Association promotes small group counseling as a preferred
method for school counselors to promote academic achievement among their students (American
School Counselor Association, n.d.), but there are gaps in the literature on small group
counseling programs that assist high school students in raising their test scores.
The SSS program is the intervention strategy used in a study that was recognized in the
literature as an “exceptionally well-done, comprehensive study” (McGannon, Carey, & Dimmitt,
2005, p. 12). Zyromski and Edwards (2015) discussed the SSS program as the only empirically
supported school-based cognitive intervention program available in their meta-analysis of school
counseling programs.
In 2003, Brigman and Campbell conducted a district-wide study to determine the effect
of counseling interventions on students' math and reading scores on the FCAT. The study was
conducted using a quasi-experimental, pretest-posttest design with 185 randomly selected
students. Participants were chosen from students whose initial FCAT scores were in the 25th to
the 50th percentile (Brigman & Campbell, 2003). Counselors led both classroom guidance
sessions throughout the school year and small group guidance sessions for students who needed
extra help for eight weeks before a testing administration. The results of this study yielded
significant differences in math between students in the control and experimental groups.
This study utilized the group guidance portion of the SSS to determine whether its
implementation would have a statistically significant impact on GHSGT scores for students in a
rural high school who had not yet successfully completed the GHSGT in the required four
content areas of math, ELA, science, and social studies.
Students who had previously failed one or more tests of the GHSGT battery were eligible for
participation in this study. The study began with 147 potential participants, but the number of
students decreased over the course of the study due to students dropping out of school, passing
the required graduation test, or using their EOCT score to replace their failing GHSGT in the
same subject area. More information is provided in Chapter 3. Contrary to the results achieved
in the study by Brigman and Campbell (2003), participation in the group guidance portion of the
SSS showed no significant impact on students' test scores in this research project. The
non-normal data and the small sample sizes may have affected the results of this study.
Another factor that may have impacted the results of this study is the fact that only the
group counseling portion of the program was used. The main goal of this study was to determine
if the group guidance portion would significantly impact students' standardized test scores. This
study adds to the body of research indicating that the SSS program will most likely yield the best
results if done on a district-wide or school-wide scale, especially if provided with administrative
support.
Another factor that may have impacted the results of this research is the number of times
students attempted to retake previously failed tests. The four academic tests of the GHSGT were
taken in the spring of the student’s junior year. If students failed one or more of the tests, they
were given four additional opportunities to take the test before graduation. If students did not
pass all four academic tests before graduation, they received a certificate of attendance, not a
high school diploma. Even after the date of graduation, students were allowed to continue to
attempt a failed graduation test(s) for an unlimited number of times.
Students who fail their exit exams are classified as dropouts and face the same risk
factors as students who drop out of school before graduation (Atkinson & Geiser, 2009; Bruce et
al., 2009). Since exit exams are designed to help schools produce graduates who show evidence
of college and career readiness upon completion of high school (Berger, 2000; Bush, 2001;
Goertz & Massell, 2005; Stillwell et al., 2011), it is clear that more research is necessary to
determine if taking a graduation test multiple times is indeed the best way to determine if a high
school student has the knowledge needed to enter college or the workforce after graduation.
The most recent report from the CEP highlights the current status and expected changes
in testing in American schools. McIntosh (2012) indicates that more than 70% of students,
nationwide, continue to be impacted by high school exit exams. While some states, such as
Georgia, have eliminated exit exams, other states still have or are implementing exit exams
(McIntosh, 2012).
Conclusions
The results of the research in this experiment indicated that the group guidance portion of
the SSS program did not provide a statistically significant impact on GHGST scores for students
in this sample. Previous studies revealed that the SSS program was successful in helping
students achieve passing scores on the FCAT (Brigman & Campbell, 2003; Brigman et al., 2007;
Campbell & Brigman, 2005; Webb et al., 2005). These studies implemented the SSS
program on a district-wide basis. As previously noted in this chapter, only the group guidance
portion of the SSS program could be used for this group of students. The results of this study
may have been found to be significant if the administration had allowed the program to be
implemented on a district-wide or school-wide basis. Additionally, this study highlights the
difficulty of conducting a study in the public school arena. The pressure on school
administrators, teachers, and counselors to find the successful formula for increasing student test
scores is enormous. In an effort to be successful, various programs seem to come and go in
schools. Some programs may not be fully implemented or given sufficient time to work simply
because these programs do not appear to produce quick results.
Another factor in student performance is the fluidity with which students come and go in
the schools. Some students may enroll, withdraw, and reenroll multiple times over their high
school careers. Students may be married, have children, take care of sick parents, or have a
number of other responsibilities and life circumstances that may impact their ability to do well
on standardized tests. There was one student in this study who was married with a young child.
He left school each day to go to work in a local factory. He worked the 4 p.m. to 12:30 a.m.
shift, went home to sleep, and got up at 5 a.m. each day to complete homework and get to school
by 7:30 a.m. He was one of several students in this study who had life circumstances that
negatively impacted academic achievement.
Another factor impacting a study such as this one is the rapid change in state criteria for
exit exams and graduation requirements. Since the implementation of the A+ Educational
Reform Act of 2000, the state of Georgia has implemented eight changes in its testing and exit
exam policies (GADOE, n.d-c). Georgia is not alone, however, in its rapidly changing testing
strategies. In her comprehensive report on national exit exams, McIntosh (2012) indicated that
“because events in this field move quickly some policies will undoubtedly have changed soon
after publication of this report” (p. 5). The speed with which some states change their testing
policies makes it difficult to conduct research. There are very few studies utilizing the GHSGT.
This study is an example of the difficulties a researcher may encounter when attempting to find
interventions that help students realize success on exit exams.
Further research is warranted to determine if implementation of the comprehensive SSS
program, including classroom guidance, parent sessions, and group guidance sessions, might
provide a statistically positive result on exit exams similar to the GHSGT.
Implications
Improving students’ test scores has become a prominent focus not only for educators, but
also for politicians, parents, employers, and the nation as a whole. Research focusing on
predictive factors (Geiser, 2009; Zwick & Himelfarb, 2011), environmental and social factors
(Geiser, 2009), external factors (Figazzolo, 2009; Geiser, 2009), and remediation methodologies
(Misco, 2010) is found in the existing body of research. Outside of the arena of formal research,
one can find a myriad of newspaper and Internet articles addressing schools’ standardized test
scores, indicating the broad-based concern over testing in the public school system. A large
variety of opinions, both positive and negative, are evident in the national debate over
standardized testing for elementary, middle, and high school students. In the current climate of
accountability, a school’s standardized test scores are a critical measure of the school’s success.
Issues such as teachers and administrators cheating on their school’s standardized tests
(Vogell, Perry, Judd, & Pell, 2012), concern over the efficacy of using student test scores for
teacher evaluation (Kratochwill, 2013), and claims that test scores are impacted by a student’s
family wealth and/or social status (White et al., 2016) are contentious topics for many who are
concerned about the present condition of education. With the continuing focus on test scores as
a means of quality measurement for students, educators, and schools as a whole, educators will,
of necessity, continue to search for methodologies that help students improve performance on
standardized tests.
The intent of this study was to determine if the group guidance portion of the SSS
program would be effective in helping students pass the required standardized tests for
graduation. This research indicated that there was no statistically significant difference in the
standardized test scores of the students in the treatment group for the GHSGT math, social
studies, and ELA groups. The SSS program was designed to be implemented as a system-wide program,
involving multiple grade levels, classroom guidance lessons, parent instruction, and group
guidance over the course of a school year in preparation for scheduled standardized testing. The
majority of the research completed, thus far, has been for first-time test takers. It is important to
note that this research was completed in a single-school setting for students who had previously
failed one or more GHSGT. As with any research, varying circumstances may impact the
statistical outcome of an experiment. Much of the available research on student achievement
deals with predictive and correlational factors that impact students' standardized test scores.
More research is needed that specifically addresses effective strategies for helping students
improve their ability to pass standardized achievement tests.
Little research is available that addresses remediation for students who have previously
failed standardized tests. Many schools implement some sort of remediation program for their
students who fail high school exit exams, but on many occasions the remediation is random
and/or short-lived and empirical data is not easily found on the success of those programs.
Levine (2012) found a greater impact from pretest scores and race than from remediation for
high school students in Arizona who had previously failed the math portion of Arizona's
Instrument to Measure Standards. Rothman and Henderson (2011) found that students who had participated in
tutoring sessions were more successful on a standardized math test. Other research indicates that
faculty involvement in tutoring and other interventions (Biesinger & Crippen, 2008), remediation
as a scheduled class (Biesinger & Crippen, 2008), and differentiation (Grimes & Stevens, 2009)
provided positive impact on standardized test scores. Little, McCoach, and Reis (2014) found
that differentiation in the classroom setting provided a stronger positive impact on student
standardized test scores than the type of classroom setting (i.e., traditional versus learner-
centered).
Many states have implemented the Common Core State Standards curriculum and have
or are in the process of developing assessments for their public schools (Common Core State
Standards Initiative, n.d.; National Governor's Association, 2009). In the first testing session
for the Common Core Standards in New York, only "26 percent of students in third through
eighth grade passed the tests in English, and 30 percent passed in math, according to the New
York State Education Department” (Strauss, 2013).
Carol Burris, a high school principal in New York and previous supporter of the
Common Core standards, maintained that principals, teachers, and students feel pressured to do
well on standardized exams. She indicated that educators were engaging in test preparation
activities to assist students in raising their standardized test scores, even to the point of
purchasing expensive test preparation materials from the test manufacturer (Strauss, 2013).
An unexpected result of this study is the insight into the emotional toll that testing of this
nature has on students. Most states have a policy of unlimited retakes for students not passing
the exam the first time it is taken (McIntosh, 2012). The data from this study indicated the
complexity and confusion that allowing unlimited retakes introduces to the exit exam scenario.
The data for this study were collected over a two-year period. Most of the students in this
research project took their failed graduation test(s) from two to five times. One student, who
received a certificate of attendance and kept coming back to retake the GHSGT, retook the ELA
test five times, the math test five times, the science test four times, and the social studies test
seven times over the course of the two years of test scores used for data in this project. The data
from this study indicate that simply retaking the test is not necessarily going to help students pass the GHSGT.
Also evident from this study is the need for administrative support for programs that help
students improve their test scores. The school hosting this study had random study sessions
available to students. Subject area teachers offered most of the study sessions, but students were
not required to attend since the sessions were held before or after school.
When possible, the administration asked that students be scheduled into an appropriate
academic course for remediation, but the district administration would not allow students to be
rescheduled into a course they had previously passed. For example, students who did not pass
the social studies GHSGT could not retake any of the history courses they had already passed.
The SSS program was presented to the administration. The administrators were
interested in seeing the results of the study but did not want to consider implementing the
program until this study was complete. However, over the course of this study, they
implemented an afterschool program that offered some tutoring in academic areas and hosted a
few Saturday study sessions in the weeks before the GHSGT testing dates. They also began
encouraging students who were failing classes or had failed a GHSGT to move to another
educational setting that allowed them to work at an individual pace. These changes impacted
this study and in some cases reduced the potential data set.
The biggest impact on this study, however, came when the state decided to allow a
student's passing EOCT score to replace a failing GHSGT score in the corresponding academic
area. The new law was implemented retroactively, which greatly affected the data set for this
study and resulted in the loss of 41 potential participants.
All of these scenarios highlight the quickly changing landscape in which educators
currently work. The public demands improvement. If that improvement is not quickly seen in
the data, change is expected. This fluid environment of rapid modification makes long-term
data collection difficult, but it is very representative of the problems surrounding
comprehensive research on exit exam testing in public schools.
Limitations
Factors such as parental pressure, additional tutoring, and administrative policies had the
potential to impact student scores and could not be controlled. Additionally, some students
may have had learning disabilities, cognitive deficits, physical difficulties, anxiety issues, or
other factors that may have negatively impacted their ability to perform well on tests. Evaluating
predictor variables such as academic success in specified subject areas and other predictive
factors that correlate with standardized test scores would be helpful in determining the necessity
for remediation and intervention for students taking standardized tests (Atkinson & Geiser, 2009;
Bruce et al., 2009; Dennis, 2010; Geiser, 2009; Nichols, 2003). Intervention strategies that focus
on cognitive, social, and self-management skills have been found to be successful in helping
students perform better academically (Bruce et al., 2009; Silver et al., 2008).
Tutoring sessions for the GHSGT were offered on a continuing basis and were open to all
students, but students were not required to attend. Some students received remedial help within
the classroom setting if their schedules allowed. No attendance records for specific students
were available during this research.
Lack of student engagement is also a limitation in research on high-stakes exit exams.
Engagement impacts student attendance, academic performance, and testing (Balfanz et al.,
2007a, 2007b; Garriott, 2007). Completing the SSS guidance sessions while taking the content
course for the area to be tested, and then taking the test immediately upon finishing that course,
could potentially engage the student and influence the success of the program. The
guidance session assignments required students to set goals each week and to monitor their
progress toward those goals. Setting goals in light of an upcoming high-stakes test could influence
engagement and assist a student’s effort in learning the material necessary to successfully
perform on the test.
Maturation is also a common threat to internal validity when students are tested multiple
times. Students were allowed unlimited retakes of the test(s) they had failed. The participants in
this study had previously failed at least one of the tests from one to five times. While the data
indicated that most of the students had scores on their retake attempts that only varied by a few
points, maturation may have impacted some students’ scores.
Recommendations for Future Research
While there was some positive impact on individual student test scores, the findings of this study
provided little statistical evidence that short-term guidance sessions in isolation are an
effective intervention for students attempting high-stakes standardized tests. Additional
research, with a much larger sample size, is needed to determine if guidance sessions would be
more effective as part of a system-wide effort to assist students in successfully completing these
exams. To be truly effective, the guidance sessions would require the support and participation
of the school administration so that guidance staff, or other interested staff members, could
conduct the treatment groups on a large scale.
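To give a concrete sense of what "a much larger sample size" might mean, the sketch below estimates the number of participants needed per group. This is an illustrative calculation under stated assumptions, not a prescription: a two-tailed comparison at an alpha of .05, 80% power, a medium effect size (Cohen's d = 0.5), and an adjustment for the Mann-Whitney U test (the test used for the group comparisons in this study) by its asymptotic relative efficiency of roughly 0.955. The sketch uses Python with the statsmodels library.

# Illustrative power analysis; all parameter choices are assumptions noted above.
from statsmodels.stats.power import TTestIndPower

# Per-group n for an independent-samples t-test: d = 0.5, alpha = .05, power = .80
n_t = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                  alternative='two-sided')

# Inflate for the Mann-Whitney U test's asymptotic relative efficiency (~0.955)
n_mwu = n_t / 0.955
print(f"Approximate participants needed per group: {round(n_mwu)}")  # about 67

Under these assumptions, roughly 67 students per group would be required, considerably more than a single rural high school's pool of repeat test-takers is likely to supply, which underscores the need for a system-wide effort.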
The SSS program includes both classroom guidance and parent training sessions (Webb
et al., 2005). The program was found to have a significant positive impact on students taking the
FCAT (Brigman et al., 2007; Webb et al., 2005). Further research, utilizing the comprehensive
program, including the parent training sessions, would provide insight into the potential impact
of this and similar programs on Georgia’s standardized tests and standardized tests in other
states.
Further research is needed to determine whether offering the guidance sessions before a
student's first attempt at an exit exam similar to the GHSGT would enable students to perform
better on the assessments. The program implemented in this research was delivered to students
who had already failed a GHSGT. Using predictor variables to identify at-risk students for
intervention is recommended before the student attempts the exit exam.
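As one hedged illustration of this recommendation, the Python sketch below shows how a simple logistic regression model could turn school records into a screening score that flags students for intervention before their first attempt. The feature names (GPA, attendance rate, prior course average) and all data values are hypothetical placeholders, not variables measured in this study.

# Hypothetical at-risk screening sketch; features and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: GPA, attendance rate, prior course average (hypothetical records)
X = np.array([[2.1, 0.85, 68],
              [3.4, 0.97, 88],
              [1.9, 0.78, 61],
              [2.8, 0.92, 75],
              [3.8, 0.99, 95],
              [2.3, 0.80, 70]])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = failed the exit exam on the first attempt

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate a new student's risk and flag for intervention above a 50% cut-off
new_student = np.array([[2.4, 0.82, 66]])
risk = model.predict_proba(new_student)[0, 1]
if risk > 0.5:
    print(f"Flag for intervention before the first attempt (estimated risk: {risk:.0%})")

In practice, such a model would need to be trained on several cohorts of historical records and validated, and any cut-off would be set in consultation with counselors rather than applied mechanically.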
While the guidance portion of the SSS program alone was not found to produce
statistically significant results in this study, educators know that any effort made to connect
with a student, whether in a small-group setting or individually, can have meaningful results for
that individual student. One of the students in the treatment group who passed her test after
previously failing it four times attributed her success directly to the skills she learned in the SSS
group guidance sessions. For this student, the results were life changing. After she learned that
she had passed her test, the college of her choice accepted her, giving her the opportunity to
earn her college degree and pursue her chosen career. For that student, this program was
significant.
The current study further contributes to the field by assisting educators and policymakers
in determining whether or not unlimited attempts at exit exams are actually beneficial for
students. States should also use the data from this and similar research to evaluate whether one
failed test should keep a student from graduating. States with tests designed like the GHSGT
may not issue a diploma to students even if those students have passed all but one of their
graduation tests. Georgia has now moved to an EOC exam given upon completion of an
academic course. The grade on the EOC counts as 20% of the student's course grade but will
not, by itself, prevent the student from graduating.
Additionally, a qualitative study on the impact of failing a high school exit exam is
recommended. Researching what happens to such students in both their careers and their
post-secondary education would be beneficial for those in the position of making decisions
about exit exams.
As standardized testing continues to be an important part of the assessment and
evaluation for school and student success, it is imperative that educators continue to search for
methodologies that will allow them to help students succeed on these exams. Accountability is
an important factor in today's schools, and ensuring that students are prepared for post-secondary
education and careers is a key part of that responsibility. Even more important, however, is that
good educators are continuously searching for better techniques to help their students succeed.
After all, helping students succeed is the foundational premise for excellence in education.
REFERENCES
Abd-El-Fattah, S. M. (2010). Garrison’s model of self-directed learning: Preliminary validation
and relationship to academic achievement. The Spanish Journal of Psychology, 13(2),
586–596.
Allensworth, E. M., & Easton, J. Q. (2005). The on-track indicator as a predictor of high school
graduation. Retrieved from the University of Chicago Consortium on School Research
website: http://consortium.uchicago.edu/publications/track-indicator-predictor-high-
school-graduation
American School Counselor Association. (n.d.). ASCA ethical standards for school counselors.
Retrieved from
http://schoolcounselor.org/asca/media/asca/Ethics/EthicalStandards2016.pdf
Amnesty International. (1996). The death penalty in Georgia: Racist, arbitrary and unfair.
Retrieved from http://www.amnesty.org/en/documents/amr51/025/1996/en/
Amrein, A. L., & Berliner, D. C. (2002). High-stakes testing, uncertainty, and student learning.
Education Policy Analysis Archives, 10(18). Retrieved from
http://epaa.asu.edu/ojs/article/viewFile/297/423
Ary, D., Jacobs, L. C., Razavieh, A., & Sorensen, C. (2006). Introduction to research in
education (7th ed.). Belmont, CA: Thomson Wadsworth.
Ascher, C., & Maguire, C. (2007). Beating the odds: How thirteen NYC schools bring low
performing ninth graders to timely graduation and college enrollment. Providence, RI:
Annenberg Institute for School Reform at Brown University.
Atkinson, R. C., & Geiser, S. (2009). Reflections on a century of college admissions tests.
Educational Researcher, 38(9), 665–676. doi:10.3102/0013189X09351981
Baker, O., & Lang, K. (2013). The effect of high school exit exams on graduation, employment,
wages, and incarceration (Working paper 19182). Retrieved from the National Bureau
of Economic Research website: http://www.nber.org/papers/w19182.pdf
Balfanz, R. (2008). Three steps to building an early warning and intervention system for
potential dropouts. Retrieved from the Everyone Graduates Center website:
http://new.every1graduates.org/three-steps-to-building-an-early-warning-and-
intervention-system/
Balfanz, R. (2009). Can the American high school become an avenue of advancement for all?
The Future of Children, 19(1), 17–36. doi:10.1353/foc.0.0025
Balfanz, R., Herzog, L., & Mac Iver, D. J. (2007a). Keeping students on a graduation path in
Philadelphia’s middle-grades schools. Retrieved from
http://www.lausd.k12.ca.us/SLC_Schools/docs/ms/rsrch/GraduationPathInMS%5B1%5D
.ppt
Balfanz, R., Herzog, L., & Mac Iver, D. J. (2007b). Preventing student disengagement and
keeping students on the graduation path in urban middle-grade schools: Early
identification and effective interventions. Educational Psychologist, 42, 223–235.
doi:10.1080/00461520701621079
Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and
Human Decision Processes, 50, 248–287. doi:10.1016/0749-5978(91)90022-L
Barr, R., & Dreeben, R. (1991). Grouping students for reading instruction. In R. Barr, M. Kamil,
P. Mosenthal, & P. D. Pearson (Eds.). Handbook for reading research (Vol. 2, pp. 885–
910). New York, NY: Longman.
Barton, P. (2010). National education standards: To be or not to be? Educational Leadership,
67(7), 22–29.
Berger, J. (2000). Does top-down, standards based reform work? A review of the status of
statewide standards-based reform. NASSP Bulletin, 84(612), 57–65.
doi:10.1177/019263650008461210
Biesinger, K., & Crippen, K. (2008). The impact of state-funded online remediation on
performance related to high school mathematics proficiency. Journal of Computers in
Mathematics and Science Teaching, 27(1), 5–17.
Brase, C. H., & Brase, C. P. (2006). Understandable statistics. Boston, MA: Houghton Mifflin.
Bridgeland, J. M., DiIulio, J. J., & Morison, K. B. (2006). The silent epidemic: Perspectives of
high school dropouts. Retrieved from
http://docs.gatesfoundation.org/Documents/TheSilentEpidemic3-06Final.pdf
Brigman, G., & Campbell, C. (2003). Helping students improve academic achievement and
school success behavior. Professional School Counseling, 7, 91–98.
Brigman, G. A., Webb, L. D., & Campbell, C. (2007). Building skills for school success:
Improving academic and social competence. Professional School Counseling, 10, 279–
288.
Brooks, J. S., & Miles, M. (2006). From scientific management to social justice . . . and back
again? Pedagogical shifts in the study and practice of educational leadership.
International Electronic Journal for Leadership in Learning, 10(21). Retrieved from
http://iejll.journalhosting.ucalgary.ca/index.php/ijll/article/viewFile/621/283
Bruce, A. M., Getch, Y. Q., & Ziomek-Daigle, J. (2009). Closing the gap: A group counseling
approach to improve test performance of African-American students. Professional School
Counseling, 12(6), 450–457. doi:10.5330/PSC.n.2010-12.450
Burton, N. W., & Ramist, L. (2001). Predicting success in college: SAT studies of classes
graduating since 1980 (College Board Report No. 2001-2). Retrieved from
http://research.collegeboard.org/sites/default/files/publications/2012/7/researchreport-
2001-2-predicting-college-success-sat-studies.pdf
Bush, G. W. (2001). No child left behind [Foreword]. Retrieved from
http://www.whitehouse.gov/news/reports/no-child-left-behind.html
Campbell, C., & Brigman, G. (2005). Closing the achievement gap: A structured approach to
group counseling. Journal for Specialists in Group Work, 30, 67–82.
doi:10.1080/01933920590908705
Carey, J. C., Dimmitt, C., Hatch, T. A., Lapan, R. T., & Whiston, S. C. (2008). Report of the
National Panel for Evidence-Based School Counseling: Outcome research coding
protocol and evaluation of Student Success Skills and Second Step. Professional School
Counseling, 11, 197–206. doi:10.5330/PSC.n.2010-11.197
Cattell, R. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.
Christle, C., Jolivette, K., & Nelson, C. M. (2007). School characteristics related to high school
dropout rates. Remedial and Special Education, 28(6), 325–339.
Comer, R. (2015). Abnormal psychology (9th ed.). New York, NY: Worth Publishers.
Common Core State Standards Initiative. (n.d.). Frequently asked questions. Retrieved from
http://www.corestandards.org/about-the-standards/frequently-asked-questions/
Conley, D. T. (2001). Rethinking the senior year. NASSP Bulletin, 85(625), 26–41.
doi:10.1177/019263650108562504
Conway, A. R. A., Cowan, N., Bunting, M. F., Therriault, D. J., & Minkoff, S. R. B. (2002). A
latent variable analysis of working memory capacity, short-term memory capacity,
processing speed, and general fluid intelligence. Intelligence, 30, 163–183.
doi:10.1016/S01602896(01)00096-4
DeCoster, J. (2006). Testing group differences using t-tests, ANOVA, and nonparametric
measures. Retrieved from http://www.stat-help.com/ANOVA%202006-01-11.pdf
Dennis, D. (2010). “I’m not stupid”: How assessment drives (in)appropriate reading instruction.
Journal of Adolescent & Adult Literacy, 53(4), 283–290. doi:10.1598/JAAL.53.4.2
Dillon, S. (2010, January 31). Obama to seek sweeping change in ‘No Child’ law. The New York
Times. Retrieved from
http://www.nytimes.com/2010/02/01/education/01child.html?scp=2&sq=no%20child%20left%20behind&st=cse
Downey, M. (2010, June 2). New national dropout rates: 25 percent of all students; nearly 40
percent of black and Hispanic kids fail to graduate on time [Atlanta Journal and
Constitution blog]. Retrieved from http://blogs.ajc.com/get-schooled-
blog/2010/06/02/new-national-dropout-rates-25-percent-of-all-students-nearly-40-
percent-of-black-and-hispanic-kids-fail-to-graduate-on-time/
Dryfoos, J. G. (1990). Adolescents at risk: Prevalence and prevention. New York: Oxford
University Press.
Durlak, J. A., & Weissberg, R. P. (2007). The impact of after-school programs that seek to
promote personal and social skills. Retrieved from The Collaborative for Academic,
Social and Emotional Learning website: http://www.casel.org/library/?tag=After-School
Durlak, J. A., & Weissberg, R. P. (2011). Promoting social and emotional development is an
essential part of students’ education. Human Development, 54, 1–3.
doi:10.1159/000324337
Durlak, J. A., Weissberg, R. P., & Pachan, M. (2010). A meta-analysis of after-school programs
that seek to promote personal and social skills in children and adolescents. American
Journal of Community Psychology, 45, 294–309. doi:10.1007/s10464-010-9300-6
Executive Office of the President. (2015). Elementary and secondary education act: A progress
report on elementary and secondary education. Retrieved from
https://www.whitehouse.gov/sites/whitehouse.gov/files/documents/ESSA_Progress_Rep
ort.pdf
FairTest. (2007). Criterion- and standards-referenced tests. Retrieved from
http://www.fairtest.org/criterion-and-standards-referenced-tests
Field, A. P. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Los Angeles, CA:
Sage.
Figazzolo, L. (2009). Impact of PISA 2006 on the education policy debate. Retrieved from the
Education International website: http://download.ei-
ie.org/docs/IRISDocuments/Research%20Website%20Documents/2009-00036-01-E.pdf
Firestone, W. A. (2007). Districts, teacher leaders, and distributed leadership: Changing
instructional practice. Leadership and Policy in Schools, 6, 3–35.
doi:10.1080/15700760601091234
Firestone, W., & Martinez, C. (2007). Districts, teacher leaders, and distributed leadership:
Changing instructional practice. Leadership and Policy in Schools, 6(1). doi:
10.1080/15700760601091234
Firestone, W. A., Mayrowetz, D., & Fairman, J. (1998). Performance-based assessment and
instructional change: The effects of testing in Maine and Maryland. Educational
Evaluation and Policy Analysis, 20(2), 95–113. doi:10.3102/01623737020002095
Franklin, C. (1992). Family and individual patterns in a group of middle-class dropout youths.
Social Work, 37, 338–344.
Garriott, M. (2007, May/June). Intervene now so they will graduate later. Principal, 60–61.
Retrieved from the National Association of Elementary School Principals website:
http://www.naesp.org/principal-archives
Garrison, D. R. (1997). Self-directed learning: Toward a comprehensive model. Adult Education
Quarterly, 48, 18–33. doi:10.1177/074171369704800103
Geiser, S. (2009). Back to the basics: In defense of achievement (and achievement tests) in
college admissions. Change: The Magazine of Higher Learning, 41(1), 16–24.
doi:10.3200/CHNG.41.1.16-23
Georgia Department of Education. (n.d.-a). Accountability. Retrieved from
http://www.gadoe.org/Curriculum-Instruction-and-
Assessment/Accountability/Pages/default.aspx
Georgia Department of Education. (n.d.-b). Answers to frequently asked question about AYP.
Retrieved from http://www.gadoe.org/AYP/Pages/AYP-FAQ.aspx
Georgia Department of Education. (n.d.-c). End of Course Tests (EOCT). Retrieved from
http://www.gadoe.org/Curriculum-Instruction-and-
Assessment/Assessment/Pages/EOCT.aspx
Georgia Department of Education. (n.d.-d). Georgia high school graduation tests (GHSGT).
http://www.gadoe.org/Curriculum-Instruction-and-
Assessment/Assessment/Pages/GHSGT.aspx
Georgia Department of Education. (n.d.-e). Georgia milestones assessment system. Retrieved
from http://www.gadoe.org/Curriculum-Instruction-and-
Assessment/Assessment/Pages/Georgia-Milestones-Assessment-System.aspx
Georgia Department of Education. (n.d.-f). GHSGT statewide scores. Retrieved from
https://www.gadoe.org/Curriculum-Instruction-and-
Assessment/Assessment/Pages/GHSGT-Statewide-Scores.aspx
Georgia Department of Education. (n.d.-g). Graduation requirements. Retrieved from
http://www.gadoe.org/External-Affairs-and-Policy/AskDOE/Pages/Graduation-
Requirements.aspx
Georgia Department of Education. (2011). 2011 AYP. Retrieved from
http://archives.doe.k12.ga.us/ayp2011/search.asp
Georgia Department of Education Assessment Research and Development Department. (2010).
An assessment and accountability brief: Validity and reliability for the 2009–2010
Georgia High School Graduation Tests. Atlanta, GA: Georgia Department of Education.
Georgia Department of Education Assessment Research and Development Department. (2011).
An assessment and accountability brief: Validity and reliability for the 2010–2011
Georgia End-of-Course Tests. Atlanta, GA: Georgia Department of Education.
GeorgiaStandards.org. (n.d.). Georgia Performance Standards (GPS): Frequently asked
questions. Retrieved from
http://www.GeorgiaStandards.org/standards/GPS%20Support%20Docs/Curriculum%20F
requently%20Asked%20Questions.pdf
Gewertz, C. (2016, January 26). States move to issue high school diplomas retroactively: New
laws give students who never passed their exit exams another chance to graduate.
Education Week. Retrieved from http://www.edweek.org/ew/articles/2016/01/27/states-
move-to-issue-high-school-diplomas.html
Goertz, M., & Massell, D. (2005). Holding high hopes: How high schools respond to state
accountability policies. Retrieved from The Consortium for Policy Research in Education
website: http://www.cpre.org/holding-high-hopes-how-high-schools-respond-state-
accountability-policies
Goffreda, C. T., Diperna, J. C., & Pedersen, J. A. (2009). Preventive screening for early readers:
Predictive validity of the dynamic indicators of basic early literacy skills (DIBELS).
Psychology in the Schools, 46(6), 539–552. doi:10.1002/pits.20396
Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test
performance: An intervention to reduce the effects of stereotype threat. Applied
Developmental Psychology, 24, 645–662. doi:10.1016/j.appdev.2003.09.002
Governor’s Office of Student Achievement. (n.d.-a). Exiting credentials for high school
completers in numbers [Data file]. Retrieved from http://gosa.georgia.gov/downloadable-
data
Governor’s Office of Student Achievement. (n.d.-b). K-12 public schools report card/indicators
& demographics/student and school demographics [Data file]. Retrieved from
http://gosa.georgia.gov/
Gray, W. M. (1978, March). Standardized tests based on developmental theory. Paper presented
at the Annual Meeting of the American Educational Research Association, Toronto,
Ontario, Canada.
GreatSchools Staff. (n.d.). High school exit exams: Issues to consider. Retrieved from
http://www.greatschools.org/gk/articles/high-school-exit-exams-issues/
Green, S. B., & Salkind, N. J. (2008). Using SPSS for Windows and Macintosh: Analyzing and
understanding data (5th ed.). Upper Saddle River, NJ.: Pearson.
Grimes, K. J., & Stevens, D. D. (2009). Glass, bug, mud. Phi Delta Kappan, 90, 677–680.
Retrieved from
http://homepage.scasd.org/cms/lib5/PA01000006/Centricity/Domain/127/Glass%20Bug
%20Mud.pdf
Gwynne, J., Lesnick, J., Hart, H. M., & Allensworth, E. M. (2009). What matters for staying on-
track and graduating in Chicago public schools: A focus on students with disabilities.
Retrieved from the University of Chicago Consortium on School Research website:
http://consortium.uchicago.edu/publications/what-matters-staying-track-and-graduating-
chicago-public-schools-focus-students
Haertel, E. H., & Herman, J. L. (2005). A historical perspective on validity arguments for
accountability testing. In J. L. Herman & E. H. Haertel (Eds.), Yearbook of the National
Society for the Study of Education, 104(2), 1–34.
Hamilton, R. J., & Akhter, S. (2009). Construct validity of the motivated strategies for learning
questionnaire. Psychological Reports, 104, 1–11. doi:10.2466/PR0.104.3.711-7
Hanushek, E. A., & Raymond, M. E. (2005). Does school accountability lead to improved
student performance? Journal of Policy Analysis and Management, 24(2), 297–327.
doi:10.1002/pam.20091
Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student
learning: A meta-analysis. Review of Educational Research, 66(2), 99–130.
Heilig, J. (2011). Understanding the interaction between high-stakes graduation tests and English
learners. Teachers College Record, 113(12), 2633–2669.
Hemelt, S., & Marcotte, D. (2013). High school exit exams and dropout in an increased era of
accountability. Journal of Policy Analysis and Management, 32(2), 323–349.
doi:10.1002/pam.21688
Holme, J., Richards, M., Jimerson, J., & Cohen, R. (2010). Assessing the effects of high school
exit examinations. Review of Educational Research, 80(4), 476–526. Retrieved from
http://www.jstor.org/stable/40927292
Human Rights Watch. (2001). Beyond reason: The death penalty and offenders with retardation.
Retrieved from http://www.hrw.org/sites/default/files/reports/ustat0301.pdf
Interview: Beyond “the stone age” of testing. The Princeton Review’s founder, John Katzman,
shares his views on better, more subtle assessment. (2004, March). Scholastic
Administr@tor. Retrieved from http://www2.scholastic.com/browse/article.jsp?id=304
Isaacson, W. (2009). How to raise the standard in America’s schools. Time, 173(16), 32–36.
Jaffe, M. (1998). Adolescence. New York, NY: John Wiley & Sons, Inc.
Janosz, M., LeBlanc, M., Boulerice, B., & Tremblay, R. E. (2000). Predicting different types of
school dropouts: A typological approach with two longitudinal samples. Journal of
Educational Psychology, 92(1), 171–190. doi:10.1037//0022-0663.92.1.171
Jerald, C. (2007). Keeping kids in school. Lessons for research about preventing dropouts.
Retrieved from the Center for Public Education website:
http://www.centerforpubliceducation.org/Main-Menu/Staffingstudents/Keeping-kids-in-
school-At-a-glance/Keeping-kids-in-school-Preventing-dropouts.html
Jerald, C. (2008). Benchmarking for success: Ensuring U.S. students receive a world-class
education. Washington, D.C.: National Governor’s Association, Council of Chief State
School Officers, and Achieve, Inc. Retrieved from
http://www.corestandards.org/assets/0812BENCHMARKING.pdf
Jones, S. M., & Bouffard, S. M. (2012). Social and emotional learning in schools: From
programs to strategies. Social Policy Report, 26(4), 1–22. Retrieved from the Society for
Research in Child Development website:
http://www.srcd.org/sites/default/files/documents/spr_264_final_2.pdf
Kane, M. C. (2015). The effect of student participation in student success skills on the academic
behaviors and key learning skills and techniques associated with college-career
readiness (Doctoral Dissertation). Retrieved from ProQuest Dissertations and Theses
database. (ProQuest No. 3730723)
Kitsantas, A., Winsler, A., & Huie, F. (2008). Self-regulation and ability predictors of academic
success during college: A predictive validity study. Journal of Advanced Academics,
20(1), 42–68.
Klein, S. P., & Bell, R. M. (1995). How will the NCAA’s new standards affect minority student-
athletes? Chance, 8(3), 18–21.
Kornhaber, M. L. (2004). Appropriate and inappropriate forms of testing, assessment, and
accountability. Educational Policy,18(1), 45–70. doi:10.1177/0895904803260024
Kratochwill, A. (2013). Are student test scores an accurate measure of high school
teachers’ effectiveness? Monthly Labor Review, 136(8). Retrieved from
http://www.bls.gov/opub/mlr/2013/beyond-bls/student-test-scores.htm
Kurlaender, M., Reardon, S. F., & Jackson, J. (2008). Middle school predictors of high school
achievement in three California school districts (California Dropout Research Project
Report #13). Retrieved from California Dropout Research Project website:
http://www.cdrp.ucsb.edu/pubs_reports.htm
Lacy, A. C., & LaMaster, K. J. (1996). Teacher behaviors and student academic learning time in
elementary physical education. Physical Educator, 53, 44–51.
Lauer, P., Akiba, M., Wilkerson, S., Apthrop, H., Snow, D., & Martin-Glenn, M. (2006). Out-of-
school time programs: A meta-analysis of effects for at-risk students. Review of
Educational Research, 76, 275–313.
Levine, T. D. (2012). Effects of a remediation class on standardized test retake (Doctoral
dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No.
3505926)
Little, C. A., McCoach, D. B., & Reis, S. M. (2014). Effects of differentiated reading instruction
on student achievement in middle school. Journal of Advanced Academics, 25(4), 384–
402. doi: 10.1177/1932202X145492
Luo, D., Thompson, L. A., & Detterman, D. K. (2003). The causal factor underlying the
correlation between psychometric g and scholastic performance. Intelligence, 31(1), 67–
83. doi:10.1016/S0160-2896(02)00113-7
MacGaw, B. (2008). The role of the OECD in international comparative studies of achievement.
Assessment in Education: Principles, Policy, and Practice, 15(3), 223–243.
Malecki, C. K., & Elliott, S. N. (2002). Children’s social behaviors as predictors of academic
achievement: A longitudinal analysis. School Psychology Quarterly, 17, 1–23.
doi:10.1521/scpq.17.1.1.19902
Mariani, M., Webb, L., Villares, E., & Brigman, G. (2015). Effects of participation in student
success skills on pro-social and bullying behavior. Retrieved from
http://tpcjournal.nbcc.org/effect-of-participation-in-student-success-skills-on-prosocial-
and-bullying-behavior/
Masten, A. S., & Coatsworth, J. D. (1998). The development of competence in favorable and
unfavorable environments: Lessons from research on successful children. American
Psychologist, 53(2), 205–220. doi:10.1037//0003-066X.53.2.205
Mathews, J. (2006, November 14). Just whose idea was all this testing? The Washington Post.
Retrieved from http://www.washingtonpost.com/wp-
dyn/content/article/2006/11/13/AR2006111301007.html
McGannon, W., Carey, J., & Dimmitt, C. (2005). The current outcome status of school
counseling outcome research (Monograph). Retrieved from ERIC database. (ED512567)
McIntosh, S. (2012). State high school exit exams: A policy in transition. Retrieved from the
Center on Education Policy website: http://www.cep-
dc.org/displayDocument.cfm?DocumentID=408
Miller, I., Freund, J. E., & Johnson, R. A. (1990). Probability and statistics for engineers.
Englewood Cliffs, NJ: Prentice Hall.
Miranda, A., Webb, L., Brigman, G., & Peluso, P. (2007). Student success skills: A promising
approach to closing the achievement gap for African Americans and Latino students.
Professional School Counseling, 10(5), 490–497.
Misco, T. (2010). High school exit exam. Kappa Delta Pi Record, 46(3), 121–126.
Montes, G., & Lehmann, C. (2004). Who will drop out of school? Key predictors from the
literature (No. T04-001). Retrieved from Children’s Institute, Inc. website:
http://www.childrensinstitute.net/sites/default/files/documents/T04-001.pdf
Morris, D., Bloodgood, J. W., & Perney, J. (2003). Kindergarten predictors of first- and second-
grade reading achievement. The Elementary School Journal, 104(2), 93–109.
doi:10.1086/499744
Nachar, N. (2008). The Mann-Whitney U: A test for assessing whether two independent samples
come from the same distribution. Tutorials in Quantitative Methods for Psychology, 4(1),
13–20.
National Alliance of Business. (2000). Improving performance: Competition in American public
education. Retrieved from ERIC database. (ED443147)
National Center for Education Statistics. (1992). National Education Longitudinal Study of 1988:
Characteristics of at-risk students in NELS:88 (NCES Report No. 92-042). Retrieved
from: http://nces.ed.gov/pubs92/92042.pdf
National Governors Association. (2009). Fifty-one states and territories join Common Core
State Standards Initiative. Retrieved from http://www.nga.org/cms/home/news-
room/news-releases/page_2009/col2-content/main-content-list/title_fifty-one-states-and-
territories-join-common-core-state-standards-initiative.html
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the
scientific research literature on reading and its implications for reading instruction
(reports of the subgroups). Washington, DC: National Institute of Child Health and
Human Development.
Nichols, J. D. (2003). Prediction indicators for students failing: The state of Indiana high school
graduation exam. Preventing School Failure, 47, 112–116.
doi:10.1080/10459880309604439
Noble, J., & Sawyer, R. (2002). Predicting different levels of academic success in college using
high school GPA and ACT composite score (ACT Research Report Series No. 2002-4).
Retrieved from ERIC database. (ED469746)
Noddings, N. (2004). High-stakes testing: Why? Theory and Research in Education, 2(3), 263–
269. doi:10.1177/1477878504046520
Organisation for Economic Co-operation and Development. (n.d.). Programme for international
student assessment (PISA). Retrieved from
http://www.oecd.org/department/0,3355,en_2649_35845621_1_1_1_1_1,00.html
Organisation for Economic Co-operation and Development. (2000). Measuring student
knowledge and skills: A new framework for assessment. Retrieved from
http://www.oecd.org/edu/school/programmeforinternationalstudentassessmentpisa/33693
997.pdf
Organisation for Economic Co-operation and Development. (2007). PISA 2006: Science
competencies for tomorrow’s world (Vol. 1: Analysis). Retrieved from
http://www.oecd.org/pisa/pisaproducts/39703267.pdf
Papay, J., Murnane, R., & Willett, J. (2014). High-school exit examinations and the schooling
decisions of teenagers: Evidence from regression-discontinuity approaches. Journal of
Research on Educational Effectiveness, 7(1), 1–27. doi:10.1080/19345747.2013.819398
Pedraza-Vidamour, B. (2008, May 24). One point keeps student from getting diploma. Times-
Herald.com. Retrieved from http://www.times-herald.com/local/1-point-keeps-student-
from-getting-diploma--472299
Phelps, R. P. (Ed.). (2005). Defending standardized testing. Mahwah, NJ: Lawrence Erlbaum
Associates.
Pintrich, P. R. (1988). A process-oriented view of student motivation and cognition. In J. Stark &
L. Mets (Eds.), Improving teaching and learning through research: New directions for
institutional research (Vol. 57, pp. 65–79). San Francisco, CA: Jossey-Bass.
Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components
of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40.
doi:10.1037/0022-0663.82.1.33
Plucker, J. A., & Esping, A. (Eds.). (2014). Alfred Binet. Retrieved from the Human Intelligence
website: http://www.intelltheory.com/binet.shtml
Progressive era. (n.d.). Retrieved from K12 academics website:
http://www.k12academics.com/history-education-united-states/progressive-era
Pytel, B. (2007). State graduation exam debate. Retrieved from Suite101 website:
http://educationalissues.suite101.com/article.cfm/state_graduation_exam_debate
Raymond Bernard Cattell. (n.d.). St. Thomas University. Retrieved from
http://www.stthomasu.ca/~jgillis/bio.htm
Ridgell, S. D., & Lounsbury, J. W. (2004). Predicting academic success: General intelligence,
“big five” personality traits, and work drive. College Student Journal, 38(4), 607–618.
Rohde, T. E., & Thompson, L. A. (2007). Predicting academic achievement with cognitive
ability. Intelligence, 35(1), 83–92. doi:10.1016/j.intell.2006.05.004
Rothman, T., & Henderson, M. (2011). Do school-based tutoring programs significantly improve
student performance on standardized tests? RMLE Online: Research in Middle Level
Education, 34(6), 1–10.
Savitz-Romer, M., & Jager-Hyman, J. (2009). Stronger together. Principal Leadership, 9(8),
48–53.
Scarborough, H. (1998). Early identification of children at risk for reading disabilities:
Phonological awareness and some other promising predictors. In B. Shapiro, P. Accardo,
& A. Capute (Eds.), Specific reading disability: A view of the spectrum (pp. 75–119).
Timonium, MD: York.
Scarborough, H. (2001). Connecting early language and literacy to later reading (dis)abilities:
Evidence, theory, and practice. In S. Neuman & D. Dickinson (Eds.), Handbook of early
literacy research (pp. 97–110). New York: Guilford.
Schlenoff, D. (2015). Challenging the immigrant: The Ellis Island intelligence tests, 1915.
Retrieved from the Scientific American website:
http://www.scientificamerican.com/article/ellis-island-challenging-the-immigrant/
Sherman, W. L., & Theobald, P. (2001). Progressive era rural reform: Creating standard schools
in the Midwest. Journal of Research in Rural Education, 17(2), 84–91.
Silver, D., Saunders, M., & Zarate, E. (2008). What factors predict high school graduation in the
Los Angeles Unified School District? (California Dropout Research Project Report #14).
Retrieved from California Dropout Research Project website:
http://www.cdrp.ucsb.edu/pubs_reports.htm
Silverstein, A. (2000). Standardized tests: The continuation of gender bias in higher education.
Hofstra Law Review, 29, 669–1401.
Simons-Morton, B., & Chen, R. (2009). Peer and parent influences on school engagement among
early adolescents. Youth & Society, 41(1), 3–25. doi:10.1177/0044118X09334861
Simpson, F. M. (2009). An analysis of factors that influence success in a low socioeconomic
Georgia middle school (Doctoral dissertation). Retrieved from
http://digitalcommons.liberty.edu/doctoral/143/
Slavin, R. E., & Madden, N. A. (2006). Reducing the gap: Success for all and the achievement of
African American students. The Journal of Negro Education, 75(3), 389–400. Retrieved
from http://www.jstor.org/pss/40026810
Snow, C., Burns, M., & Griffin, P. (1998). Preventing reading difficulties in young children.
Washington, DC: National Academy Press.
Spears, T. (2008, July 25). Gender gap bridged over math. The Province. Retrieved from
http://www.canada.com/story.html?id=987a4bc3-991a-4c1a-8160-1a00e2735b4b
Stillwell, R., Sable, J., & Plotts, C. (2011). Public school graduates and dropouts from the
common core of data: School year 2008–09 (NCES 2011-312). Retrieved from National
Center for Education Statistics website: http://nces.ed.gov/pubs2011/2011312.pdf
Storch, S., & Whitehurst, G. (2002). Oral language and code-related precursors to reading:
Evidence from a longitudinal structural model. Developmental Psychology, 38, 934–947.
Strauss, V. (2013, March 4). Principal: "I was naive about common core." The Washington Post.
Retrieved from
http://www.washingtonpost.com/blogs/answersheet/wp/2013/03/04/principal-i-was-
naive-about-common-core/
Sum, A., Khatiwada, I., McLaughlin, J., & Palma, S. (2009). The consequences of dropping out
of high school: Joblessness and jailing for high school dropouts and the high cost for
taxpayers. Retrieved from http://www.northeastern.edu/clms/wp-
content/uploads/The_Consequences_of_Dropping_Out_of_High_School.pdf
Swanson, C. B. (2004). Who graduates? Who doesn't? A statistical portrait of public high
school graduation, Class of 2001. Retrieved from Urban Institute Education Policy
Center website: http://www.urban.org/publications/410934.html
Technical College System of Georgia. (2015). V. B. 1. Admissions requirements. Retrieved from
http://tcsg.edu/tcsgpolicy/docs/V.B.1.Admissions_Requirements.html
Urbina, I. (2011). The effects of student participation in the cultural Spanish translation of the
Student Success Skills program on high school student achievement (Doctoral
Dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No.
349646)
U.S. Department of Education. (n.d.). Every student succeeds act (ESSA). Retrieved from
http://www.ed.gov/ESSA
U.S. Department of Education. (2009). Race to the top program executive summary. Retrieved
from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf
U.S. Department of Education. (2010). A–Z Index for No Child Left Behind. Retrieved from
http:// http://www2.ed.gov/nclb/index/az/index.html
U.S. Department of Education. (2015). U.S. high school graduation rate hits new record high.
Retrieved from http://www.ed.gov/news/press-releases/us-high-school-graduation-rate-
hits-new-record-high
U.S. Government Accountability Office. (2005). No child left behind act: Education could do
more to help states better define graduation rates and improve knowledge about
intervention strategies (GAO Report 05-879). Retrieved from
http://www.gao.gov/new.items/d05879.pdf
Vogell, H., Perry, J., Judd, A., & Pell, M. B. (2012, March 25). Suspicious scores across the
nation. The Atlanta Journal-Constitution, pp. A1, A12–A13.
Vogler, K. E. (2008). Comparing the impact of accountability examinations on Mississippi and
Tennessee social studies teachers’ instructional practices. Educational Assessment, 13, 1–
32. doi:10.1080/10627190801968158
Wainer, H. (2006). Book review. [Review of the book Defending Standardized Testing, by R. P.
Phelps]. Journal of Educational Measurement, 43(1), 77–84. doi:10.1111/j.1745-
3984.2006.00005.x
Walberg, H. J., & Paik, S. J. (2000). Educational practices series–3: Effective educational
practices. Retrieved from http://www.ibe.unesco.org/en/services/online-
materials/publications/educational-practices.html
Wang, L., Beckett, G. H., & Brown, L. (2006). Controversies of standardized assessment in
school accountability reform: A critical synthesis of multidisciplinary research evidence.
Applied Measurement in Education, 19, 305–328. doi:10.1207/s15324818ame1904_5
Wang, M. C., Haertel, G. D., & Walberg, H. J. (1994). What helps students learn? Educational
Leadership, 57, 74–79.
Webb, L. D., Brigman, G. A., & Campbell, C. (2005). Linking school counselors and student
success: A replication of the Student Success Skills approach targeting the academic and
social competence of students. Professional School Counseling, 8(5), 407–413.
Wentzel, K. R. (1994). Relations of social goal pursuit to social acceptance, classroom behavior,
and perceived social support. Journal of Educational Psychology, 86(2), 173–182.
doi:10.1037//0022-0663.86.2.173
White, G. W., Stepney, C. T., Hatchimonji, D. R., Moceri, D. C., Linsky, A. V., Reyes-Portillo, J.
A., & Elias, M. J. (2016). The increasing impact of socioeconomics and race on
standardized academic test scores across elementary, middle, and high school. American
Journal of Orthopsychiatry, 86(1), 10–23. doi:10.1037/ort0000122
Whitehurst, G., & Lonigan, C. (2001). Emergent literacy: Development from prereaders to
readers. In S. Neuman & D. Dickinson (Eds.), Handbook of early literacy research (pp.
11–29). New York: Guilford.
Wiebe, R. H. (1967). The search for order, 1877–1920. New York: Hill and Wang.
Wiliam, D. (2010a). Standardized testing and school accountability. Educational Psychologist,
45(2), 107–122. doi:10.1080/00461521003703060
Wiliam, D. (2010b). What counts as evidence of educational achievement? The role of
constructs in the pursuit of equity in assessment. Review of Research in Education, 34, 254–284.
doi:10.3102/0091732X09351544
Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal
of Educational Psychology, 81(3), 329–339. doi:10.1037//0022-0663.81.3.329
Zins, J. E., Weissberg, R. P., Wang, M. C., & Walberg, H. J. (2004). Building academic success
on school social and emotional learning. New York: Teachers College Press.
Zwick, R., & Himelfarb, I. (2011). The effect of high school socioeconomic status on the
predictive validity of SAT scores and high school grade-point average. Journal of
Educational Measurement, 48(2), 101–121.
Zyromski, B., & Edwards, A. (2008). Utilizing cognitive behavioral interventions to positively
impact academic achievement in middle school students. Retrieved from ERIC database.
(EJ894786)
Appendix A: Student Success Skills Program
Appendix B: Approval by District School Superintendent to Conduct Research
Appendix C: Approval by School Principal to Conduct Research
Appendix D: Institutional Review Board Approvals
November 1, 2012

Donna A. Caudell
IRB Approval 1425.110112: The Effect of Group Counseling Intervention on the
Performance of Rural Students on the Georgia High School Graduation Tests or
End-of-Course Tests

Dear Donna,

We are pleased to inform you that your above study has been approved by the
Liberty IRB. This approval is extended to you for one year. If data collection
proceeds past one year, or if you make changes in the methodology as it pertains to
human subjects, you must submit an appropriate update form to the IRB. The forms
for these cases were attached to your approval email.

Thank you for your cooperation with the IRB and we wish you well with your
research project.

Sincerely,

Fernando Garzon, Psy.D.
Professor, IRB Chair
Counseling
(434) 592-4054
Liberty University | Training Champions for Christ since 1971
Appendix E: Parent or Guardian Permission Letter
Dear Students and Parents:
My name is Donna Caudell. I currently work as the Sophomore Counselor at Habersham Central High
School. Additionally, I am an Ed.D. candidate in the School of Education at Liberty University, under the direction
of Dr. Casey Reason. I am conducting research on a method intended to help students pass the Georgia High
School Graduation Tests. I am seeking students and high school completers who still need to pass one or more of
the Georgia High School Graduation Tests to participate in group guidance sessions using a curriculum entitled
Student Success Skills. The Student Success Skills program has been very successful in helping students
pass the FCAT (Florida’s Comprehensive Assessment Test), which is the test Florida students
must pass to earn their high school diploma. You are receiving a copy of this invitation letter for your possible
participation in this research project that seeks to explore the effectiveness of group guidance in helping students
raise their scores on the Georgia High School Graduation Tests.
Students returning the attached Informed Consent Form will be randomly chosen to participate in eight 45-
minute group guidance sessions which focus on skills that have been found to help students improve their test
scores. Students will participate voluntarily and no incentive will be offered. Additionally, you will find attached to
this letter an “Informed Consent” document that further explains the details of this research project. If you choose to
participate in this project, please read and sign that document, then return it to me with the attached Participant
Permission Form.
Your participation in this study is totally voluntary; nevertheless, your participation in each group guidance
session will contribute to the success of this research and will be greatly appreciated. Your participation will
contribute valuable information to the knowledge on best practices for assisting students in increasing their test
scores. All information on your participation in the group and any identifying information will be kept confidential
to the extent allowed by law. Please understand that students will be randomly assigned to groups and some of these
groups may not actively participate in the group sessions. These students will be considered as the control group
and only their anonymous test scores will be used in the study.
I’d like to assure you that your consent to participate will not affect your classroom grades or any other
aspect of your student life at HCHS and that you have the right to not participate or withdraw from participation at
any time without prejudice, penalty, or loss of benefits to which you are otherwise entitled. Finally, the results of the research
study may be published, but neither your name nor any other identifying information will be published.
Should you have any questions concerning this research study, please call me at (706) 778-7161, ext. 1130
or email me at [email protected]. Thank you in advance for your time and help.
Sincerely,
Donna A. Caudell, Ed.D. Candidate
706-778-7161, ext. 1130
Appendix F: Student Participation Consent Form
CONSENT FORM
THE EFFECT OF GROUP COUNSELING INTERVENTION ON THE PERFORMANCE OF RURAL
STUDENTS ON THE GEORGIA HIGH SCHOOL GRADUATION TESTS OR END-OF-COURSE TESTS
Donna A. Caudell
Liberty University
Department of Education
You are invited to be in a research study to determine if participating in the group guidance portion of the Student
Success Skills program helps students pass the Georgia High School Graduation Test. You were selected as a
possible participant because you still have one or more Georgia High School Graduation Tests to successfully
complete. We ask that you read this form and ask any questions you may have before agreeing to be in the study.
This study is being conducted by Donna A. Caudell, Liberty University Department of Education.
Background Information:
The purpose of this study is to determine if the 8-week guidance portion of the Student Success Skills program could
statistically impact the Georgia High School Graduation Test scores of students who had previously failed at least
one portion of the Georgia High School Graduation Test.
Procedures:
If you agree to participate in this study, we would ask you to do the following things:
1. Attend eight 45-minute group guidance sessions, conducted by Mrs. Caudell (Sophomore Counselor).
These sessions will be scheduled before, during, or after school, depending on what works best with all
group members’ schedules.
2. Listen and treat all other group members with respect.
3. Complete the goals you set for yourself at each group guidance session before the next session.
4. Take the Georgia High School Graduation Test that you still need to pass during the spring 2013
administration of the tests.
5. Maintain confidentiality of everything done and discussed during the group guidance sessions. While the
information discussed in these group sessions will be academic and social in nature, no one will be asked to
discuss information that is too personal or private to them. Respect for others, however, indicates that all
things discussed in these group guidance sessions are considered confidential and should not be discussed
with anyone outside of the group. There will be no audio or video taping of these group sessions.
Risks and Benefits of participating in the Study:
There are no risks involved in participating in this study greater than those you face in your everyday school life.
The study has the following risks:
1. Participation in this study does not guarantee that you will pass your Georgia High School Graduation Test.
2. If the meeting time of the group guidance session causes you to miss time from a class (no more than 45
minutes), you will be responsible for making up any work missed and may have to do so after school hours.
3. If you disclose information to the group or to Mrs. Caudell that indicates that you are being abused or
harmed in any way, that you pose a risk of harming yourself in any way, or that you intend to harm
someone else in any manner, Mrs. Caudell (as a mandated reporter) will be required to report this
information to the proper agency or authority. Mrs. Caudell will discuss her concerns with you and will not
make a report without your knowledge.
The benefits to participation are:
1. The group guidance sessions may help you pass the Georgia High School Graduation Tests.
2. This project may help you develop the skills for setting personal goals.
3. This research project may help you improve your ability to complete tasks and personal goals.
4. This project will expose you to methods for Social Problem Solving, which can be a large part of enjoying
school and being successful.
5. The group guidance sessions will teach you the “Seven Keys to mastering any course” as part of the Study
Skills curriculum.
Compensation:
You will not receive payment or any form of compensation for participating in this research project.
Confidentiality:
The records of this study will be kept private. In any sort of report we might publish, we will not include any
information that will make it possible to identify a subject. Research records will be stored securely and only
researchers will have access to the records.
All records from this study will be stored in a locked vault in the Guidance Office of Habersham Central High
School. The only persons who have access to this vault are HCHS Guidance Counselors, the HCHS Graduation
Coach, HCHS teachers, and HCHS administrators, all of whom have access to students’ test scores as a routine part
of their job in this school.
There are limits on confidentiality that cannot be controlled when working with students in a group setting. During
the first group session, each student will sign an agreement not to discuss anything said in group sessions with
persons outside of the group. Students are expected to abide by the agreement they will sign. If a student chooses to
violate this agreement, however, this will be outside the control of the researcher.
The researcher (Mrs. Caudell) will not share anything said in the group with anyone outside the group, unless there
is a statement of abuse, harm, or imminent danger as indicated in the previous “Risk” statement.
Voluntary Nature of the Study:
Participation in this study is voluntary. Your decision whether or not to participate will not affect your current or
future relations with Liberty University or Habersham Central High School. If you decide to participate, you are
free to not answer any question or withdraw at any time without affecting those relationships.
Contacts and Questions:
The researcher conducting this study is Donna A. Caudell. You may ask any questions you have now. If you have
questions later, you are encouraged to contact Mrs. Caudell at [email protected] or 706-778-7161,
ext. 1130. You are also welcome to come to the HCHS Guidance Office and speak with Mrs. Caudell with any
questions or concerns you may have regarding this research project.
You may also contact the Liberty University faculty advisor for this study. He is Dr. Casey Reason
([email protected]). His phone number is 419-724-3391.
If you have any questions or concerns regarding this study and would like to talk to someone other than the
researcher, or advisor, you are encouraged to contact the Institutional Review Board, Dr. Fernando Garzon, Chair,
1971 University Blvd, Suite 1837, Lynchburg, VA 24515 or email at [email protected].
You will be given a copy of this information to keep for your records.
Statement of Consent:
I have read and understood the above information. I have asked questions and have received answers. I consent to
participate in the study.
Signature: _________________________________________ Date: ________________
Signature of Investigator:___________________________ Date: __________________
IRB Code Numbers: [Risk]
IRB Expiration Date: [Risk]