DOI: https://doi.org/10.64010/PDRR7239
Abstract
As textbook technology supplements (TTS) become more prevalent in higher education as complementary tools to assist student learning, it is crucial that these technologies be evaluated for their usefulness and impact on student learning outcomes. The purpose of this case study was to understand the efficacy of implementing an adaptive learning technology (ALT) in an introductory business course and its impact on student performance in relation to course learning outcomes. The sample consisted of students in three sections of an introductory business course at a Mid-Atlantic four-year institution. The sample was divided into two groups: a control group that did not use the ALT and a treatment group that did. Students were then assessed via a pretest/posttest analysis as well as through aggregate exam performance. The results of this experimental design study showed a statistically significant positive difference in posttest scores, as well as aggregate exam scores, for students in the treatment group as compared to students in the control group. These results provide preliminary support that particular student populations could experience improved performance by utilizing an ALT deployed in a classroom.
Introduction
As textbook technology supplements (TTS) become more prevalent in higher education as complementary tools to assist student learning, it is crucial that these technologies be evaluated for their usefulness and impact on student learning outcomes. To date, this evaluative research is still in its infancy, and existing studies are rife with complications. Textbook publishers offer testimonials espousing the effectiveness of their particular learning technologies; however, these remain largely non-peer-reviewed case studies provided in sales literature (Pearson, 2014; McGraw-Hill, 2013a). On the academic front, in a meta-analysis conducted by Timmerman and Kruepke (2006), computer-assisted instruction (CAI) was found to enhance undergraduate student performance in traditional lecture/discussion-type classes, particularly when the technology was used across multiple units. That analysis has been criticized for its large number of moderating variables, including various media richness constructs, students' field of study, and publication time, all of which could potentially "cloud" the impact of the effectiveness of select technologies (Gearhart, 2016, p. 9). As a result, it has been suggested that the component parts of TTS be evaluated separately, in hopes of understanding the contribution of individual technologies to student learning outcomes (Sellnow, Child, & Ahlfeldt, 2005; Gearhart, 2016).
A few studies have been conducted isolating a particular component of TTS technology. One such technology studied on a standalone basis is LearnSmart, an adaptive learning tool offered by McGraw-Hill as part of its Connect package. This tool assesses students' understanding of a variety of topics and can redirect struggling students to additional content, while students who have mastered the material can progress through the text. This is accomplished by tapping into students' metacognitive responses regarding their confidence level for questions answered (McGraw-Hill, 2013a). The technology also generates progress and usage reports, allowing instructors to assess student proficiency and identify areas to target for improvement. Although McGraw-Hill claims greater learning efficacy as a result of LearnSmart usage (McGraw-Hill, 2013a), several studies have demonstrated no correlation between LearnSmart usage and student exam performance.
In one instance, Gearhart (2016) conducted a posttest-only experimental study using students at a mid-size, southwest university. The participants were juniors and seniors in an interpersonal communication class. Results demonstrated no significant differences in exam scores between treatment and control groups [t(55) = -.71, p = .48, d = .19]. In another instance, Griff and Matter (2013) conducted a pretest/posttest experimental design with anatomy and physiology students at several two-year and four-year institutions. Their results demonstrated no significant differences between pretest and posttest scores [F(1,581) = .19, p = .67], student grades (G = 9.05, d.f. = 4, p = .06), or retention (t = 1.68, d.f. = 5, p = .15). This research also concluded that the amount of time spent in the LearnSmart application did not impact student performance (r = .07, d.f. = 262, p = .25). Interestingly, two institutions involved in this study demonstrated improvement in both posttest scores and grades, potentially suggesting that some students could benefit from the usage of this technology.
The researchers of the present study hypothesize that the level of student preparedness for collegiate academic work could be a factor in the acceptance of, and benefits derived from, adaptive technologies. The business department considered empirical evidence collected by the college and data culled from bookstore sales, as well as faculty anecdotes, which suggested that: (a) approximately 60% of business students did not have textbooks, (b) as many as 65% of incoming students in the business program were first-generation college students (survey data), and (c) more than 50% received Pell grants, indicating that more than half of the students were from lower socioeconomic status (SES) households. In addition, data gleaned from incoming math, writing, and reading assessment scores suggest that many students come to the college underprepared for college-level work.
As indicated above, previous studies of ALTs analyzed upperclassmen, who usually have had time to develop and refine their metacognitive strategies, and health science students, who traditionally enter school more academically prepared (Salvatori, 2001). As such, it was determined to test the effectiveness of an ALT on incoming freshmen, a population generally considered to be underprepared (Balduf, 2009). Additionally, our college enrollment tends to be highly skewed toward first-generation and lower-SES students. First-generation college students have been shown to be underprepared academically, lacking in study and time management skills, and having lower self-efficacy compared to their non-first-generation counterparts (Maietta, 2016; McCarron & Inkelas, 2006; Pascarella, Pierson, Wolniak, & Terenzini, 2004). Lower-SES students demonstrate higher anxiety and attention problems. In addition, SES is positively correlated with cognitive development, influencing intelligence and academic achievement (Hackman, Farah, & Meaney, 2011).
This study was initiated to examine the effectiveness of the LearnSmart application specifically for college freshmen, skewed toward lower-SES and first-generation students, enrolled in an introductory-level business course. This population may come to school lacking the cognitive and adaptive strategies to succeed in a highly rigorous academic environment. The LearnSmart technology could assist this population in developing the cognitive strategies necessary to persist and to develop efficacious academic beliefs and behaviors. The effectiveness of the ALT was gauged through student performance on posttest and aggregate exam score results. Additionally, LearnSmart could potentially allow instructors to redesign their instruction toward a flipped classroom or blended learning archetype, rather than a traditional lecture model. Given what is reported in the extant CAI literature, along with the works of Griff and Matter (2013) and Gurung (2015), the following hypotheses are presented:
H01: There is no difference in pretest/posttest scores between a treatment group utilizing the LearnSmart adaptive learning technology and a control group.
H02: There is no difference in aggregate exam scores between a treatment group utilizing the LearnSmart adaptive learning technology and a control group.
LEARNSMART: OVERVIEW
LearnSmart is an electronic metacognitive reading comprehension tool embedded into SmartBook that allows instructors to tag learning objectives and students to engage with an adaptive, algorithm-based quiz system. The goal of the system is to let students zero in on the specific areas of the text deemed critical by the professor, maximizing students' time in the text. Additionally, LearnSmart's adaptive technology requires students to continue to engage with the text until they have reached mastery of those key learning objectives. McGraw-Hill Higher Education (MGHHE) touts Connect as a "digital teaching and learning environment that saves students and instructors time while improving performance over a variety of critical outcomes" (McGraw-Hill, 2013a).
Griff and Matter (2013) assessed the tool's effectiveness in introductory anatomy and physiology courses and described how the LearnSmart resource works. Within the LearnSmart reading module, students are presented with various types of questions, including multiple choice, drag and drop, fill-in-the-blank, and choose-all-that-apply. Once students have selected or written their answers, they are asked to evaluate their own self-awareness of understanding by indicating their level of confidence on a scale from "I know it" to "No idea". If correct, the number of items remaining decreases by one, whereas if incorrect, students are given another question assessing that specific learning objective later in the series. Students continue answering these questions until they have demonstrated mastery of the learning objectives assigned by the instructor. The algorithmic portion of this adaptive learning tool uses the frequency of questions answered correctly and incorrectly, along with the student's metacognitive confidence, to select subsequent questions and personalize learning (McGraw-Hill, 2013a). Grading is based upon completion by the due date. Instructors can then access reporting from these LearnSmart modules to provide just-in-time teaching and reinforcement of learning objectives by viewing frequently missed questions, the most difficult learning objectives, and metacognitive scores for individual students. Such an adaptive learning tool can benefit both students and instructors by changing the paradigm of classroom instruction (McGraw-Hill, 2013b).
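The exact selection algorithm is proprietary, but the mastery loop described above can be illustrated with a minimal sketch. The simulated student model (the p_correct and p_confident probabilities) and the simple re-queue rule below are assumptions for illustration, not MGHHE's implementation:

```python
import random

# Minimal, illustrative sketch of the adaptive loop described above.
# The real LearnSmart algorithm is proprietary; the student model and
# re-queue rule here are assumptions, not MGHHE's implementation.
def run_module(learning_objectives, p_correct=0.7, p_confident=0.6, seed=42):
    """Simulate one student: each tagged objective must be answered
    correctly (and confidently) once; misses are re-queued later in
    the series. Returns the total number of questions presented."""
    rng = random.Random(seed)
    queue = list(learning_objectives)
    rng.shuffle(queue)
    presented = 0
    while queue:
        objective = queue.pop(0)
        presented += 1
        correct = rng.random() < p_correct        # simulated response
        confident = rng.random() < p_confident    # "I know it" vs. "No idea"
        if not (correct and confident):
            # A miss sends another item on the same objective back
            # into the series rather than ending work on it.
            queue.append(objective)
    return presented

print(run_module([f"LO-{i}" for i in range(1, 15)]))  # 14 objectives
```

Under this sketch, a weaker simulated student (lower p_correct) is presented with more questions before completing the module, which mirrors the adaptive behavior the vendor describes.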
A primary benefit of student LearnSmart usage advocated by MGHHE is greater learning efficiency, as demonstrated in the numerous case studies provided on its website (McGraw-Hill, 2013a). Learning efficiency is the degree to which a TTS tool can help reduce overall study time or maximize gains within students' already limited study time. Theoretically, students are better able to understand areas of proficiency and deficiency through the LearnSmart tool (McGraw-Hill, 2013a, p. 4). As a result, it can pinpoint students' knowledge gaps, helping to direct their attention and study time where they are needed and thereby allowing for a more focused study plan. This improved focus, MGHHE claims, manifests as increased student performance. Although the MGHHE LearnSmart website offers case study results that support claims regarding this benefit (e.g., McGraw-Hill, 2013b), relatively few unbiased, published studies document the influence of LearnSmart on student performance.
METHOD
This experimental study examined the impact of an adaptive learning tool on students' performance in an introductory-level business course. Student performance, in the form of individual and aggregate test scores along with pretest and posttest results, was compared between sections to test whether LearnSmart had any significant impact on student performance outcomes. This section describes the study participants, including pre-study group comparisons, research procedures, and statistical procedures.
Participants
Participants (N = 57) were students enrolled in three sections of an introductory-level business course during the fall 2016 semester at a small business college in central Pennsylvania. Students were randomly assigned at the classroom level to either treatment or control groups, resulting in an experimental research design with 26 students in the treatment group and 31 in the control group. One of the daytime sections was designated for treatment conditions, while the other was designated as a control group. Since only one online section of the course was running in the fall term, half of its students were designated for treatment conditions, while the remaining students served as a control group. Students in the online section were unaware of the requirements of fellow students. All three sections were taught from a pre-designed and approved course template to ensure all work, assignments, and grading protocols were the same in each of the classes.
Although students were randomly assigned to the class sections, the control and treatment groups were compared across several demographic variables, including gender, age, program of study, credit hours completed, military status, and prior college experience. The treatment and control groups exhibited no significant differences across these demographic variables. As a whole, the online section had a higher mean age, but randomly assigning its students to either treatment or control conditions did not skew the age of either group. Next, the pretest scores of the groups were compared to detect any significant differences in knowledge of course-related content prior to any instruction. An ANOVA revealed no significant differences between the groups, F = .074, p = .787. A few studies have used overall GPA as an indicator of initial equivalence, since it has demonstrated a positive correlation with student performance (Cheung & Kan, 2002); however, since this study consisted mostly of freshmen in an introductory-level course, this was not possible.
In this study, students were randomly assigned to either the control or treatment group; for the online group, students were selected by lot. Interpersonal communication between students of the two daytime groups could not be limited or monitored, so students were kept unaware of the exact nature of the study beyond what was noted in the informed consent waiver they signed at the beginning of the course (see Appendix A). The online course was split within the class, as there was little concern about interpersonal communication; those students were unaware that there was a control group in the class. The Blackboard Learning Management System (LMS) was set up, through its built-in adaptive release function, so students could only see the assignments, discussion boards, and tests pertaining to their specific group. Adaptive release allows an instructor to configure an assignment in the Blackboard LMS so that only specific students can see and access it. No specific mention of Connect, SmartBook, or LearnSmart was made in the course; alternative terminology, such as "reading assessment," was used instead.
Students in the treatment groups were given free Connect access by McGraw-Hill Higher Education for the purposes of this study. A representative for MGHHE was also present one day during the first week of the term to assist the daytime treatment group, helping students log in and gain access to the Connect system.
Procedures
The classes were taught by different full-time faculty of the School of Business, at different times of the day, and in different rooms on the college campus. To mitigate extraneous variables affecting student outcomes, instructors followed a prescribed course design, including pre-designed quizzes, tests, written assignments and rubrics, and weighting scale. Additionally, the two professors teaching the courses met on a weekly basis to discuss teaching methodologies, topics covered, and grading so as to minimize their individual effect on student grading and study habits. It can also be assumed that the time of day and classroom made no significant difference in student learning and outcomes, as both daytime sections were morning classes in comparably sized classrooms on the main campus of the college.
In the control groups, students completed 25-point online reading assessment quizzes each week for ten weeks, which comprised 25% of the final course grade. These quizzes were given to ensure students were reading the text and were engaged in their own learning prior to coming to class each week. In the treatment groups, students completed 14 LearnSmart modules, covering one chapter apiece, for 25 points a week over the same ten weeks, likewise comprising 25% of the final course grade. The course ran for eleven weeks, so no quiz or LearnSmart module was assigned in the eleventh week in any section. The LearnSmart module is an online quiz through McGraw-Hill Connect that adapts to student responses to ensure mastery of content via an algorithmic process. Students were repeatedly quizzed on chapter content until mastery was demonstrated. LearnSmart was given in place of the weekly quizzes in the treatment groups.
All LearnSmart modules were created by the course designer as 40-minute modules for each chapter covered, as research by Rogers (2016) showed that 40 minutes in LearnSmart each week produced the most noticeable improvement in student outcomes. In accordance with this, students were required to complete 80 minutes of LearnSmart modules in some weeks, as the college runs on an accelerated eleven-week format. It must be noted that within LearnSmart, the course designer can adjust the amount of content delivered to students for each chapter. Although 40 minutes was assigned by the course designer for each module, we recognize that each student learns at a different pace and could take more or less time to complete each module; forty minutes was an approximate guideline for completion. The weekly quizzes consisted of 20 questions each, randomly selected from a pool of 40 questions.
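As an illustration, drawing each quiz as a simple random sample from a chapter's pool can be sketched as follows (the function and pool names are hypothetical, not the course's actual tooling):

```python
import random

def build_quiz(question_pool, n_items=20, seed=None):
    """Draw a quiz as a simple random sample, without replacement,
    from a chapter's question pool."""
    rng = random.Random(seed)
    return rng.sample(question_pool, n_items)

pool = [f"Q{i}" for i in range(1, 41)]  # hypothetical 40-question pool
print(build_quiz(pool, seed=7))         # one student's 20-question quiz
```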
Students were also given five non-cumulative, 40-question multiple-choice tests covering the 14 chapters of the text. All quizzes and tests were created by the course designer and were consistent throughout all sections of this study. The tests were given in weeks two, four, six, eight, and ten, with the posttest given during week eleven. Each test covered no more than four chapters of material from the text, and all questions were generated from the text material.
Of the features provided through Connect, we focused on LearnSmart specifically for the cognitive benefit it could provide our student population. Other resources are available through Connect, including interactive assignments, quizzes, tests, case studies, and videos. These tools were excluded from the study and were not used in the treatment group so as to isolate the effect of LearnSmart.
Statistical Analysis
Several measures were utilized to understand the contribution of the LearnSmart technology to student performance outcomes over that of the control group. Before any analysis of performance outcomes, potential bias in the assignment of students to treatment or control conditions was examined by evaluating whether significant mean differences existed between groups for pretest scores. Ideally, student GPAs would have also been considered, since GPA is a predictor of student performance (Cheung & Kan, 2002); however, since this was an introductory-level course, student GPAs were not available. To evaluate the LearnSmart adaptive learning tool, comparisons were made of pretest scores relative to posttest scores, both within groups and between groups. For between-group score differences, an analysis of covariance (ANCOVA) was employed. Because systematic error was addressed by the randomized assignment to treatment or control conditions, ANCOVA was chosen in an attempt to reduce error variance (Dimitrov & Rumrill, 2003). Tests of the assumptions associated with conducting an ANCOVA, particularly the linear relationship between pretest and posttest scores and the homogeneity of regression slopes, were conducted prior to analysis. In addition to pretest/posttest differences, independent samples t tests on aggregate exam scores of treatment and control groups were conducted to compare between-group differences.
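A minimal sketch of this pipeline in Python with statsmodels is shown below, run on synthetic data since the study's raw scores are not published; the column names and data-generating values are illustrative assumptions, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch of the ANCOVA pipeline described above, run on synthetic data
# (the study's raw scores are not published; all values are illustrative).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["treatment", "control"], [26, 31]),
    "pretest": rng.normal(45, 10, 57),
})
df["posttest"] = (30 + 0.6 * df["pretest"]
                  + np.where(df["group"] == "treatment", 12, 0)
                  + rng.normal(0, 8, 57))

# Assumption check: homogeneity of regression slopes -- the
# group x pretest interaction should be non-significant.
slopes = smf.ols("posttest ~ pretest * C(group)", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=2))

# ANCOVA proper: posttest by group, controlling for pretest.
ancova = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```

Fitting the interaction model first mirrors the assumption test named above: only if the group-by-pretest term is non-significant is the additive ANCOVA model interpreted.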
RESULTS
This study was initiated to examine the impact of an adaptive learning tool, LearnSmart, on student performance outcomes. The participants were enrolled in an introductory-level business course at a small central Pennsylvania college. Within-group comparisons were conducted on pretest/posttest differences for the treatment and control groups, while between-group comparisons were conducted to detect pretest/posttest differences between the groups. In addition, aggregate exam scores earned throughout the term were compared to identify significant differences.
Hypothesis Testing
H01 states that there would be no significant difference between the treatment and control groups in relation to pretest/posttest scores. Testing this hypothesis consisted of three steps: (a) testing for initial group equivalence, (b) examining the within-group differences between the pretest and posttest, and (c) demonstrating the effectiveness of the ALT by comparing between-group differences on pretests and posttests. Before conducting within- and between-group tests, a test of initial group equivalence was conducted on pretest scores between the two groups. This was done to confirm minimal sample differences between the treatment and control groups and to ensure the groups started at the same level of business-related knowledge. An ANOVA revealed no significant difference in business-related knowledge (Treatment M = .445, Control M = .461, F = .074, p = .787). This suggests that, at least initially, both groups were equal with respect to introductory business-related knowledge. To test H01, a paired samples t-test was first conducted to examine within-group differences at the time of the pretest and the posttest. As evidenced in Table 1, both treatment and control groups exhibited significant increases from Time 1 to Time 2. This suggests that both groups experienced exposure to content and materials sufficient to demonstrate an understanding of course-related materials according to course objectives.
Table 1. Paired Samples t-tests for Within-Group Differences

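These two steps, the equivalence check and the within-group comparison, can be sketched with SciPy as follows; the score vectors are illustrative placeholders, not the study's data:

```python
from scipy import stats

# Illustrative placeholder scores (not the study's data): pretest and
# posttest for the treatment group, and pretest for the control group.
treat_pre  = [44, 51, 39, 47, 55, 42, 48, 50]
treat_post = [70, 78, 63, 74, 85, 66, 75, 80]
ctrl_pre   = [45, 49, 41, 46, 52, 43, 47, 50, 44]

# (a) Initial group equivalence on pretest scores (one-way ANOVA).
F, p = stats.f_oneway(treat_pre, ctrl_pre)
print(f"equivalence: F = {F:.3f}, p = {p:.3f}")

# (b) Within-group pretest-to-posttest change (paired samples t-test).
t, p = stats.ttest_rel(treat_post, treat_pre)
print(f"within-group: t = {t:.2f}, p = {p:.4f}")
```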
Next, a between-group ANOVA was conducted to determine the extent to which the treatment group demonstrated significantly higher posttest scores at the end of the intervention as compared to the control group. At the completion of the course, the treatment group demonstrated a higher posttest score, M = 83.68 (SD = .132), than the control group, M = 68.58 (SD = .197), F = 14.82, p < .001. To test the result in a more robust manner, an analysis of covariance (ANCOVA) was conducted to determine whether the differences between the treatment and control groups were significant at Time 2 after controlling for the pretest scores at Time 1. Results from the ANCOVA reveal that when controlling for the pretest scores at Time 1, there was still a significant difference between the treatment and control groups for posttest scores (F = 14.88, p < .001). Lastly, exam scores earned throughout the term by the treatment and control groups were compared to determine differences. The treatment group demonstrated significantly higher average exam scores, M = 85.70 (SD = .10), throughout the term than the control group, M = 78.33 (SD = .164), t = 4.73, p < .001.
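The aggregate exam comparison is an independent samples t-test; a minimal sketch follows, again with placeholder values rather than the study's data:

```python
from scipy import stats

# Between-group comparison of aggregate exam scores (independent
# samples t-test); values are illustrative placeholders.
treatment_exams = [88.5, 84.0, 90.2, 81.3, 86.7, 89.1]
control_exams   = [79.4, 75.2, 81.0, 72.8, 77.5, 80.1]

t_stat, p_value = stats.ttest_ind(treatment_exams, control_exams)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```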
In sum, this analysis suggests that (a) the groups were equivalent at the beginning of the term with regard to initial knowledge of basic business concepts, (b) both the Treatment group and the Control group experienced significant increases from the pretest to the posttest, suggesting that the predetermined teaching strategies were successful, (c) the Treatment group experienced significantly greater improvement from pretest to posttest than the Control group, and (d) the Treatment group earned significantly higher average exam scores throughout the term than the Control group.
DISCUSSION
This study examined the impact of MGHHE's LearnSmart adaptive learning technology on student performance in an introductory-level business course. The paucity of evaluative research on the component parts of textbook technology supplements (TTS) is surprising given their proliferation at all levels of education. To understand the impact of these tools, they must be studied both in conjunction with each other and in isolation, within various student populations. The participants of interest for this study, in contrast to previous studies with null results, were college freshmen, highly skewed toward first-generation and lower socioeconomic status students.
The results of this study provide preliminary support that particular student populations could experience improved performance by utilizing an adaptive learning technology deployed in a classroom. Treatment participants experienced significant within-group improvements between pretest and posttest scores, as well as greater between-group improvement over the Control group. Additionally, the Treatment group demonstrated higher aggregate exam scores over the course of the entire term than the Control group. These findings are consistent with the prior research results of Gurung (2015), who compared the effectiveness of three separate TTS offerings across three semesters of an introductory psychology course. In investigating the relationship between the amount of time spent using LearnSmart and student exam performance, the author identified a significant, positive correlation such that the more time students spent with the LearnSmart modules, the higher they scored on exams (average r = .17).
The findings of this study, however, contradict the conclusions reached by Gearhart (2016) and Griff and Matter (2013), two studies that found no significant differences in exam scores between Treatment and Control groups. Of note, the participants of those studies were upperclassmen and health science students, who usually experience more rigorous high school curricula and face higher college acceptance standards. Prior to the start of this study, the authors hypothesized that incoming freshmen, who typically lack the well-developed cognitive strategies to properly engage with course reading materials, would benefit most from the use of an adaptive learning technology. In addition, the participants of this study were highly skewed toward first-generation college students and those of lower socioeconomic status, suggesting benefits could be derived from a metacognitive development application, which is why the choice was made to utilize the LearnSmart tool.
It is possible that other technologies examined in isolation could have a positive influence on student performance, or that various technologies utilized in conjunction could boost the effects on student performance. It is also realistic to suggest that the overuse of various technologies within one class creates confusion and frustration for students. More studies are necessary to understand the individual and combined contributions of these technologies. Additionally, this study focused on freshmen, with a large portion consisting of first-generation, low-SES college students. These groups should be studied separately to understand whether one particular group would benefit more from this, or other, ALTs.
It is also reasonable to suggest that this technology eventually becomes redundant for college students as they develop their own cognitive strategies to digest and comprehend text and other materials. At some point in their educational journey, resentment may set in regarding the usage of these technologies; the same may be said if these technologies are introduced later in a student's tenure. Studies should be conducted to determine whether a point of diminishing returns exists, where negative consequences outweigh potential benefits.
Limitations
According to Griff and Matter (2013), determining the impact of a specific learning tool within a diverse, dynamic academic environment is difficult, to say the least. This study identified a significant increase in student performance; however, there are many variables to consider when examining student learning. Any number of student characteristics, including self-efficacy, anxiety, and motivation, could potentially influence performance. In addition, since different instructors were assigned to treatment or control conditions, it is possible that instructor proficiency played a part in student performance, regardless of the efforts made to maintain consistency in the content, delivery, and assessment of the courses. Future studies should strive to have the same instructor deliver the course for both Treatment and Control groups to limit instructor influence on performance.
Conducting research in "real-world" conditions presents many challenges, and in this case the researchers would have preferred a larger sample size to study the effectiveness of the LearnSmart technology. Unfortunately, the sample was limited to participants enrolling in an introductory business course. The researchers decided to add an online course in order to boost the sample size; in doing so, the overall mean age of the sample was raised. Although the difference in mean age between groups remained statistically non-significant, based on the hypotheses presented, the researchers would have preferred to study all traditional college students, as opposed to including non-traditional students in the study.
Lastly, although every effort was made to conceal the fact that some students were utilizing the LearnSmart technologies while others were not, it is possible that students in Treatment and Control conditions discussed technology utilization in their respective classes. Such discussions could have demoralized Control participants who were not selected to engage with these learning technologies, negatively impacting their performance in the class (Cook & Campbell, 1979).
Conclusion
With the expanded use of textbook technology supplements in college courses comes a greater need to evaluate these technologies for their effectiveness in enhancing student experiences and learning. As suggested by previous research, these technologies should be examined in isolation from other supplemental materials to better understand their individual impact on student performance. This research was designed to evaluate the impact of McGraw-Hill's LearnSmart technology on students in an introductory-level business class. The results demonstrated a significant increase in pretest/posttest scores, as well as aggregate exam scores, for Treatment participants over Control participants, suggesting this technology improved student performance. The researchers speculate that the participants, highly skewed toward first-generation and lower socioeconomic status freshmen, reap more benefit from this particular ALT because they enter college lacking the cognitive strategies to fully engage with course reading materials.
Declaration of Conflicting Interests
At the time of this study, the researchers involved were not employed by, or in any way compensated by, McGraw-Hill Higher Education.
References
- Balduf, M. (2009). Underachievement among college students. Journal of Advanced Academics, 20(2), 274-294.
- Cheung, L., & Kan, A. (2002). Evaluation factors related to student performance in a distance-learning business communication course. Journal of Education for Business, 77(5), 257-263. doi:10.1080/08832320209599674
- Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.
- Dimitrov, D. M., & Rumrill, P. D. (2003). Pretest-posttest designs and measurement of change. Work, 20(2), 159-165.
- Gearhart, C. (2016). Does LearnSmart connect students to textbook content in an interpersonal communication course? International Journal of Teaching and Learning in Higher Education, 28(1), 9-17.
- Griff, E. R., & Matter, S. F. (2013). Evaluation of an adaptive online learning system. British Journal of Educational Technology, 44(1), 170-176.
- Gurung, R. A. (2015). Three investigations of the utility of textbook technology supplements. Psychology Learning and Teaching, 14(1), 26-35.
- Hackman, D. A., Farah, M. J., & Meaney, M. J. (2011). Socioeconomic status and the brain: Mechanistic insights from human and animal research. Nature Reviews Neuroscience, 11(9), 651-659. doi:10.1038/nrn2897
- Heikkilä, A., & Lonka, K. (2006). Studying in higher education: Students' approaches to learning, self-regulation, and cognitive strategies. Studies in Higher Education, 31(1), 99-117. doi:10.1080/03075070500392433
- Ishitani, T. T. (2006). Studying attrition and degree completion behavior among first-generation college students in the United States. Journal of Higher Education, 77(5), 861-885.
- Jackson, D., & LeFebvre, E. (2012). Why am I behind? An examination of low-income minority students' preparedness for college. McNair Scholars Journal, 13, 121-138.
- Maietta, H. (2016). Unfamiliar territory: Meeting the career development needs of first-generation college students. NACE Journal.
- McCarron, G. P., & Inkelas, K. K. (2006). The gap between educational aspirations and attainment for first-generation college students and the role of parental involvement. Journal of College Student Development, 47(5), 534-549.
- McGraw-Hill Higher Education (2013a). LearnSmart works. Retrieved from http://connect.customer.mcgraw-hill.com/studies/effectiveness-studies/learnsmart-works/
- McGraw-Hill Higher Education (2013b). Digital course solution improves student success. Retrieved from http://connectmarketing.mheducation.com.s3.amazonaws.com/wp-content/uploads/2013/07/Case_Psych_NMSU_Dolgov.pdf
- Pascarella, E. T., Pierson, C. T., Wolniak, G. C., & Terenzini, P. T. (2004). First-generation college students. Journal of Higher Education, 75(3), 249-270.
- Pearson (2014). MyLab & Mastering Humanities and Social Sciences: Efficacy implementation and results. Retrieved from http://pearsonmylabandmastering.com/northamerica/results/files/HSSLE_EfficacyResults_2014.pdf
- Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33-40. doi:10.1037//0022-0663.82.1.33
- Rogers, T. A. (2016). An evaluation of an adaptive learning tool in an introductory business course. (Doctoral dissertation). Retrieved from PQDT Open. (10128158)
- Salvatori, P. (2001). Reliability and validity of admissions tools used to select students for the health professions. Advances in Health Science Education, 6, 159-175.
- Sellnow, D. D., Child, J. T., & Ahlfeldt, S. L. (2005). Textbook technology supplements: What are they good for? Communication Education, 54(3), 243-253. doi:10.1080/03634520500356360
- Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-475.
- Timmerman, C.E., & Kruepke, K.A. (2006). Computer-assisted instruction, media richness, and college student performance. Communication Education, 55, 73-104. doi: 10.1080/03634520500489666