Digital technologies
Moderate impact, moderate cost, extensive evidence
Technical Appendix
Definition
In a school context, digital technology is mainly associated with computer- or technology-assisted strategies to support learning. Approaches in this area are very varied, but a simple distinction can be made between:
1. technology for students, including:
- learners using technology tools for problem solving and open-ended learning,
- technology taking on a teaching or tutoring role (e.g. computer-assisted instruction), and
2. technology for teachers such as interactive whiteboards or learning platforms.
By far the majority of technology studies are focused on student use.
Search Terms: digital technology; information and communications technology; word processing; computer/educational technology; online/e-learning; computer assisted instruction
Evidence Rating
There are 32 meta-analyses and quantitative syntheses, all suggesting a positive impact of digital technologies on pupils' learning. Twenty of these have been conducted in the last ten years. However, the focus of these meta-analyses varies widely, covering particular technologies (e.g. laptops, digital games, or intelligent tutoring), particular curriculum areas (e.g. reading or mathematics), and particular countries (e.g. Turkey or Taiwan). There is also considerable variation in the technical quality of the meta-analyses. The variation in effects is also very wide (from 0.16 to 1.6), making it difficult to draw out specific messages. Average impact has remained relatively consistent over time, suggesting that the general message of moderate positive impact is likely to remain relevant. Overall, the evidence is rated as extensive.
Cost Information
The total costs of using digital technologies – including all hardware – can be high, but most schools are already equipped with hardware such as computers and interactive whiteboards. Digital technology approaches often require additional training and support for teachers which can be essential in ensuring the technology is properly used and learning gains are made. Expenditure is estimated at £300 per pupil for new equipment and technical support and a further £500 per class (£20 per pupil) for professional development and support. Costs are therefore estimated as moderate.
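For illustration only, the arithmetic behind the per-pupil estimate can be laid out as below; the class size of 25 is an assumption implied by the £500-per-class and £20-per-pupil figures rather than a value stated in this appendix.

```python
# Sketch of the cost arithmetic above; the class size of 25 is an assumption
# inferred from £500 per class ≈ £20 per pupil, not a figure from the appendix.
equipment_per_pupil = 300            # £ for new equipment and technical support
cpd_per_class = 500                  # £ for professional development and support
assumed_class_size = 25              # assumed: 500 / 25 = 20

cpd_per_pupil = cpd_per_class / assumed_class_size
total_per_pupil = equipment_per_pupil + cpd_per_pupil
print(f"CPD per pupil: £{cpd_per_pupil:.0f}")       # £20
print(f"Total per pupil: £{total_per_pupil:.0f}")   # £320
```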
References
- A meta-analysis of the impact of technology on learning effectiveness of elementary students. Computers & Education, 105, 14-30 (2017).
- A meta-analytic study concerning the effect of computer-based teaching on academic success in Turkey. Educational Sciences: Theory & Practice, 15(5), 1-16 (2015).
- A Meta-Analysis of the Effectiveness of Computer Assisted Instruction in Science Education. Journal of Research on Technology in Education, 42(2), 173-188 (2000).
- Computer-assisted instruction in support of beginning reading instruction: A review. Review of Educational Research, 72(1), 101-130 (2002).
- A Meta-Analysis on the Effectiveness of Computer-Assisted Instruction: Turkey Sample. Kuram ve Uygulamada Egitim Bilimleri, 8, 497-505 (2010).
- How features of educational technology applications affect student reading outcomes: A meta-analysis. Educational Research Review, 7(3), 198-215 (2012).
- Educational Research Review, 9, 88-113 (2013).
- Digital games, design, and learning: A systematic review and meta-analysis. Review of Educational Research, 86(1), 79-122 (2016).
- The Impact of Technology: Value-added classroom practice. Final report. Coventry: Becta (2010).
- Simulations for STEM Learning: Systematic Review and Meta-Analysis. Menlo Park, CA: SRI International (2014).
- Does ICT Improve Learning and Teaching in Schools? British Educational Research Association (2003).
- The Impact of Digital Technology on Learning: A Summary for the Education Endowment Foundation. EEF: London (2012).
- Arlington, VA: SRI International (2003).
- Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42-78 (2016).
- The Effects of Computer-Assisted Instruction in Reading: A Meta-Analysis. Doctoral dissertation, University of Minnesota (2015).
- A Meta-analysis of the Effects of Computer Technology on School Students' Mathematics Learning. Educational Psychology Review, 22(3), 215-243 (2010).
- Effects of Computer-Assisted Instruction on Students' Achievement in Taiwan: A Meta-Analysis. Computers & Education, 48(2), 216-233 (2007).
- Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521 (2001).
- ABRA Online Reading Support Evaluation Report. EEF: London (2016).
- US Department of Education (2009).
- Journal of Literacy Research, 40(1), 6-58 (2008).
- EEF: London (2016).
- Word Processing Programs and Weaker Writers/Readers: A Meta-Analysis of Research Findings. Reading and Writing, 25, 641-678 (2012).
- (2007).
- University of Illinois/North Central Regional Educational Laboratory (2005).
- Journal of Educational Computing Research, 36(1), 1-14 (2007).
- Available from ProQuest Dissertations & Theses Global (2006).
- Computers & Education, 53(3), 913-928 (2009).
- Units of Sound evaluation report. Education Endowment Foundation: London (2015).
- Journal of Educational Psychology, 105(4), 970-987 (2013).
- Journal of Child Psychology and Psychiatry, 52(3), 224-235 (2011).
- Review of Educational Research, 81, 4-28 (2011).
- Effects of mobile devices on K–12 students' achievement: A meta-analysis. Journal of Computer Assisted Learning (Early View) (2017).
- The effects of computer algebra systems on students' achievement in mathematics (Order No. 3321336). Available from ProQuest Dissertations & Theses Global (2008).
- Journal of Research in Reading, 25, 129-143 (2002).
- EPPI-Centre, Social Science Research Unit, Institute of Education (2003).
- http://www.citeulike.org/group/572/article/364048 (2002).
- A meta-analysis of the effectiveness of teaching and learning with technology on student outcomes. Learning Point Associates (2003).
- A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105(2), 249-265 (2013).
- Learning in one-to-one laptop environments: A meta-analysis and research synthesis. Review of Educational Research, 86(4), 1052-1084 (2016).
Summary of effects
| Meta-analyses | Effect size | FSM effect size | Outcome detail |
| --- | --- | --- | --- |
| Chauhan, S. (2017) | | | |
| Batdı, V. (2015) | | | |
| Bayraktar, S. (2000) | | | |
| Blok, H., Oostdam, R., Otter, M. E., & Overmaat, M. (2002) | | | |
| Camnalbur & Erdogan (2010) | | | |
| Cheung, A. C., & Slavin, R. E. (2012) | | | |
| Cheung, A. C., & Slavin, R. E. (2013) | | | |
| Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016) | | | |
| D’Angelo, C., Rutstein, D., Harris, C., Bernard, R., Borokhovski, E., Haertel, G. (2014) | | | |
| Kulik, J. A., & Fletcher, J. D. (2016) | | | |
| Kunkel, A. K. (2015) | | | |
| Li & Ma (2010) | | | |
| Liao, Y. C. (2007) | | | |
| Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001) | | | |
| Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009) | | | |
| Moran, J., Ferdig, R. E., Pearson, P. D., Wardrop, J., & Blomeyer, R. L. (2008) | | | |
| Morphy, P. & Graham, S. (2012) | | | |
| Onuoha, C. O. (2007) | | | |
| Pearson, D. P., Ferdig, R. E., Blomeyer, R. L. & Moran, J. (2005) | | | |
| Rosen, Y., & Salomon, G. (2007) | | | |
| Sandy-Hanson, A. (2006) | | | |
| Seo, Y. J., & Bryant, D. P. (2009) | | | |
| Steenbergen-Hu, S. & Cooper, H. (2013) | | | |
| Strong, G. K., Torgerson, C. J., Torgerson, D., & Hulme, C. (2011) | | | |
| Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011) | | | |
| Tingir, S., Cavlazoglu, B., Caliskan, O., Koklu, O., & Intepe-Tingir, S. (2017) | | | |
| Tokpah, C. L. (2008) | | | |
| Torgerson, C. J. & Elbourne, D. (2002) | | | |
| Torgerson, C. & Zhu, D. (2003) | | | |
| Waxman, H. C., Lin, M. F., & Michko, G. (2003) | | | |
| Wouters, P., Van Nimwegen, C., Van Oostendorp, H., & Van Der Spek, E. D. (2013) | | | |
| Zheng, B., Warschauer, M., Lin, C. H., & Chang, C. (2016) | | | |
| Single Studies | | | |
| McNally, S., Ruiz-Valenzuela, J., Rolfe, H. (2016) | | | |
| Motteram, G., Choudry, S., Kalambouka, A., Hutcheson, G., Barton, A. (2016) | | | |
| Sheard, M., Chambers, B., Elliott, L. (2015) | 0.29 | | |
The right-hand column provides detail on the specific outcome measures or, if in brackets, details of the intervention or control group.
Meta-analyses abstracts
The existing studies suggest that if technology is interwoven comprehensively into pedagogy, it can act as a powerful tool for effective learning of the elementary students. This study conducted the meta-analysis by integrating the quantitative findings of 122 peer-reviewed academic papers that measured the impact of technology on learning effectiveness of elementary students. The results confirmed that the technology has a medium effect on learning effectiveness of elementary students. Further, this study analysed the effect sizes of moderating variables such as domain subject, application type, intervention duration, and learning environment. Finally, the impact of technology at different levels of moderating variables has been discussed and the implications for theory and practice are provided.
This research aims to investigate the effect of computer-based teaching (CBT) on students’ academic success. The research used a meta-analytic method to reach a general conclusion by statistically calculating the results of a number of independent studies. In total, 78 studies (62 master’s theses, 4 PhD theses, and 12 articles) concerning this issue were researched based on the literature review of the articles and theses which involved pre-test and post-test control groups and were conducted in Turkey between 2006 and 2014. The CMA and MetaWin statistical programs were used to calculate the effect sizes and variations for comparing the groups with regard to each study in the context of the meta-analysis. The effect size for the 78 studies was calculated as ES=1.13 based on analysis using the random effects model. This value is large, positive, and significant. Aside from this, the mean effect sizes of the CBT were large with regard to independent variables such as grade level, subject area, type of course, implementation period, and publication year. As a result, it can be seen that the effect of CBT on academic success was large and that CBT was more successful than traditional teaching methods.
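The abstract above pools effect sizes under a random-effects model (via the CMA and MetaWin packages). As a generic illustration of that idea, and not the authors' actual procedure, the sketch below implements one common random-effects estimator (DerSimonian–Laird) with made-up placeholder inputs.

```python
# Illustrative sketch of a DerSimonian-Laird random-effects pooled effect size.
# The effect sizes and variances below are placeholders, not data from the study.
import numpy as np

def random_effects_pool(effects, variances):
    """Return the pooled effect size and its standard error."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    w = 1.0 / variances                            # fixed-effect (inverse-variance) weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)    # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance estimate

    w_star = 1.0 / (variances + tau2)              # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

pooled, se = random_effects_pool([1.2, 0.9, 1.3, 1.0], [0.04, 0.06, 0.05, 0.03])
print(f"Pooled ES = {pooled:.2f} (SE = {se:.2f})")
```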
This meta-analysis investigated how effective computer-assisted instruction (CAI) is on student achievement in secondary and college science education when compared to traditional instruction. An overall effect size of 0.273 was calculated from 42 studies yielding 108 effect sizes, suggesting that a typical student moved from the 50th percentile to the 62nd percentile in science when CAI was used. The results of the study also indicated that some study characteristics such as student-to-computer ratio, CAI mode, and duration of treatment were significantly related to the effectiveness of CAI. (Keywords: academic achievement, computer-assisted instruction, instructional effectiveness, meta-analysis, science education.)
How effective are computer-assisted instruction (CAI) programs in supporting beginning readers? This article reviews 42 studies published from 1990 onward, comprising a total of 75 experimental comparisons. The corrected overall effect size estimate was d = 0.19 (± 0.06). Effect sizes were found to depend on two study characteristics: the effect size at the time of pre-testing and the language of instruction (English or other). These two variables accounted for 61 percent of the variability in effect sizes. Although an effect size of d = 0.2 shows little promise, caution is needed because of the poor quality of many studies.
Studies of the effectiveness of computer-assisted instruction have been growing in number in Turkey. In this research, quantitative studies conducted between 1998 and 2007 that compared computer-assisted instruction with traditional teaching methods were synthesised through meta-analysis. Seventy-eight studies with eligible data, identified from 422 master’s and doctoral theses and 124 articles, were combined using meta-analytic methods and a coding protocol. The effect size of computer-assisted instruction on academic achievement was calculated as 1.048, which is classified as large both on Thalheimer and Cook’s scale and by Cohen, Welkowitz and Ewen (2000). Recommendations were made based on the results of the study.
The purpose of this review is to learn from rigorous evaluations of alternative technology applications how features of using technology programs and characteristics of their evaluations affect reading outcomes for students in grades K-12. The review applies consistent inclusion standards to focus on studies that met high methodological standards. A total of 84 qualifying studies based on over 60,000 K-12 participants were included in the final analysis. Consistent with previous reviews of similar focus, the findings suggest that educational technology applications generally produced a positive, though small, effect (ES = +0.16) in comparison to traditional methods. There were differential impacts of various types of educational technology applications. In particular, the types of supplementary computer-assisted instruction programs that have dominated the classroom use of educational technology in the past few decades were not found to produce educationally meaningful effects in reading for K-12 students (ES = +0.11), and the higher the methodological quality of the studies, the lower the effect size. In contrast, innovative technology applications and integrated literacy interventions with the support of extensive professional development showed more promising evidence. Although many more rigorous, especially randomized, studies of newer applications are needed, what unifies the methods found in this review to have great promise is the use of technologies in close connection with teachers’ efforts.
The present review examines research on the effects of educational technology applications on mathematics achievement in K-12 classrooms. Unlike previous reviews, this review applies consistent inclusion standards to focus on studies that met high methodological standards. In addition, methodological and substantive features of the studies are investigated to examine the relationship between educational technology applications and study features. A total of 74 qualified studies were included in our final analysis with a total sample size of 56,886 K-12 students: 45 elementary studies (N=31,555) and 29 secondary studies (N=25,331). Consistent with the more recent reviews, the findings suggest that educational technology applications generally produced a positive, though modest, effect (ES=+0.15) in comparison to traditional methods. However, the effects may vary by educational technology type. Among the three types of educational technology applications, supplemental CAI had the largest effect with an effect size of +0.18. The other two interventions, computer-managed learning and comprehensive programs, had a much smaller effect size, +0.08 and +0.07, respectively. Differential impacts by various study and methodological features are also discussed.
In this meta-analysis, we systematically reviewed research on digital games and learning for K–16 students. We synthesized comparisons of game versus nongame conditions (i.e., media comparisons) and comparisons of augmented games versus standard game designs (i.e., value-added comparisons). We used random-effects meta-regression models with robust variance estimates to summarize overall effects and explore potential moderator effects. Results from media comparisons indicated that digital games significantly enhanced student learning relative to nongame conditions ( g = 0.33, 95% confidence interval [0.19, 0.48], k = 57, n = 209). Results from value-added comparisons indicated significant learning benefits associated with augmented game designs (g = 0.34, 95% confidence interval [0.17, 0.51], k = 20, n = 40). Moderator analyses demonstrated that effects varied across various game mechanics characteristics, visual and narrative characteristics, and research quality characteristics. Taken together, the results highlight the affordances of games for learning as well as the key role of design beyond medium.
This report presents an overview of the process and initial findings of a systematic review and meta-analysis of the literature on computer simulations for K–12 science, technology, engineering, and mathematics (STEM) learning topics. Both quantitative and qualitative research studies on the effects of simulation in STEM were reviewed. Studies that reported effect size measures or the data to calculate effect sizes were included in the meta-analysis. Important moderating factors related to simulation design, assessment, implementation, and study quality were coded, categorized, and analyzed for all the articles.
This review describes a meta-analysis of findings from 50 controlled evaluations of intelligent computer tutoring systems. The median effect of intelligent tutoring in the 50 evaluations was to raise test scores 0.66 standard deviations over conventional levels, or from the 50th to the 75th percentile. However, the amount of improvement found in an evaluation depended to a great extent on whether improvement was measured on locally developed or standardized tests, suggesting that alignment of test and instructional objectives is a critical determinant of evaluation results. The review also describes findings from two groups of evaluations that did not meet all of the selection requirements for the meta-analysis: six evaluations with nonconventional control groups and four with flawed implementations of intelligent tutoring systems. Intelligent tutoring effects in these evaluations were small, suggesting that evaluation results are also affected by the nature of control treatments and the adequacy of program implementations.
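Several abstracts translate an effect size into a percentile shift (here, 0.66 standard deviations is reported as a move from the 50th to the 75th percentile). Assuming normally distributed outcomes, that conversion is simply the standard normal CDF evaluated at the effect size, as the sketch below shows.

```python
# Sketch: converting a standardised effect size into a percentile shift,
# assuming normally distributed outcomes.
from scipy.stats import norm

def percentile_after_shift(effect_size: float) -> float:
    """Percentile reached by an average student after a shift of `effect_size` SDs."""
    return norm.cdf(effect_size) * 100

print(round(percentile_after_shift(0.66)))   # ~75, matching the 50th-to-75th percentile claim above
print(round(percentile_after_shift(0.273)))  # ~61, close to the 62nd percentile quoted earlier for CAI in science
```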
The purpose of this study was to investigate the effectiveness of computer assisted instruction (CAI) to improve the reading outcomes of students in preschool through high school. A total of 61 studies met criteria for this review, and 101 independent effect sizes were extracted. Results indicated that the mean effects for students receiving reading CAI were small, positive, and statistically significant when compared to control groups receiving no treatment or non-reading CAI. Categorical moderator analyses and meta-regression were conducted to explore the variation in effects. Results of an analysis of research quality indicated that, on average, about half of quality indicators were met. The results of this meta-analysis show that CAI in reading can effectively enhance the reading outcomes of students in preschool through high school. Future, high-quality research should be conducted to identify effective programs and establish best practice in the instructional design of CAI to enhance the reading skills of all students.
This study examines the impact of computer technology (CT) on mathematics education in K-12 classrooms through a systematic review of existing literature. A meta-analysis of 85 independent effect sizes extracted from 46 primary studies involving a total of 36,793 learners indicated statistically significant positive effects of CT on mathematics achievement. In addition, several characteristics of primary studies were identified as having effects. For example, CT showed advantage in promoting mathematics achievement of elementary over secondary school students. As well, CT showed larger effects on the mathematics achievement of special need students than that of general education students, the positive effect of CT was greater when combined with a constructivist approach to teaching than with a traditional approach to teaching, and studies that used non-standardized tests as measures of mathematics achievement reported larger effects of CT than studies that used standardized tests. The weighted least squares univariate and multiple regression analyses indicated that mathematics achievement could be accounted for by a few technology, implementation and learner characteristics in the studies.
A meta-analysis was performed to synthesize existing research comparing the effects of computer-assisted instruction (CAI) versus traditional instruction (TI) on students’ achievement in Taiwan. Fifty-two studies were located from our sources, and their quantitative data was transformed into effect size (ES). The overall grand mean of the study-weighted ES for all 52 studies was 0.55. The results suggest that CAI is more effective than TI in Taiwan. In addition, two of the seventeen variables selected for this study (i.e., statistical power, and comparison group) had a statistically significant impact on the mean ES. The results from this study suggest that the effects of CAI in instruction are positive over TI. The results also shed light on the debate of learning from media between Clark and Kozma.
This study quantitatively synthesized the empirical research on the effects of social context (i.e., small group versus individual learning) when students learn using computer technology. In total, 486 independent findings were extracted from 122 studies involving 11,317 learners. The results indicate that, on average, small group learning had significantly more positive effects than individual learning on student individual achievement (mean ES = +0.15), group task performance (mean ES = +0.31), and several process and affective outcomes. However, findings on both individual achievement and group task performance were significantly heterogeneous. Through weighted least squares univariate and multiple regression analyses, we found that variability in each of the two cognitive outcomes could be accounted for by a few technology, task, grouping, and learner characteristics in the studies.
A systematic search of the research literature from 1996 through July 2008 identified more than a thousand empirical studies of online learning. Analysts screened these studies to find those that (a) contrasted an online to a face-to-face condition, (b) measured student learning outcomes, (c) used a rigorous research design, and (d) provided adequate information to calculate an effect size. As a result of this screening, 50 independent effects were identified that could be subjected to meta-analysis. The meta-analysis found that, on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction. The difference between student outcomes for online and face-to-face classes—measured as the difference between treatment and control means, divided by the pooled standard deviation—was larger in those studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face. Analysts noted that these blended conditions often included additional learning time and instructional elements not received by students in control conditions. This finding suggests that the positive effects associated with blended learning should not be attributed to the media, per se. An unexpected finding was the small number of rigorous published studies contrasting online and face-to-face learning conditions for K–12 students. In light of this small corpus, caution is required in generalizing to the K–12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).
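The abstract above defines its effect size as the difference between treatment and control means divided by the pooled standard deviation, i.e. a standardised mean difference such as Cohen's d. A minimal sketch of that calculation, using placeholder group statistics rather than data from the report:

```python
# Sketch: standardised mean difference (Cohen's d) using a pooled standard deviation.
# The group means, SDs and sample sizes are placeholders, not data from the report.
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Treatment-control mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Example: treatment mean 52, control mean 48, both SDs 10, 40 pupils per group -> d = 0.4
print(round(cohens_d(52.0, 10.0, 40, 48.0, 10.0, 40), 2))
```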
The results of a meta-analysis of 20 research articles containing 89 effect sizes related to the use of digital tools and learning environments to enhance literacy acquisition for middle school students demonstrate that technology can have a positive effect on reading comprehension (weighted effect size of 0.489). Very little research has focused on the effect of technology on other important aspects of reading, such as metacognitive, affective, and dispositional outcomes. The evidence permits the conclusion that there is reason to be optimistic about using technology in middle-school literacy programs, but there is even greater reason to encourage the research community to redouble its efforts to investigate and understand the impact of digital learning environments on students in this age range and to broaden the scope of the interventions and outcomes studied.
Since its advent word processing has become a common writing tool, providing potential advantages over writing by hand. Word processors permit easy revision, produce legible characters quickly, and may provide additional supports (e.g., spellcheckers, speech recognition). Such advantages should remedy common difficulties among weaker writers/readers in grades 1–12. Based on 27 studies with weaker writers, 20 of which were not considered in prior reviews, findings from this meta-analysis support this proposition. From 77 independent effects, the following average effects were greater than zero: writing quality (d = 0.52), length (d = 0.48), development/organization of text (d = 0.66), mechanical correctness (d = 0.61), motivation to write (d = 1.42), and preferring word processing over writing by hand (d = 0.64). Especially powerful writing quality effects were associated with word processing programs that provided text quality feedback or prompted planning, drafting, or revising (d = 1.46), although this observation was based on a limited number of studies (n = 3).
The purpose of this research study was to determine the overall effectiveness of computer-based laboratory compared with the traditional hands-on laboratory for improving students’ science academic achievement and attitudes towards science subjects at the college and pre-college levels of education in the United States. Meta-analysis was used to synthesize the findings from 38 primary research studies conducted and/or reported in the United States between 1996 and 2006 that compared the effectiveness of computer-based laboratory with the traditional hands-on laboratory on measures related to science academic achievements and attitudes towards science subjects. The 38 primary research studies, with a total of 3,824 subjects, generated a total of 67 weighted individual effect sizes that were used in this meta-analysis. The study found that computer-based laboratory had small positive effect sizes over the traditional hands-on laboratory (ES = +0.26) on measures related to students’ science academic achievements and attitudes towards science subjects (ES = +0.22). It was also found that computer-based laboratory produced more significant effects on physical science subjects compared to biological sciences (ES = +0.34, +0.17).
This article reports the results of a meta-analysis of 20 research articles containing 89 effect sizes related to the use of digital tools and learning environments to enhance literacy acquisition. Results (weighted effect size of 0.489) demonstrate that technology can have a positive effect on reading comprehension, but little research has focused on the effect of technology on metacognitive, affective, and dispositional outcomes. We conclude that although there is reason to be optimistic about using technology in middle-school literacy programs, there is also reason to encourage the research community to redouble its emphasis on digital learning environments for students in this age range and to broaden the scope of the interventions and outcomes they study.
Different learning environments provide different learning experiences and ought to serve different achievement goals. We hypothesized that constructivist learning environments lead to the attainment of achievements that are consistent with the experiences that such settings provide and that more traditional settings lead to the attainments of other kinds of achievement in accordance with the experiences they provide. A meta-analytic study was carried out on 32 methodologically-appropriate experiments in which these 2 settings were compared. Results supported 1 of our hypotheses showing that overall constructivist learning environments are more effective than traditional ones (ES = .460) and that their superiority increases when tested against constructivist-appropriate measures (ES = .902). However, contrary to expectations, traditional settings did not differ from constructivist ones when traditionally-appropriate measures were used. A number of possible interpretations are offered among them the possibility that traditional settings have come to incorporate some constructivist elements. This possibility is supported by other findings of ours such as smaller effect sizes for more recent studies and for longer lasting periods of instruction.
Meta-analytical research has shown that computer technology can play a significant role in increasing positive learning outcomes of students. Research on this topic has resulted in conflicting findings on academic achievement and other measures of student outcomes. The current meta-analysis sought to assess the level of differences that existed between students being instructed with computer technology versus the academic achievement outcomes of students instructed with traditional methods. Based on specified selection criteria, 31 studies were collected and analyzed for homogeneity. From this original group, 23 studies were systematically reviewed under standard meta-analytical procedures. According to Cohen's (1988) classification of effect sizes in the field of education, the obtained weighted mean effect size of .24 shows a medium difference. This finding indicates that students who are taught with technology outperform their peers who are taught with traditional methods of instruction. In addition, five secondary analyses were conducted on higher-order thinking skills, ES = .82, motivation, ES = .17, retention-attendance rates, ES = .16, physical outcomes, no data were found, and social skills, ES = .21. Eleven ancillary analyses were then conducted to assess study findings across various dimensions including duration of study, type of technology used, and grade-level analyzed.
The purpose of this study was to conduct a meta-study of computer-assisted instruction (CAI) studies in mathematics for students with learning disabilities (LD) focusing on examining the effects of CAI on the mathematics performance of students with LD. This study examined a total of 11 mathematics CAI studies, which met the study selection criterion, for students with LD at the elementary and secondary levels and analyzed them in terms of their comparability and effect sizes. Overall, this study found that those CAI studies did not show conclusive effectiveness with relatively large effect sizes. The methodological problems in the CAI studies limit an accurate validation of the CAI’s effectiveness. Implications for future mathematics CAI studies were discussed.
In this study, we meta-analyzed empirical research of the effectiveness of intelligent tutoring systems (ITS) on K–12 students’ mathematical learning. A total of 26 reports containing 34 independent samples met study inclusion criteria. The reports appeared between 1997 and 2010. The majority of included studies compared the effectiveness of ITS with that of regular classroom instruction. A few studies compared ITS with human tutoring or homework practices. Among the major findings are (a) overall, ITS had no negative and perhaps a small positive effect on K–12 students’ mathematical learning, as indicated by the average effect sizes ranging from g = 0.01 to g = 0.09, and (b) on the basis of the few studies that compared ITS with homework or human tutoring, the effectiveness of ITS appeared to be small to modest. Moderator analyses revealed 2 findings of practical importance. First, the effects of ITS appeared to be greater when the interventions lasted for less than a school year than when they lasted for 1 school year or longer. Second, the effectiveness of ITS for helping students drawn from the general population was greater than for helping low achievers. This finding draws attention to the issue of whether computerized learning might contribute to the achievement gap between students with different achievement levels and aptitudes.
Fast ForWord is a suite of computer-based language intervention programs designed to improve children’s reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological representations. Methods: A systematic review was designed, undertaken and reported using items from the PRISMA statement. A literature search was conducted using the terms ‘Fast ForWord’, ‘Fast For Word’ and ‘Fastforword’ with no restriction on dates of publication. Following screening of (a) titles and abstracts and (b) full papers, using pre-established inclusion and exclusion criteria, six papers were identified as meeting the criteria for inclusion (randomised controlled trial (RCT) or matched group comparison studies with baseline equivalence published in refereed journals). Data extraction and analyses were carried out on reading and language outcome measures comparing the Fast ForWord intervention groups to both active and untreated control groups. Results: Meta-analyses indicated that there was no significant effect of Fast ForWord on any outcome measure in comparison to active or untreated control groups. Conclusions: There is no evidence from the analysis carried out that Fast ForWord is effective as a treatment for children’s oral language or reading difficulties.
This research study employs a second-order meta-analysis procedure to summarize 40 years of research activity addressing the question, does computer technology use affect student achievement in formal face-to-face classrooms as compared to classrooms that do not use technology? A study-level meta-analytic validation was also conducted for purposes of comparison. An extensive literature search and a systematic review process resulted in the inclusion of 25 meta-analyses with minimal overlap in primary literature, encompassing 1,055 primary studies. The random effects mean effect size of 0.35 was significantly different from zero. The distribution was heterogeneous under the fixed effects model. To validate the second-order meta-analysis, 574 individual independent effect sizes were extracted from 13 out of the 25 meta-analyses. The mean effect size was 0.33 under the random effects model, and the distribution was heterogeneous. Insights about the state of the field, implications for technology use, and prospects for future research are discussed.
In this meta-analytic study, we investigated the effects of mobile devices on student achievement in science, mathematics and reading in grades K–12. Based on our inclusion criteria, we searched the ERIC and PsycINFO databases and identified 14 peer-reviewed research articles published between 2010 and 2014. We identified the device type, subject area, intervention language, grade level, study design and implementer (i.e., of the intervention) as potential moderator variables that may influence student achievement in the targeted content areas. We followed a three-level meta-analytic procedure to estimate the overall effect of these variables and explain the variation in outcomes. The results suggest that use of mobile devices in teaching yielded higher achievement scores than traditional teaching in all subject areas. With regard to the analysis of moderator variables, the results suggest that using mobile devices in reading is significantly more effective than doing so in mathematics.
This meta-analysis sought to investigate the overall effectiveness of computer algebra systems (CAS) instruction, in comparison to non-CAS instruction, on students’ achievement in mathematics at pre-college and post-secondary institutions. The study utilized meta-analysis on 31 primary studies (102 effect sizes, N= 7,342) that were retrieved from online research databases and search engines, and explored the extent to which the overall effectiveness of CAS was moderated by various study characteristics. The overall effect size, 0.38, [0.35 for pre-college] was significantly different from zero. The mean effect size suggested that a typical student at the 50th percentile of a group taught using non-CAS instruction could experience an increase in performance to the 65th percentile, if that student was taught using CAS instruction. The fail-safe N, Nfs, hinted that 11,749 additional studies with nonsignificant results would be needed to reverse the current finding. Three independent variables (design type, evaluation method, and time) were found to significantly moderate the effect of CAS. The current results do not predict future trends on the effectiveness of CAS; however, these findings suggest that CAS have the potential to improve learning in the classroom. Regardless of how CAS were used, the current study found that they contributed to a significant increase in students’ performance.
Recent Government policy in England and Wales on Information and Communication Technology (ICT) in schools is heavily influenced by a series of non-randomised controlled studies. The evidence from these evaluations is equivocal with respect to the effect of ICT on literacy. In order to ascertain whether there is any effect of ICT on one small area of literacy, spelling, a systematic review of all randomised controlled trials (RCTs) was undertaken. Relevant electronic databases (including BEI, ERIC, Web of Science, PsycINFO, The Cochrane Library) were searched. Seven relevant RCTs were identified and included in the review. When six of the seven studies were pooled in a meta-analysis there was an effect, not statistically significant, in favour of computer interventions (effect size = 0.37, 95% confidence interval = −0.02 to 0.77, p = 0.06). Sensitivity and sub-group analyses of the results did not materially alter findings. This review suggests that the teaching of spelling by using computer software may be as effective as conventional teaching of spelling, although the possibility of computer-taught spelling being inferior or superior cannot be confidently excluded due to the relatively small sample sizes of the identified studies. Ideally, large pragmatic randomised controlled trials need to be undertaken.
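As a quick plausibility check on the figures in that abstract (a sketch, not part of the original review), the standard error implied by the confidence interval reproduces a p-value of about 0.06:

```python
# Sketch: back-calculating the p-value implied by an effect size of 0.37 with a
# 95% confidence interval of roughly -0.02 to 0.77 (figures from the abstract above).
from scipy.stats import norm

effect = 0.37
ci_low, ci_high = -0.02, 0.77

se = (ci_high - ci_low) / (2 * 1.96)   # standard error implied by the CI width
z = effect / se                        # z statistic
p = 2 * (1 - norm.cdf(z))              # two-sided p-value
print(f"SE ≈ {se:.2f}, z ≈ {z:.2f}, p ≈ {p:.3f}")  # ≈ 0.066, consistent with the reported p = 0.06
```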
The overall aim of the two-year project is to determine the impact of ICT on literacy learning in English for 5-16 year olds. The main aim of this in-depth sub-review is to investigate whether or not ICT is effective in improving young people’s literacy learning in English. Subsidiary aims are to assess the effectiveness of ICT on different literacy outcomes and, within those outcomes, to assess whether effectiveness varies according to different interventions. For this review, studies were only included if they had randomly allocated pupils to an ICT or no ICT treatment for the teaching of literacy. Both individually randomised trials and cluster randomised trials were included. We identified 12 relatively small RCTs for the in-depth review. Some were so small that they could only really be considered to be pilot studies. This group of tiny trials represents the sum of the most rigorous effectiveness evidence available to date upon which to justify or refute the policy of spending millions of pounds on ICT equipment, software and teacher training. Given that the trials showed little evidence of benefit, large, rigorously designed randomised trials are urgently required.
To estimate the effects of teaching and learning with technology on students’ cognitive, affective, and behavioral outcomes of learning, 282 effect sizes were calculated using statistical data from 42 studies that contained a combined sample of approximately 7,000 students. The mean of the study-weighted effect sizes averaging across all outcomes was .410 (p < .001), with a 95-percent confidence interval (CI) of .175 to .644. This result indicates that teaching and learning with technology has a small, positive, significant (p < .001) effect on student outcomes when compared to traditional instruction. The mean study-weighted effect size for the 29 studies containing cognitive outcomes was .448, and the mean study-weighted effect size for the 10 comparisons that focused on student affective outcomes was .464. On the other hand, the mean study-weighted effect size for the 3 studies that contained behavioral outcomes was -.091, indicating that technology had a small, negative effect on students’ behavioral outcomes. The overall study-weighted effects were constant across the categories of study characteristics, quality of study indicators, technology characteristics, and instructional/teaching characteristics.
It is assumed that serious games influence learning in 2 ways, by changing cognitive processes and by affecting motivation. However, until now research has shown little evidence for these assumptions. We used meta-analytic techniques to investigate whether serious games are more effective in terms of learning and more motivating than conventional instruction methods (learning: k = 77, N = 5,547; motivation: k = 31, N = 2,216). Consistent with our hypotheses, serious games were found to be more effective in terms of learning (d = 0.29, p = .01) and retention (d = 0.36, p = .01), but they were not more motivating (d = 0.26, p = .05) than conventional instruction methods. Additional moderator analyses on the learning effects revealed that learners in serious games learned more, relative to those taught with conventional instruction methods, when the game was supplemented with other instruction methods, when multiple training sessions were involved, and when players worked in groups.
Over the past decade, the number of one-to-one laptop programs in schools has steadily increased. Despite the growth of such programs, there is little consensus about whether they contribute to improved educational outcomes. This article reviews 65 journal articles and 31 doctoral dissertations published from January 2001 to May 2015 to examine the effect of one-to-one laptop programs on teaching and learning in K–12 schools. A meta-analysis of 10 studies examines the impact of laptop programs on students’ academic achievement, finding significantly positive average effect sizes in English, writing, mathematics, and science. In addition, the article summarizes the impact of laptop programs on more general teaching and learning processes and perceptions as reported in these studies, again noting generally positive findings.