The Effect of a Metacognitive Intervention on College Students’ Reading Performance and Metacognitive Skills

The goal of the present study was to investigate the impact of a metacognitive intervention, with explicit training in monitoring and control, on college students' reading performance and metacognitive skills. The five-session intervention used an instructional design that combines reflective dialogue, modeling, and group practice. Effects were contrasted with an active control group that received strategy training in reading comprehension. Results showed that the experimental group gained in metacognitive skills and reading performance, while the control group showed no change from pretest to posttest. Finally, we found that within the experimental group, high achievers (as measured by GPA) showed greater gains in reading than low achievers.


Introduction
In the constantly changing and demanding context of higher education, students need more than content knowledge to be successful learners. To achieve in college, they need skills in reflection and self-assessment so that they can self-regulate their approaches to learning. Metacognition refers to thinking about and regulating one's own cognition. It is a construct that has been linked to effective learning and higher academic achievement, presumably because it enables students to evaluate and effectively control their own learning (Young & Fry, 2008; Schraw, 1994; Sperling, Howard, & Staley, 2004). There is evidence that college students with higher metacognitive knowledge and skills are more likely to perform well on measures of academic performance than peers with low metacognition (Steinberg, Bohning, & Chowning, 1991; Maki, 1998a; Commander & Stanwyck, 1997). Although there is evidence that metacognitive skills, if taught, can boost academic performance, most research in this area has been conducted with school-aged children (Hennessey, 1999; Kramarski & Mevarech, 2003). Moreover, most of the cognitive/metacognitive interventions reported in the literature provide training in specific strategies such as concept mapping, summarizing, and/or self-questioning, rather than in monitoring and control, the two main operative elements of metacognition. Furthermore, one shortcoming of the interventions reported in the literature is their unrealistically long training periods (a year or a semester). This puts serious time constraints on professors who want to implement metacognitive interventions to help their students exert more control over their learning, and thus be more active while learning.
Thus, the present study set out to investigate the impact of a metacognitive intervention on college students' metacognitive awareness and reading performance. It also examined individual differences between high and low achievers after the training. The study adopts the prevailing model of metacognition (Brown, 1987; Jacobs & Paris, 1987; Schraw & Dennison, 1994; Schraw, 1997; Otani & Widner, 2005) as a combination of knowledge and regulation processes.
Knowledge of cognition, or metacognitive knowledge, refers to the knowledge individuals have about themselves as cognitive beings, their capabilities, and their limitations. This knowledge is of three types: declarative, procedural, and conditional (Brown, 1987; Jacobs & Paris, 1987; Kuhn, 2000; Schraw, Crippen, & Hartley, 2006; Schraw & Moshman, 1995). Declarative knowledge is statable knowledge about ourselves, the factors impacting our performance, and cognition in general; for example, one's knowledge that one is more skilled in reading comprehension than in listening comprehension, that one works better under pressure, or that one learns better while listening to music. Procedural knowledge refers to knowledge about how to use strategies effectively in order to facilitate learning. Conditional knowledge is knowledge about when and why to use strategies, or the conditions and contexts for using strategies appropriately (Schraw & Dennison, 1994). As Schraw (1998) puts it, "conditional knowledge involves knowing when, where, and why to use declarative and procedural knowledge" (p. 114).
Regulation of cognition, on the other hand, refers to the active monitoring of cognitive processes as they occur and the use of regulatory strategies to optimize cognitive performance (Baker & Brown, 1984; Schraw & Moshman, 1995). Regulation of cognition involves a number of processes or skills such as planning, information management, monitoring, debugging, and evaluation, all aspects of control and monitoring skills.
While monitoring refers to one's online awareness of comprehension and task fulfillment, together with the ability to engage in periodic self-testing while learning, control refers to the conscious and non-conscious decisions one makes in reaction to the output of monitoring processes (Pressley & Ghatala, 1990; Schwartz & Perfect, 2004).
One of the most widely cited metacognitive interventions was carried out by Palincsar and Brown (1984), who taught meta-strategic skills to middle school students using a reciprocal teaching procedure. The instruction involved extensive modeling and practice in four strategies deemed ideal for comprehension fostering and comprehension monitoring: self-questioning, summarizing, making predictions about texts, and debugging. Students worked in groups, first with teacher guidance and later without a teacher. The training started with the teacher modeling each strategy and coaching/guiding students as they learned how to use it. Eventually, students were able to use the strategies on their own and took turns being the coach for their small groups. The researchers reported significant gains in self-questioning, summarizing, making predictions, and debugging over a 10- to 12-week period, gains that were maintained over time, but no impressive gains on standardized reading tests. They noted, though, that the success of the reciprocal teaching intervention could be attributed to the particular strategies trained, to the reciprocal teaching procedure, or to the combination of both. Similar findings were reported by Payne and Manning (1992), who trained 20 fourth-graders in comprehension monitoring strategies. The results indicated that metacognitive reading strategies can be taught to students and that students who received metacognitive skills training had better reading comprehension and greater knowledge about reading strategies. This study had a major impact on instructional designs that favor teaching subjects/skills together with their relevant metacognitive knowledge and skills.
Another metacognitive intervention in the area of reading was conducted by Cross and Paris (1988), who taught metacognition to 171 third- and fifth-graders using an experimental curriculum (Informed Strategies for Learning). The curriculum was designed to improve the students' metacognitive awareness and use of reading strategies. During the intervention, students were exposed to strategy training that included explicit, directed attention to declarative, procedural, and conditional knowledge about reading strategies. In both grades, students in the experimental group showed significant gains in metacognition and use of reading strategies compared to the control groups. More specifically, students in the experimental classes gained in evaluation of task difficulty, planning, and monitoring of progress.
In the context of physics, Koch (2001) trained 30 pre-university students in metacognitive self-assessment while reading physics texts and ranking their abilities and disabilities hierarchically. Results showed that the experimental group outperformed the control group in reading comprehension of physics texts.
Empirical support for the teachability of metacognition also comes from studies in computer science and mathematics. Delclos and Harrington (1991) reported findings suggesting that monitoring ability improves with training and practice. In their study, they examined fifth- and sixth-graders' ability to solve computer problems after assignment to one of three treatment groups. Groups received either problem-solving training, problem-solving and self-monitoring training, or no training. The monitored problem-solving group solved more complex problems than either of the other two groups, and took less time to do so.
In the area of mathematical problem-solving, Kramarski and Mevarech (2003) investigated the effects of four instructional methods on 384 students' mathematical reasoning and metacognitive knowledge. The instructional methods were (1) cooperative learning combined with metacognitive training (COOP+META), (2) individualized learning combined with metacognitive training (IND+META), (3) cooperative learning without metacognitive training (COOP), and (4) individualized learning without metacognitive training (IND). Reported results showed that the COOP+META group clearly outperformed the IND+META group, which in turn significantly outperformed both the COOP and the IND groups in graph interpretation and different aspects of mathematical explanation. Moreover, the META groups outperformed their non-META counterparts in graph construction and metacognitive knowledge. In a similar study, Mevarech and Amrany (2008) trained 31 high-school students in metacognition. While the experimental group (called IMPROVE) received explicit metacognitive guidance, the control group (N = 30) studied with no explicit teaching of metacognition. Results indicated that the IMPROVE group outperformed the control group in mathematics achievement and regulation of cognition, but not in knowledge of cognition.
In a more recent study, Khosa and Volet (2013) conducted a metacognitive intervention to help students engage in productive learning from each other while working on a clinical case-based group assignment. Using a quasi-experimental design (control data were taken from the previous cohort), students were trained in two metacognitive strategies, namely meaning-making interactions and high-level questioning. The intervention had a positive impact on the students' personal goals, perceived difficulty of the assignment, and evaluation of learning. Moreover, it resulted in the students spending more time on case-content discussion.
The studies summarized above provide solid evidence that metacognition can be taught to students, thereby impacting their academic performance and metacognitive skills. However, most of this evidence comes from primary and middle-school contexts, even though it is widely acknowledged that college students often lack metacognitive skills (Nietfeld, Cao, & Osborne, 2005; Wade & Reynolds, 1989). Moreover, most studies focus on training only one or two strategies (summarizing, concept mapping, self-questioning, etc.) and neglect comprehensive training in monitoring and control. Finally, most studies report long-term interventions that are difficult to incorporate into a college course.

The Current Study
The present study sets out to investigate the effect of a metacognitive intervention on university students' reading performance and metacognitive skills. The intervention focuses on the operative elements of metacognition, i.e., monitoring and control, and uses an instructional design that combines explanation, reflective dialogue (metacognitive dialogue), modeling, and group practice. Based on the goals of this study, the following research questions were formulated.
1) To what extent does a five-session metacognitive intervention impact college students' reading performance and metacognitive skills?
2) How do high and low achievers within the experimental group differentially benefit from the intervention?

Participants
Thirty students from the Faculty of Letters and Human Sciences-Rabat in Morocco volunteered to participate in the experiment reported here. They were a subset of 88 students who were administered a metacognitive assessment as a class activity and then volunteered for the experiment, which was to take place two weeks later. All were third-year students in the English department, which enabled us to use standard English-language instruments and avoid the limitations and reliability issues related to translating existing instruments. Participants were randomly assigned to an experimental or a control group. Four students from the control group did not attend the sessions, and their data were thus unavailable. The final sample (15 experimental and 11 control) consisted of 14 females and 12 males, whose ages ranged from 19 to 27 (M = 21.07; SD = 1.48).

Measures
The Metacognitive Awareness Inventory (MAI). The Metacognitive Awareness Inventory (Schraw & Dennison, 1994) is one of the most comprehensive surveys assessing metacognitive awareness in adult learners. The inventory consists of 52 statements allowing an in-depth assessment of metacognition. The MAI was selected because it provides a reliable assessment of metacognitive awareness among older students, has good psychometric properties, and adapts easily to the three-component model of metacognition tested in this study.
Its two component categories, Knowledge and Regulation of Cognition, can be further divided into eight sub-components, which allows computing scores for individual sub-components. While the Knowledge component comprises statements on declarative knowledge (knowledge about self and strategies), procedural knowledge (knowledge about strategy use), and conditional knowledge (why and when to use strategies), the Regulation component provides statements on planning (setting goals), information management (organization), monitoring (assessment of learning and strategy use), debugging (comprehension-error correction strategies), and evaluation (end-of-task analysis of performance and learning effectiveness). Statements from the inventory are rated on a 5-point Likert scale ranging from 1 (I never or almost never do this) to 5 (I always or almost always do this).
Reading Comprehension. A reading pretest and posttest were administered to the participants before and after the metacognitive training for the experimental group or the reading comprehension exercises for the control group. The reading comprehension measure was taken from Peterson's Master TOEFL Reading Skills (2007), a preparation resource for EFL students intending to take the TOEFL exam. Due to time limitations, the five passages were reduced to four in both the pretest and the posttest, each followed by 10 multiple-choice questions that probed understanding of vocabulary, main ideas, specific information, inferences, and connections between ideas.
Students' Cumulative GPA. Academic performance of the participants was measured by their cumulative GPA for the two years spent at University. The GPA is known in Morocco (following the French system) as the DEUG ("Diplôme d'études universitaires générales", which can be translated into "Diploma of General University Studies"). In this system, which is used from middle school through college, scores range from 0 to 20 with 10 being the average score and 12+ being the criterion for distinction at the end of the second year.
This mark is a cumulative average of the 16 modules the students had taken during their first two years. The "DEUG" GPA was used as a measure of academic performance rather than a one-time test, which would not reflect the students' real academic level and would not be reliable enough to categorize students as high and low achievers. Finally, a number of studies in the area of metacognition and self-regulated learning have used cumulative GPA as a measure of academic performance and argued that it is a reliable measure in research (Trainin & Swanson, 2005; Young & Fry, 2008; Everson & Tobias, 1998; Nietfeld, Cao, & Osborne, 2005; Schraw, 1994).

Procedure
Eighty-eight students were given the MAI as a pretest of their metacognitive knowledge and skills. Students were encouraged to volunteer for the experiment (introduced to them as a workshop), which was to take place two weeks after the administration of the MAI. Thirty students expressed willingness to participate and were randomly assigned to the experimental and control groups. For the five sessions, the experimental group was scheduled from 10 a.m. to 12 p.m., while the control group was scheduled from 12:00 to 2:30. The five sessions are described below; the training concluded with the reading posttest and the re-administration of the MAI.
After administration of the reading pretest to the experimental group, a discussion was generated around the challenges they faced to deal with the test, and possible ways of dealing with those challenges. After eliciting students' possible approaches to dealing with test-taking in general and comprehension in particular, a model of metacognitive regulation was suggested to the students as a way of approaching exam-preparations, test taking, and reading for comprehension. The model is based on the two main components of metacognitive regulation: Monitoring and control.
Each session started with a general discussion of the target metacognitive skill and a link to the previously covered one, followed by a reflective dialogue guided by the researcher's questions. Sub-skills were then explained to students, and examples were provided before the skill was modeled by the researcher in a think-aloud activity, as described in detail below. Toward the end of each session, students were divided into three groups to practice the skills and solicit feedback from peers and from the researcher. Sessions ended with the researcher's feedback about strategy use and the importance of being strategic to optimize learning and performance.
During the last session, a summary of all the skills was given, followed by the administration of the reading posttest and MAI. Finally, students were invited to coffee and discussion on the following day. The whole training was evaluated and students were congratulated on their seriousness throughout the 5 sessions of the training.

The Intervention
The intervention consisted of five sessions of training in metacognitive control and monitoring skills. To facilitate the task for students, control was divided into three sub-skills (planning, information management, and debugging), whereas monitoring was divided into two sub-skills (monitoring and evaluation). These five sub-categories of monitoring and control are based on Schraw's model of metacognition.
Session 1: The goal of the first session was to introduce students to the overall training model. To initiate discussion, participants were asked about difficulties they had encountered in the reading test, as well as general difficulties they face while reading or preparing for exams, and possible ways of dealing with them. Participants were also asked whether they used any strategies to cope with test difficulty and improve comprehension. The researcher then suggested the model of skills as a way to improve general reading performance, reading for exams, and test-taking. Students were introduced to a PowerPoint presentation of the model (see Appendix 1), the course structure, and the instruction method (Figure 1).

Figure 1. Instruction method of the metacognitive intervention
The metacognitive training began in the first session with the introduction of the first metacognitive skill, planning. The participants were asked to reflect on a proverb ("If you do not know where you are going, you will probably end up somewhere else") and to think of answers to the following questions: What is your understanding of planning? Can you think of a situation where planning can save you either time or money? Would you rather plan a trip or just pack up and hit the road? Why? What could be the consequences of each? And how, in your opinion, can planning improve comprehension and reading for tests? The objective was to help students find the link between planning everyday-life activities and planning one's learning and studies.
After the participants demonstrated an understanding of planning as a concept, they were introduced to the metacognitive strategies underlying it. Students were shown slides with a set of metacognitive strategies (e.g., decide on your learning goals and assess your progress toward meeting them; decide what strategy or strategies you will use to learn the material and notice whether they are effective for you as you are studying) and were asked to identify the ones that relate to planning one's learning. In fact, the slide showed a mixture of monitoring and planning strategies, and students were asked to justify their categorization and reflect on the differences between strategies. This reflective dialogue on the strategies was guided by the researcher's questions (What is the difference between strategy 1 and 2? When can we use strategy X or Y? Can text structure define the type of strategies one uses? In what type of text are we more likely to use strategy X or Y?).
Students were divided into three groups of five and were given copies of a passage about car problems (Peterson's Master TOEFL Reading Skills, 2007, p. 39). Four alternative objectives (memorizing the steps, learning car/mechanics vocabulary, writing a summary, or answering comprehension questions) were written on the board for the students to select from. The groups were told that the type of objective usually defines the type of strategies to be used and the time spent studying the text.
To illustrate this, the researcher modeled planning strategies for the first reading goal, memorizing, by thinking aloud while reading: "OK, my objective for this reading is clear: I want to memorize the steps to follow in case of a car problem. What are the possible ways to do so? I can isolate the steps and highlight them in the text; I can rewrite them in my own words; would it be easier if I tried to find a how-to video that explains the steps? How much time do I need for that?" The researcher explained to the students that at that point he had not started using the strategies but was rather planning how to approach the task in light of the objective he had chosen. A student from each group was selected to model each of the other objectives for the whole class, and then the rest of the students followed within their small groups.
Session 2: The goal of the second session was to introduce the skill of monitoring and its underlying strategies. The session started with a review of the planning strategies that had been covered in the previous session. Students were reminded how important it is to plan one's learning and studies, and of the resulting benefits in terms of time, energy, and performance. The training PowerPoint was displayed again, and students were asked about their own understanding of monitoring and its relation to planning. Students had also seen the mixed planning and monitoring strategies in the previous session. To help guide students' answers, questions such as the following were asked: What is the difference between planning and monitoring? Why do you think monitoring comes after planning in our workshop? Can you think of example situations where one has to use both planning and monitoring to perform a task or an action successfully?
After eliciting students' answers and providing feedback (some of the feedback was about correctness, to make sure the students did not misunderstand the relationship between planning and monitoring, and part of it consisted of further questions, such as "why?" and "how?", that asked for more elaboration and thus more reflective thinking), the researcher provided an example situation to demonstrate the importance of monitoring and its relationship to planning. Example 1: If I gave you an amount of money to spend in a month, would you rather spend carelessly from day one, or plan (think about the things you really need) and monitor (keep an eye on the calendar and the amount of money that is left) your spending? The purpose of the example was to give students a practical situation where both planning and monitoring are used in order to perform a task effectively.
The text on car problems was revisited, and the student groups were asked to keep in mind the same objectives that had been set in the previous session. The researcher modeled monitoring strategies for the whole group using a think-aloud procedure: "Remember that my goal is to memorize/learn the steps to follow in case of a car problem."

The researcher started reading the passage and stopped after the first three steps: "Should I finish reading the whole passage and then try to recall, or should I start highlighting now? Would the pace at which I am reading allow me to focus on and memorize the steps, or should I read more slowly?" The researcher went on reading, then stopped again and asked: "Am I memorizing anything? What other strategies can I use other than reading slowly and highlighting? Is there a logic in the succession of steps that could help me remember all of them easily?"
Students in the different groups were given some time to figure out how monitoring strategies could be used in a way that aligned with their respective goals and objectives; then one student from each group was selected to model monitoring strategies for the whole class. As in the previous session, the rest of the students were asked to rehearse monitoring strategies under the guidance of the researcher, who provided feedback to each group.
Session 3: The purpose of the third session was to teach the skill of information management. The session started with a review of what had been covered in the two previous sessions, and students were encouraged to exchange ideas about what they had learned and about different situations where planning and monitoring could be applied.
The researcher introduced students to information management skills and explained that the nature of information plays an important role in how one should approach reading and reading for tests. He also explained that the nature of information determines the allocation of time and attention while reading, and that the goal set for reading often determines which information one should focus on and how to manage it.
Examples of new versus old information, and of important facts versus secondary ones, were discussed with students. The researcher highlighted identifying important information in a text as a key phase in the reading process, especially when it takes into consideration the goals and objectives of reading, the individual characteristics of the reader, and time constraints. One of the examples introduced was a text in which one has to remember key events and their dates. What matters, in addition to the goal, is how good one is at remembering numbers (a link was drawn here to knowledge of the self as a key aspect of strategic behavior), which determines how long one will spend studying/rereading the text and which strategy or strategies are most effective within the time-period one has to perform the task. Another example, taken from the text about car problems, concerned how prior knowledge affects information management and the allocation of time and attention: a person who has to learn about car problems and already has some background knowledge about mechanics, as opposed to one who has to figure out what the technical words mean before engaging in remembering the steps.
The strategies were explained, and the use of information management skills was modeled on a passage (Peterson's Master TOEFL Reading Skills, 2007, p. 81) about Native Americans, which contains a number of historical facts and dates about America's original population and first settlers. The researcher modeled information management skills using a think-aloud procedure, highlighting how monitoring and managing information work simultaneously to guide the learning/reading process. The researcher drew a timeline on the board and started reading. When the first date in the text was encountered, the researcher stopped, wrote the event on the timeline, and read the date's corresponding event slowly to emphasize the strategy of adjusting reading speed when encountering new or important information. The researcher went on reading and writing dates and events on the timeline.
Students were divided into three groups, each assigned a different goal:
Group 1: learn dates and their corresponding events.
Group 2: learn places and their corresponding events.
A student from each group was asked to model strategy use for the whole class, and then the other students followed. The researcher guided the activity and encouraged students to use other strategies that help them process information from text more effectively. Feedback was given to students, and comments were encouraged.
Session 4: The fourth session started with a reminder of what had been covered in the previous sessions, and emphasis was placed on the interactive nature of the skills.
Evaluation and debugging were presented to students, and the class engaged in reflective dialogue about both skills. First, students were asked about the importance of evaluation and how it guides improvement in general and learning in particular. Some students showed negative attitudes toward evaluation as a result of "bad" experiences either at the baccalaureate or at university. The researcher emphasized that the primary role of academic evaluation is not to judge and categorize people but to assess the effectiveness of the teaching/learning process in order to improve academic outcomes for all education stakeholders. The researcher also emphasized that evaluation should not only come from outside but is more effective when it comes from within. The students were shown the different evaluation strategies in the course PowerPoint and were told that the list of strategies was not exhaustive and that they could think of and use other evaluation strategies.
The last slide, on debugging strategies, was displayed, and students were asked about the last time they had to change their approach/strategy or ask for help as a result of difficulty. The purpose of this question was to draw a link between debugging in everyday-life situations and debugging in reading/learning. Students talked about different life situations that pushed them to reform their way of doing things or to ask others for help (friendship conflicts, money management, eating habits, etc.). A link was also drawn between evaluation and debugging, and an explanation was given of the importance of the former in initiating the latter, meaning that it takes careful evaluation to initiate effective debugging when facing learning or reading difficulties.
Three different passages were given to the three groups of students. The first was a contemporary journalistic article about "the Subway Syndrome" (Peterson's Master TOEFL Reading Skills, 2007, p. 73), the second was a historical passage about the Titanic (p. 85), and the third was a scientific passage about the moon (p. 92). The students were given 10 minutes to read their passage and answer the comprehension questions. They were also asked to report on the evaluation and debugging strategies they used and the contexts in which they used them. After the 10 minutes were over, students reported using evaluation (checking that all goals had been met and thinking about how well they had answered the comprehension questions) to make sure they grasped the essential information needed to answer the questions, and most of them used debugging (getting back on track, stopping to think about connections between ideas, and soliciting other students' help with vocabulary) when faced with difficult vocabulary and with questions requiring inferences. The session ended with feedback from the researcher and comments from students about the simultaneous use of all the strategies and the brain's remarkable capacity to do this, sometimes without our awareness.
Session 5: The session started with a recapitulation of the five skills and how they relate to each other. Students were then given the reading posttest and were encouraged to be as strategic as they could. After the time for the reading test was over, the MAI was administered to the students, who were later invited to a final session to evaluate the training and discuss its outcomes.
It is worth noting that throughout the five sessions, reference was made to knowledge of the self, the task, and the strategies, to make clear to the students that the same strategies do not necessarily work for everybody, nor do they work for all tasks, and that it is their metacognitive knowledge (about self, task, and strategies) that should inform and guide their monitoring and control skills.

The Control Group
During the five sessions with the control group, short texts from Peterson's Master TOEFL Reading Skills (appendix 3) were presented to the students; the focus was on answering comprehension questions and discussing the strategies the students relied on to understand the texts. Unlike the experimental group, the control group received no explicit instruction and no feedback on the strategies they used. Another difference between the experimental and control groups is that the students in the control group were given texts to read and comprehension questions to answer at home. While the experimental group was taught metacognitive skills, the control group sessions were simply a discussion of the texts, the strategies the students used to guide comprehension, and the difficulties they encountered.
The sessions did not follow any systematic teaching model, as the goal was not to contrast the model used with the experimental group against another model to see which is more effective.
During the first session, the students were presented with the text about mechanics in class, and a discussion was generated around cars, mechanics, and driving in Morocco. The students were then told to read the text and answer the comprehension questions. After the correction was conducted with the whole class, the students were asked whether they had used any strategies while reading the text or answering the questions. No feedback was given to the students about their strategy use; in other words, the researcher made no reference to procedural or conditional knowledge.
For the four remaining sessions, the students were given short texts from Peterson's Master TOEFL Reading Skills to read, with questions to answer at home, so as to devote more class time to discussing the contents of the texts, the students' strategies, and the difficulties they had with the texts.

Reading Comprehension
Table 2 and Figure 2 present the reading comprehension scores for each group at pretest and posttest. Unexpectedly, there was a small but significant difference between the experimental and control groups at pretest, t(24) = 2.33, p = .02, with the control group performing slightly higher than the experimental group. By posttest, however, the experimental group outperformed the control group. To examine differences in means from pretest to posttest, and thus evaluate the effect of the intervention on reading performance, a 2 (Group: experimental, control) x 2 (Time: pretest, posttest) repeated-measures ANOVA was conducted. The results indicated that the two groups changed differently from pretest to posttest: the Group x Time interaction, reflecting the effect of the intervention, was significant, F(1, 24) = 19.86, p < .001. Figure 1 shows a significant improvement in the experimental group from pretest to posttest, t(14) = -5.41, p < .05, while the performance of the control group did not improve, t(10) = .00, p = 1. An independent-samples t test on change scores also showed a significant difference between the groups in reading comprehension gains from pretest to posttest, t(24) = 4.45, p < .05.
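For readers less familiar with this design, the Group x Time interaction in a 2 x 2 mixed ANOVA is statistically equivalent to an independent-samples t test on gain scores (posttest minus pretest), with t² = F. The minimal sketch below illustrates this analysis in Python on entirely hypothetical gain scores (the study's raw data are not available); only the group sizes (15 experimental, 11 control) match the reported degrees of freedom.

```python
# Hypothetical gain scores (posttest - pretest) for illustration only;
# the study's raw data are not public.
exp_gains = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 4]  # n = 15 (experimental)
ctl_gains = [0, 1, -1, 0, 1, 0, -1, 1, 0, 0, 0]            # n = 11 (control)

def mean(xs):
    return sum(xs) / len(xs)

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance.

    Applied to gain scores, this is equivalent to the Group x Time
    interaction F in a 2 x 2 repeated-measures ANOVA (t**2 == F).
    """
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ss_a = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, group a
    ss_b = sum((x - mb) ** 2 for x in b)   # sum of squared deviations, group b
    df = na + nb - 2                        # degrees of freedom
    sp2 = (ss_a + ss_b) / df                # pooled variance
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5   # standard error of the difference
    return (ma - mb) / se, df

t_stat, df = pooled_t(exp_gains, ctl_gains)
print(df)  # 24, matching the t(24) reported in the text
```

With real data, the p-value would then be obtained from the t distribution with 24 degrees of freedom (e.g., via scipy.stats).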

Metacognition
The experimental and control groups' mean scores on the MAI total were not significantly different at pretest, t(24) = -2.06, p = .05. Surprisingly, the control group's mean scores for knowledge and regulation of cognition were marginally higher than those of the experimental group (MAI Knowledge: t(24)). To examine the main effects of the intervention on metacognition and its components, 2 x 2 repeated-measures ANOVAs were calculated. For general metacognitive awareness measured via the MAI, a 2 (Group: experimental, control) x 2 (Time: pretest, posttest) ANOVA showed a significant difference between groups at posttest, indicating an effect of the intervention, F(1, 24) = 10.25, p < .05. A t test on MAI total change scores also showed a significant difference between groups, highlighting the gain in metacognition for the experimental group, t(24) = 3.20, p < .05. Figure 4 shows a plot of both groups' mean scores at pretest and posttest, and Figure 5 shows a per-case plot of the two groups before and after the intervention.
Figure 6. Plot of mean scores in metacognitive components before and after the intervention

Individual Differences
Finally, we were interested in whether high and low achievers benefited differentially from the intervention. Within the training group, change in reading comprehension from pretest to posttest was significantly correlated with GPA, r = .63, p < .05. This correlation was not significant in the control group, r = .17, p = .63.
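The individual-differences analysis above is a Pearson product-moment correlation between GPA and reading gain scores. As a minimal sketch, and using purely hypothetical GPA and gain values (the study's raw data are not available), it can be computed as follows:

```python
# Hypothetical GPA and reading-gain values for illustration only;
# the study's raw data are not public.
gpa   = [2.1, 2.5, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8]
gains = [1, 1, 2, 2, 3, 3, 4, 5]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance numerator: sum of cross-products of deviations
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # denominators: square roots of the sums of squared deviations
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(gpa, gains)  # positive r: higher GPA, larger reading gains
```

A positive r of this kind, tested against the t distribution with n - 2 degrees of freedom, is what underlies the reported r = .63, p < .05 in the training group.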

Discussion
The main goal of the present research was to investigate the effect of a five-session metacognitive intervention that targeted college students' monitoring and control skills on their reading performance and metacognitive awareness.
The results indicated that the experimental group outperformed the control group in both reading comprehension and metacognitive awareness at posttest. Although the control group performed slightly higher at pretest, the experimental group showed substantial gains in reading comprehension, metacognitive knowledge, monitoring, and control. We therefore conclude that college students can be trained in metacognition, and that training students specifically in monitoring and control skills can improve their reading ability and overall metacognitive awareness. These findings lead to two main conclusions: first, that metacognitive skills are essential vehicles of effective learning because they allow students to be in control of their learning, from the goal-setting phase, through monitoring, to the control and evaluation phase; and second, that metacognition is malleable even at a relatively older age (college-level students). These results substantially corroborate those of Butler (1998), who developed an intervention called the Strategic Content Learning (SCL) approach for low-performing college students. The students were trained to set goals; to use, monitor, and adjust various strategies; and to evaluate performance. The intervention resulted in significant gains for the treatment sample in metacognitive knowledge, monitoring, and strategy use, together with the construction of more positive perceptions of task-specific self-efficacy.
Another major finding of the present study is that achieving gains in metacognitive skills and reading performance is feasible in a relatively short training time (five sessions). This offers a more realistic and economical alternative to year- or semester-long metacognitive interventions, which are difficult to incorporate into other courses. It is essential to emphasize, though, that providing opportunities for students to practice the strategies and witness the skills in action from peers and professors is of crucial importance in interventions like the present one. This has been advocated by a number of researchers (e.g., Pressley, Borkowski, & O'Sullivan, 1985; Pressley & McCormick, 1995) who repeatedly emphasized that merely teaching students how to use a strategy does not guarantee that they understand how the strategy benefits performance.
Finally, this research provided evidence that both high and low achievers' reading performance improved as a result of the metacognitive intervention, but that high achievers improved more. This suggests that high achievers find it easier to benefit from interventions targeting metacognition and reading, very likely because they are better predisposed to learn new ways of reading, learning from text, and reflecting.
To conclude, the present metacognitive intervention was a success. It nevertheless has a few limitations, the most delicate of which concerns specifying the variable that induced the effect. In other words, it is not clear whether the achieved results are an outcome of the skills/strategies taught, the way they were taught, or the combination of both. This is clearly reflected in the indirect relation between MAI change scores and reading comprehension change scores. As Brown et al. (1983) point out, "if the intervention is successful, follow up studies can be designed to track down the more specific components responsible. Such tracking down is theoretically necessary". Hence the need to investigate and isolate the factor that contributed most to the effect. This could be done by using two intervention/treatment groups taught in two different ways, which would make it possible to attribute the effect of the intervention to the method, to the skills/strategies, or to both combined.
Another limitation of the present study is the small sample size, due to time constraints on both the researcher and the larger group of students from which the sample was drawn. Future intervention studies should use larger samples in order to address the challenges related to generalizing the results to the larger population of college students.