MERLOT Journal of Online Learning and Teaching
Vol. 6, No. 1, March 2010

Techniques for Enhancing Reflection and Learning in an Online Course


Nancy O’Hanlon
Professor, University Libraries
Ohio State University
Columbus, OH 43210 USA
ohanlon.1@osu.edu

Karen R. Diaz
Associate Professor, University Libraries
Ohio State University
Columbus, OH 43210 USA  

Abstract  

The authors designed new content for an online research skills course to provide instruction and expert modeling of the process of determining bias when evaluating information sources. They also introduced a specific metacognitive strategy (self-questioning) to enhance student self-awareness. Students were encouraged to complete a self-regulated learning survey to raise their awareness of metacognitive strategies. The instructional content, an Adobe Captivate movie, described a cognitive strategy for identifying bias, MAPit, and included activities and questions throughout for students to assess their understanding. Instruction was followed by an online quiz that provided practice in applying the MAPit strategy. Metacognitive prompts within the quiz encouraged students to reflect on and assess their learning. The final course assignment (Capstone) also included application questions, with a reminder about the MAPit strategy. A review of performance on both assignments showed improvement after this intervention. The improvement was sustained in a later offering of the same course that used a more efficient approach to encouraging student self-questioning, suggesting that this approach can be implemented effectively in a large-enrollment online course.

Keywords: Metacognition, online learning, reflection, self-questioning, self-regulated learning



Introduction

Introspection may become a lost art. News media report regularly on the growing popularity of short bursts of communication, such as SMS text messages and “tweets,” and lament our inability to focus and concentrate on longer texts. As Miller (2009, p.2) notes, “we are hard-wired to like the shiny. The attention we bring to bear on less exciting objects and activities, where the payoff may be long-term rather than immediate, requires a conscious choice.” Inundated with information from the Internet, we have developed a habit of skimming rather than deep reading (Perez, 2008). We make snap judgments about the relevance of information content based on surface observations of sources. Yet some critical intellectual tasks, such as determining accuracy and bias, require more time, careful reading, and reflection. Lin (2001) cites a variety of studies that report enhanced learning when students engage in metacognitive activities such as self-assessment and monitoring, noting that the effect is multiplied for weak students. Lin also reports that students typically do not engage in these reflective activities unless specifically encouraged to do so. With so much competition for our attention, how can we develop traits of conscious reflection in our students? Further, is pursuit of this goal likely to improve learning outcomes in an online instructional environment? In this study, the authors used an action research model to investigate the effectiveness of a new instructional component designed to encourage student reflection on their learning.

Literature Review

Kauffman, Xun, Kui, and Ching (2008) assert that self-monitoring is particularly important in Web-based environments, where learners are asked to complete complex tasks independently, with little support from others, and self-regulated learning strategies are essential to success. A U.S. Department of Education meta-analysis and review of online learning studies concludes: “Online learning can be enhanced by giving learners control of their interactions and prompting learner reflection. Studies indicate that manipulations that trigger learner activity or learner reflection and self-monitoring of understanding are effective when students pursue online learning as individuals” (Means, Toyama, Murphy, Bakia, & Jones, 2009, p.xvi). This report cites nine recent studies that found that tools or features prompting students to reflect on learning were effective in improving outcomes. For example, Bixler (2007) as well as Saito and Miwa (2007) focused on techniques for encouraging student reflection; both studies found that these additional elements improved student online learning. Cook, Dupras, Thompson, and Pankratz (2005) used self-assessment questions after instructional modules in a randomized controlled experiment and found an improvement in student performance immediately after completion of modules, but this effect did not persist to an end-of-course test.

Prompts (questions) are an important tool for encouraging reflection. There are many different types of instructional prompts. A study by Chen, Wei, Wu, and Uden (2009) examined the effect of “high level prompts” on learners’ reflection levels in an online course. The authors defined high level prompts as questions related to how well the student comprehends and can integrate instructional content. Kauffman et al. (2008) identify “problem solving prompts” as questions designed to procedurally guide learners through the steps of a problem-solving process. Others describe prompts more directly related to metacognition, encouraging students to reflect on their own cognitive processes. Lin (2001) discusses “process prompts,” questions asking students to monitor how and why decisions were made and to explain specifically where and what they did not understand, and notes that they are likely to be effective. Lin notes that some researchers have used process prompts to help students assess their learning against a set of criteria. Subjects in an experiment described by Kauffman (2004) received self-monitoring prompts related to effective note taking. Self-monitoring is defined here as the awareness students have of their comprehension and their task performance during or shortly after completing an academic task. The study “suggests that simply asking students if they are certain ‘they have gathered all the important information’ and providing them with cues and opportunities to go back to improve their note taking are powerful instructional techniques that can be automated in Web-based settings” (p.157).

In an experiment designed to improve participants’ ability to find useful information on a health research topic and evaluate the quality of search results, Stadtler and Bromme (2008) also focus on metacognitive prompting to encourage students to reflect on and monitor their own learning. Subjects received evaluation prompts and also monitoring (metacognitive) prompts. Participants who received monitoring prompts acquired significantly more factual knowledge on the topic (cholesterol) and were able to determine how much more information was needed. Wopereis, Brand-Gruwel, and Vermetten (2008) note that solving information problems is a complex cognitive skill. In an experiment related to learning research techniques in an online psychology course, “driving questions” that focus on regulation skills such as orientation, monitoring, steering, and testing were used prominently in worksheets. The authors found this “embedded instruction,” particularly the use of reflective questions, to be effective.

Lovett (2008) describes an attempt to teach self-monitoring skills to first year science students at Carnegie Mellon University and introduces the term “wrapper,” defined as an activity that surrounds a pre-existing learning or assessment task and fosters students’ metacognition. She asserts that a self-monitoring wrapper can be built around any part of a course (lecture, homework, or test). Homework wrappers can take the form of prompts, self-assessment questions that focus on skills students should be monitoring. Lovett believes that wrappers are efficient and effective, because metacognition practice is integrated with the task where it is needed. The intervention described in her report produced desirable results. After introducing self-monitoring skills through various wrappers, the majority of students reported using new strategies to improve their learning. While the wrapper study did not measure learning, it built on her previous work with more time-intensive techniques for building metacognition that showed improved learning when students used new strategies.

Lazonder and Rouet (2008) reviewed studies that describe approaches to supporting metacognition in addition to prompts, such as strategy training programs and modeling. Lin, Hmelo, Kinzer, and Secules (1999, p.43) write that “reflective thinking involves actively monitoring, evaluating, and modifying one’s thinking and comparing it to both expert models and peers.” Process modeling focuses on the steps an expert would take when solving a problem and may be demonstrated through a video in an online course. “An important finding is that simply watching the video is not as effective as participating in the cycle of watching, commenting, reflecting on the processes that were modeled, practicing, and reflecting on the students’ own processes” (p. 51).

In the present study, the authors incorporate both process modeling, using a video, and self-monitoring prompts into a new online course assignment related to solving a complex problem, determining bias in information sources. An existing course assignment, the Capstone, which includes questions related to identifying bias, was also modified. Student performance on both new and existing course assignments during autumn 2008 is reviewed here and compared to a later offering of the same course, during winter 2009. The later course employed a revised version of the new assignment. The purpose was to assess the effectiveness of incorporating the instructional techniques of modeling and self-monitoring prompts into an online course assignment.

Methodology

Fifty undergraduate students enrolled during autumn quarter 2008 in an Ohio State University credit course developed and taught by the authors were the research subjects. The course, Internet Tools and Research Techniques, is a two credit graded course that is taught entirely online, using the university course management system, Carmen. Students complete 18 online assignments over a four-week period. The assignments are grouped topically, and cover effective use of browsing and communication tools, searching skills, research techniques, evaluation of content, and ethical use of information. A new group of assignments opens each week and all are due by the end of the course, so that students have flexibility in managing their on-task time for course assignments. The syllabus is available at: http://liblearn.osu.edu/courses/120/index.html

Core concepts are introduced by online tutorials that include practice and self-test components as well as links to author-developed movies, which provide demonstrations of key research techniques. Additional graded quizzes and practice opportunities are required, as is a final “Capstone” assignment that focuses on searching and evaluation skills developed during the course. During almost ten years of teaching this online course, the authors have noted that students have great difficulty understanding the concept of bias and applying it in a real world setting when evaluating information sources in their Capstone assignment. Pace and Middendorf (2004) recommend that instructors can help students overcome learning obstacles by reflecting in depth on the steps an expert would take to accomplish a particular task and then modeling it for students. They also recommend constructing assignments that allow students to practice the task and providing feedback on their success.

The authors employed an action research model in an attempt to address this problem. As Riding, Fowell, and Levy (1995) note, action research takes many forms, but all utilize an iterative approach that involves problem identification, action planning and implementation, evaluation and reflection. The authors decided to provide new instruction in their online course and model the process of determining source bias by developing and incorporating a new movie, Recognizing Bias (http://liblearn.osu.edu/movies/bias.htm), and a companion quiz.   The movie was developed using Adobe Captivate.  This software supports screen capture, insertion of images or PowerPoint slides, audio recording, and also includes features that allow developers to incorporate practice tasks with feedback into the movie. Recognizing Bias provides a definition of bias, offers a rationale for learning the skill of assessing bias, and introduces a cognitive strategy, represented as a mnemonic, MAPit.  The MAPit strategy focuses on critically examining Message, Author, and Purpose of the information source.  Examination of message includes determining whether it:

  • states facts or opinions;
  • cites reputable sources;
  • uses neutral or loaded language;
  • provides a balanced or one-sided point of view;
  • offers fair or selective coverage of the topic or issue.

For each of these message attributes, examples and some practice opportunities are provided.  The movie also offers instruction on specific methods for determining author credibility and the primary purpose of an information source or site.  
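The MAPit criteria lend themselves to a simple checklist representation. The following is a minimal sketch, not part of the course materials, of how the message, author, and purpose prompts described above might be encoded as a data structure for a practice exercise; the author and purpose prompts are paraphrased from the description above, and all names in the code are hypothetical.

    # Hypothetical sketch: the MAPit criteria encoded as a checklist that a practice
    # exercise could step through for a given source. Structure and names are
    # illustrative only; this is not the software used in the course.

    MAPIT_CHECKLIST = {
        "Message": [
            "Does it state facts or opinions?",
            "Does it cite reputable sources?",
            "Does it use neutral or loaded language?",
            "Does it provide a balanced or one-sided point of view?",
            "Does it offer fair or selective coverage of the topic or issue?",
        ],
        "Author": [
            "Who is the author, and what are his or her credentials?",
            "Is the author credible on this topic?",
        ],
        "Purpose": [
            "Is the primary purpose of the site reference, advocacy, or commercial?",
        ],
    }

    def review_source(url):
        """Print the MAPit prompts a student would work through for one source."""
        print(f"Evaluating: {url}")
        for component, prompts in MAPIT_CHECKLIST.items():
            print(f"\n{component}")
            for prompt in prompts:
                print(f"  - {prompt}")

    if __name__ == "__main__":
        review_source("http://www.drugpolicy.org")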

After viewing the movie, students were required to complete the companion quiz.  The quiz contains four multiple-choice questions that assess recall of content from the movie, two multiple-choice questions with links to websites, to assess students’ ability to apply the MAPit strategy to actual sources, and two open-ended questions that are metacognitive prompts, intended to encourage reflection by students about their level of understanding and ability to apply the strategy as well as any actions needed to improve their ability to recognize bias.  Before offering the course again, the authors reviewed and categorized student responses to the open-ended quiz questions, and used this data to construct multiple-choice responses to the questions containing metacognitive prompts.  This facilitated comparison of student responses between the two course offerings (autumn 2008 and winter 2009), described later in this article.  Also, by making the quiz entirely multiple-choice and thus available for automatic grading by the course management system, immediate feedback on success is available to the student.

The authors utilized several other methods to encourage students to reflect on their learning practices and content knowledge related to recognizing bias.  A Self-Regulated Learning Survey was developed and incorporated into the course structure.  The purpose of this survey was to encourage students, as they began the course, to focus on self as learner, to reflect on their own learning strategies, and to raise their awareness of metacognitive processes, such as planning and monitoring learning.  The ten item survey, which was optional but encouraged, included items on meeting deadlines, remembering information, planning and organizing work, arranging the work environment, paraphrasing and rewriting information, seeking assistance, and reviewing work before submitting. Questions were based on the Self-efficacy for Self-Regulated Learning Scale (SESRL) described in Gredler and Schwartz (1997). Eighty percent of students completed this survey during autumn quarter 2008. The survey was not included in later course offerings. Additionally, the authors made some revisions to the Capstone assignment, inserting a prompt to remember the MAPit strategy in a question related to identifying bias in the website being evaluated in the assignment. 

Bias Quiz Performance

During autumn 2008, students were generally able to recall and apply the MAPit strategy presented in the movie, Recognizing Bias. The average grade on the Bias Quiz was 87.6%. Approximately one fourth of all 50 students achieved a perfect score of 100%. Four students scored in the 60-70% range on the assignment. Students in this low scoring group each missed one of the application questions along with one or two other recall questions. Two of these students submitted the assignment on the last day before the course closed, so procrastination was a likely factor in poor performance.

The first application question used the website Drug Policy Alliance (http://www.drugpolicy.org), an organization devoted to changing national policies related to the war on drugs. This site meets the criteria for bias described in the movie. Twenty-six percent of students during autumn 2008 answered this question incorrectly. The second example was more challenging and posed more difficulty for the class. Forty percent of students were not able to identify elements of bias in the article “Who’s Watching What You Eat?” (http://www.cspinet.org/biotech/pdtake.html) from the Center for Science in the Public Interest, also an advocacy organization.

Table 1 compares performance on the recall and application quiz questions for the autumn 2008 and winter 2009 sections of the course. Questions 2 and 4 in the recall group were more difficult and scores were lower for both classes. Each of these questions contained multiple correct answers and students were required to identify all of them to receive points. Scores by question are comparable across sections, although performance did improve slightly during winter 2009.

Table 1: Bias Quiz - Performance on scored questions by section

                                             Percent of correct answers
  Question focus                             Autumn 2008    Winter 2009
  1. What letters in MAPit represent             98.04         100.00
  2. Factors in analyzing message                84.31          87.76
  3. Relevance of author credibility             96.08         100.00
  4. Factors in determining purpose              62.75          67.35
  5. Simple application to website               74.51          79.59
  6. More complex application to site            58.82          53.06

Textual Analysis of Reflection Prompts

After reviewing autumn 2008 student answers to the open-ended questions, the authors grouped them into categories, and from those categories constructed a rubric for evaluating responses. For example, the rubric developed for question 7, “How well do you understand what is needed and how well prepared do you feel to recognize bias in other information you encounter?” was:

  • I understand the concepts related to recognizing bias, and feel confident that I will be able to recognize bias in all other information sources.
  • I understand the concepts needed to recognize bias, but think it may still be difficult to recognize bias in some information sources.
  • Not all the concepts related to recognizing bias are easy for me, and I am not confident in my ability to recognize bias in all information sources.
  • I do not feel prepared at all to recognize bias in information sources.

For reflection question 8, “What do you think you should do to improve your own abilities for recognizing bias?” the rubric was:

  • I need to remember to apply the MAPit strategy in the future.
  • There are one or two elements in the MAPit strategy that I know are challenging for me, and I need to pay attention to those when I try to detect bias.
  • I need to pay more attention to elements from information sources (such as author credentials, or “about us” links).
  • I should spend time practicing going to websites and applying the MAPit strategy to get good at it.

Each author separately reviewed student responses to Questions 7 and 8 and scored them according to the rubrics. Evaluations were compared and those answers that did not match were discussed, in order to resolve differences. Prior to winter 2009, the authors used the rubrics to develop new multiple choice questions to replace the two open-ended reflection prompts at the end of the quiz.
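The article does not describe any software support for this review step. As a minimal sketch, assuming each response is assigned one of the four rubric categories, the comparison of the two raters' codings could look like the following; all data, identifiers, and function names are hypothetical placeholders.

    # Hypothetical sketch of the two-rater rubric step: each author codes every
    # open-ended response into a rubric category (1-4), mismatches are flagged for
    # discussion, and simple percent agreement is reported. Placeholder data only.

    def compare_ratings(rater_a, rater_b):
        """Return IDs the raters coded differently, plus percent agreement."""
        disagreements = [rid for rid in rater_a if rater_a[rid] != rater_b[rid]]
        agreement = 1 - len(disagreements) / len(rater_a)
        return disagreements, agreement

    # Rubric category assigned to each student response (IDs are placeholders).
    rater_a = {"s01": 1, "s02": 2, "s03": 1, "s04": 3}
    rater_b = {"s01": 1, "s02": 1, "s03": 1, "s04": 3}

    to_discuss, agreement = compare_ratings(rater_a, rater_b)
    print(f"Agreement: {agreement:.0%}; responses to discuss: {to_discuss}")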

Table 2 compares the distribution of student responses to these questions (Q7 and Q8) across the autumn 2008 and winter 2009 sections. Significantly more students (83%) in autumn 2008 described themselves as very confident when responding to the open-ended question 7. When selecting a response from multiple-choice options, students were less likely to indicate highest confidence. Only 27% of winter 2009 students chose this multiple-choice option. The majority (61%) characterized themselves as somewhat confident. However, performance on other content knowledge questions was slightly higher for winter 2009 students, as shown in Table 1.

Table 2: Bias Quiz – Responses to reflection questions by section

  Q.7 Confidence in detecting bias                        Autumn 2008    Winter 2009
  Very confident                                             83.33%         26.53%
  Somewhat confident                                         14.58%         61.22%
  Not very confident                                          2.08%         10.20%
  Not at all confident                                        0.00%          2.04%

  Q.8 How to improve ability to detect bias               Autumn 2008    Winter 2009
  Remember to apply MAPit strategy                           15.00%         30.61%
  Improve understanding of elements of MAPit strategy       45.00%         12.24%
  Pay more attention to elements of websites                17.50%         40.82%
  Practice applying MAPit strategy to websites              22.50%         16.33%

When analyzing responses to both questions, autumn 2008 answers that were considered not relevant were excluded from tabulations. Almost one-fifth (20%) of student responses to question 8 fell into this category because they were either incoherent explanations or would simply not help the student with the task at hand. Autumn 2008 student responses focused most heavily on the need to practice and perfect their understanding of the MAPit strategy (45% v. 12%). Winter 2009 students emphasized the need to remember to apply the strategy (31%) or to pay more attention to specific aspects of websites (41%).

High confidence does not always reflect the student’s ability to apply concepts to a real world example. During autumn 2008, 65% of students who indicated the highest level of confidence for detecting bias (Q7) missed one of the two application examples (Q5, 6). One student (2.5%) missed both examples. In winter 2009, students exhibited the same pattern: 61.5% of the students indicating highest confidence in their own skills also missed one of the application questions. Pajares (1996) concludes, after reviewing research on self-efficacy beliefs, that most students are overconfident about their academic abilities. Dunning, Johnson, Ehrlinger, and Kruger (2003, p.84-5) write: “In many intellectual and social domains, the skills needed to produce correct responses are virtually identical to those needed to evaluate the accuracy of one’s responses.” They assert that low performers have less self-awareness (metacognitive ability) and also tend to overestimate their abilities to a greater extent than high performers. Our data affirms that conclusion.
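The 65% and 61.5% figures above come from cross-tabulating the Q7 confidence responses against performance on the two application questions. A minimal sketch of that cross-tabulation is shown below; the records are made-up placeholders rather than actual course data.

    # Hypothetical sketch: share of students reporting the highest confidence (Q7)
    # who nonetheless missed at least one application question (Q5 or Q6).
    # The records below are made-up placeholders, not course data.

    records = [
        # (Q7 confidence, Q5 correct, Q6 correct)
        ("very confident", True, False),
        ("very confident", True, True),
        ("somewhat confident", False, True),
        ("very confident", False, True),
    ]

    very_confident = [r for r in records if r[0] == "very confident"]
    missed_one = [r for r in very_confident if not (r[1] and r[2])]

    print(f"{len(missed_one) / len(very_confident):.1%} of highest-confidence "
          f"students missed at least one application question")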

Capstone Assignment Performance

The Capstone assignment consists of 25 questions and is worth a total of 40 points, which represents one-fifth of the course grade. It contains a combination of multiple-choice questions that are auto-graded and open-ended questions that require instructor grading. As the name suggests, this assignment draws on all of the content taught in the course. Students are randomly assigned one of three predetermined topics. They must first identify some factors about a website related to their topic, including its URL, author, the author's reputation and purposes, and the site's purpose and currency. Next they evaluate the site for bias and perceived value to others. Then they use what they have learned about constructing a good search statement to find another site on the same topic as the one they were assigned. For this new site they must also do some analysis, looking for the author, the author's background, and the purpose and currency of the site, and determine bias or balance. Finally, students compare the two sites and indicate which one is better for academic research and why.

Overall, the autumn 2008 study group scored 84.75% on this assignment, with 3 students (5.8%) receiving a perfect score and 4 students (7.8%) scoring below 70%. On the questions relating to bias, 80% of students answered question 8 correctly, a multiple-choice question asking whether the primary purpose of the site is reference, advocacy, or commercial. On question 11, which asks students to indicate whether the site is biased or balanced, 82% answered correctly. Question 24 is an open-ended question that asks students to evaluate a website they found; one of several issues they are required to address is bias. A close review of this question shows that 18.3% of students lost a point because of a problem related to how they addressed bias: they did not address the issue of bias at all, they evaluated bias incorrectly, or they did not support their claim well. Finally, question 25, the most complex question in the assignment, asks students to compare two websites. They are directed to "consider at least one of the following aspect(s) of the site you chose (authority, purpose, content) and provide support" for this answer. Most students (86%) addressed more than one of these facets and received an overall average of 7.78 out of 9 points. While not prompted specifically to do so, 41% of students overtly addressed the issue of balance or bias.

Performance Autumn 2008, Winter 2009, and Prior Quarters

As was the case with the Bias Quiz (Table 1), students in winter 2009 performed slightly better than their autumn 2008 counterparts on the Capstone exercise. The autumn study group scored an average of 84.75% overall, while the winter class scored 88.5% overall. As discussed, the issue of bias is one important component of the Capstone exercise, and on all but one question addressing bias, students in winter 2009 did better than those in autumn 2008 (see Table 3), even though autumn 2008 was the more confident group. Table 3 shows that the largest difference in scores between autumn 2008 and winter 2009 is on the most complex question, question 25, where the autumn class scored 79.9% and the winter class scored 90.31%. Comparison to the two quarters preceding any instructional intervention, autumn 2007 and winter 2008, shows a notable increase in scores on questions 24 and 25, approximately fifteen percentage points between autumn 2007 and winter 2009, and suggests that the intervention had an impact on learning.

Table 3: Capstone Assignment – Performance on bias-related questions before and after instruction

                                                        BEFORE instruction          AFTER instruction
  Question focus                                      Autumn 2007   Winter 2008   Autumn 2008   Winter 2009
                                                        (n=144)       (n=144)       (n=50)        (n=49)
  Average overall success rates ¹                        83.14%        85.08%        84.75%        88.5%
  Q8 Site is reference, commercial, advocacy ²           81.94%        85.42%        80.39%        83.67%
  Q11 Site is biased or balanced ²                       81.94%        84.03%        82.35%        79.59%
  Q24 Evaluation of a site, including issues of bias ³   76.85%        83.1%         86.49%        90.57%
  Q25 Comparison of two sites, which may or may not
      address issues of bias                             74.48%        77.43%        79.9%         90.31%

Notes:
1. All sections were evaluated by the same grader.
2. Sites evaluated vary within sections and between quarters, making comparisons difficult.
3. User selects new site to evaluate, based on search results.
4. Enrollment numbers (n) were different from 2007/08 to 2008/09.

One other comparison of these two sections looks not at the students' scores but at the language used in their answers. Question 25 was open-ended, requiring students to answer in their own words. The authors examined word counts of terms related to reflection and bias to view performance through a different lens.

Table 4 shows that, despite the different questioning strategies used in the earlier assignment (open-ended in autumn 2008, multiple-choice in winter 2009), the appropriate terminology used by students was generally similar in both sections. This suggests that adding self-reflection in either format to a course assignment can carry through to the end of the course.
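The article does not say how the word counts in Table 4 were produced. The following is a minimal sketch of one way such a tally could be made across students' question 25 answers; the term lists are taken from Table 4, while the code and the sample answer are hypothetical.

    # Hypothetical sketch of the word-count analysis behind Table 4: tally how often
    # reflective and MAPit-related terms appear in students' question 25 answers.
    # Term lists follow Table 4; the counting code itself is illustrative only.

    import re
    from collections import Counter

    TERM_GROUPS = {
        "reflective": ["think", "seem", "seems", "believe",
                       "appear", "appears", "appeared"],
        "message": ["bias", "biased", "unbiased", "balanced", "data", "facts",
                    "objective", "neutral", "slant", "slanted"],
        "authority": ["author", "authority", "authoritative", "expert", "expertise"],
        "purpose": ["purpose", "mission"],
    }

    def count_terms(answers):
        """Count occurrences of tracked terms (whole words) across all answers."""
        counts = Counter()
        for text in answers:
            words = re.findall(r"[a-z]+", text.lower())
            for group, terms in TERM_GROUPS.items():
                counts[group] += sum(1 for word in words if word in terms)
        return counts

    # Example with a placeholder answer, not actual student text.
    sample = ["I think the site seems biased; the author states an advocacy mission."]
    print(count_terms(sample))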

Table 4: Capstone Assignment - Occurrence of terminology in question 25

  Terminology                        Autumn 2008    Winter 2009
  Reflective terms total                  40             40
    think                                 16              6
    seem/seems                            12             21
    believe                                9             10
    appear/appears/appeared                3              3
  MAPit terms total                       95            103
    M = Message total                     41             57
      bias/biased/unbiased                15             27
      balanced                             8             11
      data                                 8              8
      facts                                9              8
      objective                            0              2
      neutral                              0              1
      slant/slanted                        1              0
    A = Authority total                   27             25
      author                              10              7
      authority/authoritative             16             14
      expert/expertise                     1              4
    P = Purpose total                     27             21
      purpose                             24             19
      mission                              3              2

Case Study

As a final qualitative analysis, the authors looked more closely at the work of the students in the autumn 2008 section who received the three highest and the three lowest scores on the Capstone assignment. As the culminating experience of the course, performance on the Capstone assignment serves as a microcosm of course performance. Students 1-3 scored highest on the Capstone assignment; students 4-6 scored lowest. The authors examined overall course performance for these six students, described below.

Highest Scores

Student 1 is a rank 4 (senior) majoring in Engineering. He began working on assignments the first day of class.  He completed the extra credit assignment and did 4-5 assignments each week throughout the class. He scored 60% on one assignment other than the Capstone but the rest of his scores were all excellent (80% or higher).  In the Bias Quiz he correctly analyzed two sites for bias and spent 33 minutes completing that particular assignment, but submitted it after submitting the Capstone assignment. Therefore, it is not clear if he was exposed to the bias instructional content before or after completing the Capstone. His self reflection in the Bias Quiz indicated that he needs to pay attention to language, one technique addressed in the bias instructional module.  He did not complete the self-regulated learning survey which was not required. He scored 40/40 on the Capstone.  His site analysis covered authority and purpose very well. Although it cannot be determined exactly how long he spent on the Capstone in total, it was open for 2 days, giving him time to come back to it more than once. He submitted it a full week early. 

Student 2 is a rank 4 student in the School of Social and Behavioral Sciences. She began working on the first day of class and completed 4-6 assignments each week throughout the class. She scored 80% or higher on all assignments in the class. She correctly analyzed one of the sites in the Bias Quiz and incorrectly analyzed the other site. She spent 12 minutes on that assignment. Her self-reflection in that quiz indicated that she needs to put her own opinion aside when analyzing others, not a technique that was taught in the instructional material on bias. She completed the self-regulated learning survey and scored 41 out of 50 on that, indicating a fairly high sense of good study skills. She scored 40/40 on the Capstone, largely addressing issues related to message and content in her site analysis and comparison. These were techniques taught in the bias instructional content. Her assignment was open for 1 hour and 16 minutes, giving a sense that that is how long she took to complete it and review it, if she in fact did.  She submitted the assignment 5 days early.

Student 3 is a rank 2 (sophomore) majoring in Business Administration. She did not complete the extra credit assignment and did not begin working until week 3 of the 4 week class.  She completed five assignments in week 3 and the rest over the last two days of class. She scored 80% or better on all assignments.  She spent 23 minutes on the Bias Quiz, correctly analyzing one site and incorrectly analyzing the other. Her self-reflection in that quiz indicated the need to pay attention to author credentials and authority to address the topic, a technique that is addressed in the instructional material.  On the self-regulated learning survey she scored very high (45/50), ranking her study habits as very good. Student 3 received 40/40 on her Capstone assignment. In her site analysis and comparison on the Capstone assignment she addressed issues related to authority, mission, and content, all techniques addressed in the bias instructional content. Her assignment was open for 29 hours indicating that she probably did not complete it in one sitting and opening the possibility that she took time to review it before submitting it.  She submitted the Capstone assignment the day before it was due.

Lowest Scores

Student 4 is a rank 4 in the College of Education and Human Ecology. He completed 3 assignments the first week, none the second, 2 the third week, and all the rest in the last 5 days of class. He scored 70% or lower on 5 of the course assignments. He spent 11 minutes on the Bias Quiz and did submit it before the Capstone, but not until the afternoon all assignments were due. He did correctly analyze both sites for bias in that assignment. His self-reflection indicated the need to look through an "about us" link on websites to determine the perspective of the author, a technique that is taught in the bias instructional module. On the self-regulated learning survey he rated his own study habits fairly low, with an overall score of 34/50. On the Capstone assignment, student 4 incorrectly identified his site as a reference site rather than an advocacy site. He analyzed his site based on an argument of fact versus fiction, rather than employing any elements of the MAPit strategy. In his site comparison he only briefly mentioned content and gave a very vague answer. He received 26/40 on that assignment, having it open for 30 hours and submitting it early in the morning on the day it was due. Given that several other assignments were completed during the 30 hours this was open, one can assume not all of that time was spent on the Capstone.

Student 5 is a rank 2 in the Allied Medical Professions program.  He completed 2 assignments in the first week of class, and all the rest in weeks 3 and 4.  He scored 70% or lower on 5 of the course assignments. Student 5 had the Bias Quiz open for 30 hours so we don't know how much time was actually spent working on it.  He analyzed one site correctly and one incorrectly for bias. His self-reflection statement in the Bias Quiz is vague, indicating that he needs to learn more about the MAP (referring to the MAPit strategy).  This indicates he was aware of the instructional module, but does not show any understanding of it. Student 5 scored himself the highest of the six students in this case study on the self-regulated learning assessment, giving himself a 48/50 on his good study habits, even though they were not reflected in his work for this course.  Student 5 scored 24/40 on the Capstone assignment.  In his site analysis he incorrectly identified his site as a reference site when in fact it is an advocacy site and he incorrectly identified the site author. In his Capstone site comparison he actually did not do any comparison, instead focusing only on the content of one site.  His writing was poor and somewhat unintelligible.  Like the Bias Quiz, this assignment was open for 2 days, so the authors cannot determine how much time was actually spent on it.  He submitted it 2 hours before it was due.

Student 6 is a rank 2 in Exploration (indicating that she is undecided upon a major). She completed only the extra credit assignment during the first week of class and all the rest of the assignments in the last two days of class. She scored 60% or lower on 4 of her course assignments outside of the Capstone.  On the Bias Quiz she correctly analyzed one site for bias and incorrectly analyzed the other.  She spent 17 minutes on that assignment, completing it 4 hours before it was due. Her self-reflective comment in that quiz did not relate at all to the instructional module. She did not complete the self regulated learning survey so we do not know how she rates her study habits.  She spent 1 hour and 39 minutes and scored 26/40 on the Capstone. She did correctly identify the site she analyzed in the Capstone assignment as a biased advocacy site in the multiple choice questions, but later in her analysis described the site as balanced. Her site analysis was lengthy but largely copied text from the site itself and not particularly relevant to the question.  She did not support her claim of balance.  In her site comparison she did not actually provide any comparison, focusing her comments on purpose and content only. She submitted the assignment 1 1/2 hours before it was due.

While the course management system (CMS) used for this course allows tracking of quiz usage and timing, it does not provide the same information for course content, since much of the content is linked to from, rather than residing within, the system. This makes it impossible to gauge to what extent, if any, students engaged with the instructional material. In some cases their responses seemed to indicate they were not familiar with the material. Additionally, one high scorer and one low scorer completed the Capstone before completing the Bias Quiz, so we do not know whether they had engaged with the instruction about bias at the time they completed the Capstone. The ability to correctly analyze for bias in the Bias Quiz was not a clear indicator of difference, as high and low scorers were equally successful on their site analyses. The self-regulated learning assessment of study habits sometimes clearly reflected actual practice, but not always, so we cannot assess whether students have a firm grasp of their own habits and how those habits might affect their course work.

The authors are not able to draw any conclusions about time on task as an indicator of success since it is hard to judge in some cases, and fairly equal between low and high scoring students in other cases.  Completing work early contributed to success for two of the high scoring students, but one of the high scorers simply finished the work on time.  None of the low scoring students completed their work early and likely suffered from lack of time.

The pattern that most clearly separated the highest scorers from the lowest on the Capstone assignment was overall performance: those who did best on the Capstone also did best over the entire course, and those who did worst performed worse overall. A related pattern is that the students who performed better tended to work over a more extended period of time, whereas the low scorers "crammed" their work into a smaller time frame. Even Student 3, who completed all her work in the last two weeks and yet was a high scorer, distributed the work evenly over those two weeks and did not work in spurts. One benefit of working evenly over time is the ability to do some self-regulation; another is the time it allows for interacting with the instructional components of an online course. Both patterns support the contention reported in Dunning et al. (2003) that better students have better metacognitive and self-regulatory skills.

Conclusions

This study provides some evidence, along with several others cited, that students' self-confidence in their abilities is not reliable and that students are often overconfident. This was noted both in the Bias Quiz, where students with high confidence in their skills missed application questions, and again in the Capstone case study analysis, where some low-scoring students rated their study habits highly.

Addressing an obstacle to student understanding through modeling did enhance performance. There was a notable increase from autumn/winter 2007/08 scores to autumn/winter 2008/09 scores after the new instructional content on bias was added. It remains unclear whether the self-reflection questions also aided in this performance, since both pieces were added simultaneously.

Perhaps the most useful result of this analysis was learning that open-ended reflective questions can successfully be converted to multiple-choice format. By starting with open-ended reflective questions and then analyzing and grouping the responses, the authors were able to fold reflective questions into a large-enrollment online class, where it is helpful to minimize instructor grading. Providing multiple-choice options also helped students pick from appropriate responses and not consider inappropriate ones; it might therefore be better to provide this sort of guided reflection. Using a multiple-choice approach also enables the CMS to provide immediate feedback to students on their performance on the assignment, rather than waiting for the instructor to grade responses. This concrete feedback can help students gain a more realistic perspective on their abilities. Text analysis of the answers to a complex, open-ended question shows that these two approaches to encouraging reflection (open-ended and multiple-choice) produced similar quality and terminology in student answers, and that the multiple-choice approach is a viable option for instructors in large-enrollment online courses.

Suggested Further Research

This study introduces new teaching content together with reflection prompts. Would adding only reflection prompts enhance student learning and performance in an online environment? This would need further examination through an experimental study comparing a group that receives only reflection prompts with a control group. Such a study could build on Lovett's wrapper concept to determine the value of self-reflection for both short-term and long-term improvement in student work in an online course.


References

Bixler, B. A. (2007). The effects of scaffolding students’ problem-solving process via question prompts on problem solving and intrinsic motivation in an online learning environment. PhD diss., The Pennsylvania State University, State College, Penn. Retrieved from: http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-2059/index.html

Chen, N.S., Wei, C.W., Wu, K.T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners’ reflection levels. Computers & Education 52, 283-91. doi:10.1016/j.compedu.2008.08.007

Cook, D. A., Dupras, D.M., Thompson, W.G. & Pankratz, V. S. (2005). Web-based learning in residents’ continuity clinics: A randomized, controlled trial. Academic Medicine 80 (1), 90–97. doi:10.1097/00001888-200501000-00022

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science 12 (3), 83-87. doi:10.1111/1467-8721.01235

Gredler, M.E. & Schwartz, L.S. (1997). Factorial structure of the Self-Efficacy for Self-Regulated Learning Scale. Psychological Reports 81, 51-57.

Kauffman, D. F. (2004). Self-regulated learning in web-based environments: Instructional tools designed to facilitate cognitive strategy use, metacognitive processing, and motivational beliefs. Journal of Educational Computing Research 30 (1-2), 139-61. doi:10.2190/AX2D-Y9VM-V7PX-0TAD

Kauffman, D. F., Xun, G., Kui, X., & Ching, H. C. (2008). Prompting in web-based environments: Supporting self-monitoring and problem solving skills in college students. Journal of Educational Computing Research 38 (2), 115-37. doi:10.2190/EC.38.2.a

Lin, X., Hmelo, C., Kinzer, C.K., & Secules, T.J. (1999). Designing technology to support reflection. Educational Technology Research and Development 47 (3), 43-62. doi:10.1007/BF02299633

Lin, X. (2001). Designing metacognitive activities. Educational Technology Research and Development 49, 23-40. doi:10.1007/BF02504926

Lovett, M. C. (2008). Teaching Metacognition: Presentation to EDUCAUSE Learning Initiative Annual Meeting, 29 January 2008, retrieved from: http://net.educause.edu/upload/presentations/ELI081/FS03/Metacognition-ELI.pdf

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, D.C.: U.S. Dept. of Education, Office of Planning, Evaluation, and Policy Development. Retrieved from: http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Miller, L. (April 29, 2009). Why can’t we concentrate? Salon.com. Retrieved from: http://www.salon.com/books/review/2009/04/29/rapt/

Pace, D. & Middendorf, J. (Eds.) (2004). Decoding the disciplines: Helping students learn disciplinary ways of thinking. New Directions for Teaching and Learning, 98. San Francisco: Jossey Bass.

Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research 66, 543-78. doi:10.2307/1170653

Perez, S. (May 7, 2008). The stats are in: You’re just skimming this article. ReadWriteWeb blog post, retrieved from: http://www.readwriteweb.com/archives/the_stats_are_in_youre_just_skimming_this_article.php

Riding, P., Fowell, S., & Levy, P. (1995). An action research approach to curriculum development. Information Research 1 (1), Retrieved from: http://informationr.net/ir/1-1/paper2.html

Saito, H., & Miwa, K. (2007). Construction of a learning environment supporting learners’ reflection: A case of information seeking on the Web. Computers & Education 49 (2), 214–29. doi: 10.1016/j.compedu.2005.07.001

Stadtler, M. & Bromme, R. (2008). Effects of the metacognitive computer-tool met.a.ware on the web search of laypersons. Computers in Human Behavior 24 (3), 716-37. doi:10.1016/j.chb.2007.01.023

Wopereis, I., Brand-Gruwel, S., & Vermetten, Y. (2008). The effect of embedded instruction on solving information problems. Computers in Human Behavior 24 (3), 738-52. doi:10.1016/j.chb.2007.01.024

 


Manuscript received 12 Oct 2009; revision received 28 Feb 2010.



This work is published under a Creative Commons Attribution-Non-Commercial-Share-Alike License

For details please go to: http://creativecommons.org/licenses/by-nc-sa/3.0/us/


   