MERLOT Journal of Online Learning and Teaching

Vol. 8, No. 1, March 2012


        

An Online Odyssey: A Case Study of Creating and Delivering an
Online Writing Course for Undergraduate Students


Jill A. Singleton-Jackson
Coordinator, Foundations of Academic Writing
Associate Professor, Department of Psychology
University of Windsor
Windsor, ON N9B 3P4 CANADA
jjackson@uwindsor.ca

Julia A. Colella
Assistant Coordinator, Foundations of Academic Writing
Graduate Student, Faculty of Education
University of Windsor
Windsor, ON N9B 3P4 CANADA
colell2@uwindsor.ca

Abstract

Online courses continue to become increasingly prevalent in higher education. The relationship between computers and writing is natural, as computers are now the primary tool for producing writing. The purpose of this case-study paper is to report on the design, development, and delivery of an online course that was created in response to the identification of a need for effective and efficient delivery of writing instruction to large numbers of university students. The paper describes an online academic writing course that evolved from an elective course enrolling 150 students to a required course enrolling over 2,000 arts and social sciences and engineering students at a mid-sized Canadian university. An account of the history of the course is included, along with discussion regarding institutional and student resistance to the course, technological challenges, use of peer review, cheating, course problems, and course successes. Course effectiveness data are also presented. Suggestions are offered for instructors wishing to create similar online writing courses.

Keywords: online writing instruction, academic writing skills, teaching with technology, student peer review, peer assessment

Introduction

Oral and written communication skills are key elements in North American culture and education. While speech acquisition is a fairly automatic process that occurs during normal human development, learning to write is a more deliberate and systematic process that often begins prior to the commencement of formal education and remains a required skill at all levels of academic study. However, the teaching of writing has proven to be one of the most exhausting and enduring challenges facing educators. Historically, writing instruction has been found to be time consuming and labor intensive. Education at all levels nevertheless continues to strive to produce citizens who are proficient, capable writers. As methods of delivery in education evolve and are influenced by computers and the Internet, educators must also consider technology-mediated approaches as a way to provide effective and efficient writing instruction.

Literature Review

Educators readily agree that "the ability to write clearly and fluently is undoubtedly one of the more important skills required of graduates" (Torrance, Thomas, & Robinson, 1999, p. 189). This ability to effectively "present information and ideas through their writing" (Hammann, 2005, p. 15) is critical for academic success. Unfortunately, the acquisition of writing skills is not always successfully achieved. A review of the literature reveals that at all levels of education, deficits in writing ability exist (e.g., Flateby, 2005; Knudson, Zitzer-Comfort, Quirk, & Alexander, 2008; The National Commission on Writing in America's Schools and Colleges, 2003). Singleton-Jackson, Lumsden, and Newsom (2009) discovered that even after undergraduate matriculation and admission to graduate study, students still struggle significantly with attaining what would be considered a proficient level of writing skills.

As educators work to help students achieve writing skills, methods of teaching writing continually evolve. Current pedagogical practices for all content areas, not just writing instruction, often include a technologically based component. According to Barcelona (2009), more than 20% of college students report completing an online course. As stated by Keebler (2009), "as the landscape of higher education changes, the need to incorporate technological advances into a schools [sic] pedagogical design has become more pressing" (p. 546). Moreover, Lee (2010) states that "online learning will continue to proliferate in the near future" (p. 277). The National Commission on Writing in America's Schools and Colleges (2003) put forward the following recommendations to help schools create skillful, self-confident writers: (1) a national writing agenda; (2) time; (3) the measurement of results; (4) technology; and (5) professional development. Mazoué (1999) relates that educators should consider the two principal advantages that online instruction has over traditional approaches to instruction: increased time on task and increased opportunities for collaboration. Further, with regard to writing instruction, as practices change, education "is building a foundation of research on the impact of distance learning technology on composition" (Miller, 2001, p. 424). Although there are some who have concerns about the quality of online learning as compared to traditional face-to-face instruction, research has shown that online learning can be as effective as traditional learning modes (Ward, Peters, & Shelley, 2010), and can produce comparable levels of student satisfaction (Allen, Bourhis, Burrell, & Mabry, 2002). Online instruction can result in learners who function independently while engaging in rich learning experiences. While an uncritical embrace of technology as a means of teaching writing could prove to be unwise, technology affords an enhanced way of helping students make the link to literacy (Scott & Mouza, 2007). A thoughtful, critical evaluation of the advantages of teaching writing with technology is warranted and valuable. There is a natural link between technology and writing instruction, as writing in all forms is, for the most part, currently performed with technological devices. The computer has evolved from a tool used to improve writing to the tool used for writing (Stine, 2004).

Creating an Online Writing Course

Foundations of Academic Writing

To prepare students for the academic challenge of writing at university level, the authors' institution, a mid-sized Canadian university, considered many options for making writing instruction effective and efficient for large numbers of students. The traditional method of weekly lectures combined with small group, writer's workshop-style meetings run by graduate assistants had proven to be inefficient and expensive. Foundations of Academic Writing (FAW) was created in the Summer of 2004 and piloted in the Fall of 2004 as a hybrid course combining traditional weekly lectures with a substantial online component. FAW was offered as a general arts or general social sciences elective, and it initially enrolled 150 students. The hybrid format did not work well, as students focused almost exclusively on what was done in class but did not initiate self-directed learning through use of the online instructional modules. In the Fall of 2005, 500 students enrolled in the course as an elective. At this point, given the large enrollment, it became necessary to change the course so that it could be delivered fully online. While a cursory review of the literature on the effectiveness of online teaching provided a certain level of confidence regarding switching from a hybrid course to an online-only course (e.g., Johnson, 2003; Miller, 2001; Oates, 1981), the most compelling reason for the change was logistics and a desire on the part of the Dean to deliver the course entirely online. It is important to recall that FAW was an experimental initiative, and thus while it may seem somewhat illogical to have switched from a hybrid to an online format in this context, there was a belief that if students had only the online modules and no in-class instruction, they would do the work instead of being "lazy" and relying on the traditional method of the instructor "feeding" them the information. In other words, with no instructor to "feed" them, they would rise to the challenge and show greater self-direction.

In January 2006, a second part of the course (FAW II) was piloted and showed the same pattern of increasing enrollment as an elective. By the Fall of 2007 there were 1,000 students enrolled in part one (FAW I). While FAW I is a sentence-to-paragraph-level course that involves extensive grammar review and instruction in paragraph writing, FAW II is focused on essay writing and includes a brief grammar review as well as an introduction to information literacy, research skills, and the American Psychological Association (APA) and Modern Language Association (MLA) citation systems. Both courses continued to grow and to be offered as electives until the Fall of 2008, when they became required courses for all arts and social sciences and engineering majors. The courses continue to be required, and enrollments since Fall 2008 have been approximately 2,200 per term, divided into sections of approximately 400 students each.

Institutional Resistance

An initiative of the Faculty of Arts and Social Sciences Dean's Office, FAW is a "Dean's course," meaning that it is not housed in any particular department within the Faculty (e.g., English, history, philosophy). The course was originally offered as a general arts credit or a general social sciences credit. Students took the course to fulfill these general requirements for their degree plans. The initial response to the course from the various departments within the institution was mixed, with two camps, one for the course and the other against it, rapidly emerging. From 2004 until 2007, numerous meetings were held in which effectiveness data collected through pre-test and post-test measures were presented to faculty and department heads. Despite statistical evidence of the effectiveness of the course as shown by significant improvement between the students' pre- and post-test scores, resistance to the course remained an issue, and this continues to the present day. The skepticism about the course centered around the choice of online delivery as the pedagogical basis. Numerous faculty from various disciplines voiced concerns regarding the feasibility of teaching writing online. The authors found that it was important to share online effectiveness literature and to collect quantitative effectiveness data in order to counter resistance to the course. It proved critical to address colleagues' doubts with the help of both theoretical and empirical evidence from the literature (e.g., Johnson, 2003; Mannan, 2003; Miller, 2001; Oates, 1981; Tallent-Runnels et al., 2006).

Course Structure

Following the one-time first offering of FAW as a hybrid course (i.e., in mixed mode combining face-to-face and online delivery), FAW has been offered as a completely online course. The multi-section course is overseen by a course coordinator (the first author of the present paper) who is a full-time faculty member. The students are put into sections of approximately 400 students per section. Each section has an assigned instructor and a team of teaching assistants (TAs). An assistant coordinator (the second author) assists the course coordinator and takes a very active role in TA management. There are a large number of TAs assigned to help with FAW (approximately 45-50 per term). The course coordinator is responsible for textbook selection and creation of course curriculum; the master course is copied over into the multiple sections within the online delivery platform to ensure standardization of content across the sections. The midterm and final exam are common to all sections and are taken on campus. Students must produce photo identification to be allowed to sit for the exams.

Initiation of the course at the beginning of every Fall and Winter term is chaotic and work intensive. Currently, FAW I and FAW II are required courses for all first-year arts and social sciences and engineering students. This means that over 2,000 nervous, confused, first-year students who are already overwhelmed with the transition from high school to university are also being asked to figure out how to get set up and begin working in an online course. For most of these students, this is their first experience with online study, and they are, understandably, anxious about what to expect. In order to reduce some of this anxiety and "put faces" on the course sections, the students are required to attend a 1.5-hour orientation session held on campus in the first week of classes. This allows the instructors to make some level of personal contact with the students, explain the rationale and workings of the course, and demonstrate the online platform to them. Instructors hold multiple orientation sessions on the Friday evening and Saturday of the first week of the term so as not to conflict with other on-campus class meeting times that the students may be committed to. Additionally, representatives from Pearson, the publishing company that sells the books and provides the access to the online platform used for the course, attend these orientation sessions in order to field questions about the books, materials, and the online resource. It is important to note here that FAW is not delivered through an institutional learning management system such as Blackboard. Instead, the online content for the course is accessed via Pearson's MyCompLab website; each student must obtain an access code in order to gain access. The access code can be purchased either in a package with the required textbooks or as a standalone product. Thus, including representatives from Pearson as part of the orientation process was a very deliberate decision. They are brought in to talk about materials cost so as to allow the instructors to maintain a "purer" role as academics who are the content and course experts rather than agents whose objective is to attract sales and profit for the bookstore or publisher. It was discovered that in the absence of these live orientation sessions, there is a noticeable deterioration in students' attitudes toward the instructors and the course. It seems that when the instructors hold the live sessions and become real, three-dimensional humans in the eyes of the students, the anxiety and confusion experienced by the students is reduced; they are more likely to seek contact with the instructors and TAs during office hours, and this results in the course launch progressing much more smoothly.

Also during the first week of the term, students complete an online pre-course diagnostic test. They then repeat this test at the end of the term, prior to the final exam. The scores serve as the pre-test and post-test data that are analyzed to determine the level of course impact. The students undertake the coursework in an online environment that allows for weekly modules to be assigned on an electronic calendar. In addition to textbook and grammar handbook readings, the weekly modules also include practice exercises, podcasts, audio and video lectures, and grammar quizzes. Moreover, the students are given writing assignments that they submit online in both draft and final form.

Peer review is a cornerstone of FAW. The students participate in online peer-review groups that require them to review the draft writing assignments of their classmates. Peer review has been described as a form of collaborative learning in that students exchange information and learn from one another through the process of sharing and receiving knowledge, all of which results, ideally, in an improved collective outcome (Falchikov, 2001). While peer review has many uses, it "seems to be the most valuable in the collaborative writing process" (van den Berg, Admiraal, & Pilot, 2006, p. 136, emphasis in original). In the initial step of the peer-review process in FAW, the students are graded on the quality of the feedback they give to their classmates. This part of the process has evolved over time, and is currently marked according to a Four-Step Model created by Colella, Morrison, and Ouellette (2011) (see Appendix A). The students review the criticism they receive and make changes to their drafts in light of the criticism before resubmitting their assignments in final form. The online platform contains an automated grading program for the online grammar quizzes; the instructors and TAs manually grade the peer-review and writing assignments following standardized marking rubrics.

Issues with the Technology

The primary challenges with the course have been associated with the technology. With over 2,000 students working online, even the smallest "hiccup" with the online platform can set off a ripple effect that can require hours of work on the part of the instructors and TAs. All instructors and students in FAW use the same online platform. As previously mentioned, a master course is created by the course coordinator and is duplicated for each section of the course. This ensures that all the course content is standardized; however, it also means that any problem with the platform affects each section of the course. The online platform that has been chosen for the course has gone through several iterations, some better than others in terms of stability and functionality. It has been discovered that while content is the most important element of the online platform, there are several other factors that should be considered as key for delivery of a course such as FAW. Specifically, students' being able to easily set up their online accounts is essential for a smooth start to the course, which, in turn, allows teaching and learning to begin early, without delays. When students create their online accounts in the platform they need to have access to a registration process that is fairly simple, does not involve an excessive number of screens and steps, and is designed so as to prevent students from "getting off the path" and ending up in the wrong course section or arriving at a dead end. Most of the major publishing companies offer online platforms that can be linked to any course, textbook, or custom text, but these platforms are not all created equal.

A second issue related to platform choice has to do with the course management tools that are available within the platform. Again, the different online platforms from the major publishing companies that are geared toward writing instruction have very different capabilities with regard to things such as the gradebook and course communication tools. While there is some degree of trial and error involved in finding the best platform for the needs of a certain instructor or course, having the ability to communicate quickly and easily with the students and being equipped with an easy-to-use mechanism for entering, storing, and disseminating grades, from both the instructors' and the students' viewpoints, are important considerations. The larger the enrollment numbers, the more critical the communication and gradebook features become. As this course has evolved and enrollments have become larger over time, it has been discovered that the majority of technology problems are related to course section size, as the platforms available have not typically been designed to handle large class sizes. In the case of FAW, the large-class-size issue has been resolved by establishing subclasses or "pods" within the sections. In other words, there may be as many as 400 students enrolled in one section of FAW as reflected in the University's enrollment system, but within the online platform the students are broken into groups of 80. (For example, FAW I section 01 has 400 students enrolled according to the registrar's office, so the instructor responsible for the section runs the five online "pods.") The process is transparent to the students, who are unaware they have been subdivided to enhance platform performance.
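To make the pod arrangement concrete, the short Python sketch below illustrates the kind of subdivision described above: one 400-student section broken into consecutive groups of 80. This is only an illustration under assumed inputs; the roster format, pod labels, and pod size here are hypothetical, and the actual grouping is configured within the publisher's platform rather than by code like this.

    # Illustrative sketch only: subdividing one large section into "pods"
    # of 80 students, as described above. Roster entries and pod labels
    # are hypothetical.

    POD_SIZE = 80  # assumed pod capacity, taken from the description above

    def build_pods(section_label, roster, pod_size=POD_SIZE):
        """Split a section roster into consecutively numbered pods."""
        pods = {}
        for start in range(0, len(roster), pod_size):
            pod_label = f"{section_label}-pod{start // pod_size + 1}"
            pods[pod_label] = roster[start:start + pod_size]
        return pods

    # Example: a hypothetical FAW I section 01 with 400 enrolled students.
    section_roster = [f"student_{n:03d}" for n in range(1, 401)]
    pods = build_pods("FAW1-01", section_roster)
    print(len(pods))                              # 5 pods
    print({label: len(members) for label, members in pods.items()})  # 80 each

Because the pods exist only inside the platform, the registrar's record of a single 400-student section is untouched, which is why the subdivision remains invisible to students.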

Course Pedagogical Problems

FAW has not been without issues during its evolution from a small elective course to a large required course. The online format of the course, for example, has presented special challenges with getting students up and running in the course as a result of problems ranging from registration confusion to outdated computers and low levels of technological literacy for some students. Students facing problems such as these are encouraged to see instructors and TAs for help getting started in the course, and are referred to on-campus computer labs and the library as places to access up-to-date computers. This has, to some degree, solved these problems. Due to the large enrollments in FAW, online delivery is the only way it can practicably be offered, and the use of an online pedagogy leads to measurability of learning through pre- and post-test scores, as discussed later in this paper. Further, delivering the course online allows flexibility that would be unheard of in a traditional course in terms of when and how students work, organization and management of peer-review groups, and speed of providing feedback to students. It is important to select an online platform that meets the specific needs of the students, the instructor, and the institution while considering the campus resources available to support an online course. Running a small section of the course as an initial foray into teaching writing online can serve to identify problems with the platform and inform modifications needed in order for the course to succeed and grow. Other pedagogical (rather than technological) issues encountered in the course fall into the categories of peer-review issues, cheating, resistance to the requirement of the course, and the challenge of ensuring consistency among TAs. Each of these issues is discussed in turn below.

Issues with Using Peer Review. Three key points about using peer review have emerged over the course of the development and repeated delivery of FAW since 2004. These are as follows: (1) marks must be assigned for the draft submission; (2) instruction for giving constructive feedback cannot be ignored or neglected – this must be integrated as part of the course curriculum; and (3) students must have the skills needed to critically analyze the feedback they receive, along with assurance that they are free to disagree with or reject invalid feedback. First, while it would be ideal for students to appreciate the value in subjecting their draft assignments to peer review and be motivated to do so to obtain feedback to help them improve their final-version assignments, many did not participate to the degree necessary when submission of the draft assignments was non-mandatory and presented simply as being needed for the peer-review task (for which marks were allocated). It became clear early in the history of the course's development that marks would have to be given for both the uploading of the draft and for the peer-review task. Once marks became attached to the drafts, participation in draft submission rose from approximately 60% to 85%.

Second, it is not safe to simply assume that students know how to give or receive constructive criticism. As argued by VanDeWeghe (2004), "the ability to give appropriate and helpful feedback to other writers is a learned set of strategies and skills that all developing writers must be taught" (p. 95, emphasis in original). This aspect of peer review – giving appropriate and helpful feedback – is part of the enduring problem of teaching students to both write and think. Most first-year university students do not have prior experience with peer-review work. Thus it is important when planning an online writing course that uses peer review to designate time to teach students how to do peer review and then give feedback on the feedback. In FAW, students' feedback is evaluated for quality based on clear criteria.

Third, students have to learn when to accept and when to reject peer feedback. Students sometimes lack the confidence to reject bad advice, and often fall into the trap of actually adding mistakes to an assignment by applying incorrect feedback. Other students refuse to take heed of good advice from peers as they are receptive only to the "expert" feedback of an instructor or TA. Adequate preparation together with monitoring by instructors and TAs can help to mitigate these pitfalls of student peer review. It is important to communicate to students that they ultimately have to use their textbook, grammar handbook, online tools, and personal knowledge base in order to decide whether or not to accept a peer's suggestions. The problems encountered with peer review in an online course would likely be similar in a face-to-face course requiring students to engage in peer review. At the same time, instructors of online courses who teach students how to work with peer review arguably face the same challenges as when teaching students anything online. All instruction requires clear and organized communication, irrespective of the content or delivery system.

Cheating. Cheating is an ongoing issue in academia, both for courses delivered online and those delivered face to face. In a study by Burrus, McGoldrick, and Schumann (2007), 50-75% of students indicated through self-report that they had previously cheated. Online courses are susceptible to some unique forms of cheating as much of the work is done outside of a proctored classroom, and though it might seem logical to expect that online courses offer not only unique but also greater numbers of opportunities for cheating than traditional courses, there is evidence to suggest this is not an accurate assumption (Sewell, Frith, & Colvin, 2010). The literature on cheating reveals that online course delivery does not necessarily inspire increased cheating (Burrus et al., 2007; Krsak, 2007).

In FAW, the most dramatic instances of cheating have manifested themselves in the weekly online quizzes. Each student's quiz answers are marked automatically upon submission, with immediate feedback displayed to the student in the form of answers correct and answers incorrect, accompanied by references that should be consulted to address the knowledge gaps indicated by the incorrect responses. It was discovered that students were exploiting this by getting together in groups and taking turns at attempting the quizzes, with one member of the group submitting his/her answers and then using the feedback as an "answer key" to share with the others. Once the instructors became aware of this, the quiz settings were modified so that students would not receive feedback until the due date for the quiz had passed. After the due date, the feedback feature was re-enabled and students could log on and view their marked quizzes with complete feedback. Unfortunately, the feature allowing quiz feedback to be delayed was eliminated in the most recent version of the online platform. The quiz cheating began again; to alleviate the problem, the weekly quizzes were changed so that they did not carry as much weight toward the final course mark, whereas the midterm and final exam weightings were increased. Quiz marks for FAW have consistently been correlated with exam scores and final course marks. It was communicated to students that the weekly quizzes were "homework" that, if done conscientiously and honestly, would likely increase their chances of scoring well in the exams and in the course overall. Online quizzes, in this situation, cannot be controlled any more than any other out-of-class assignment. Instructors do not know how much "help" (or copying) is taking place when students do out-of-class work, regardless of whether it is submitted online or on paper. Instructors of both online and traditional courses cannot be absolutely certain that a student turning in work has actually done the work him/herself. This lack of authentication is an additional driver for the inclusion of on-campus midterm and final exams in FAW – students must physically come to the University and show their identification cards in order to sit for their exams.

Plagiarism is another form of cheating that plagues academics because of the high demand placed on students to produce written work in all their courses. Plagiarism also takes place in traditional and online courses, and the best defense in both cases is diligence. Instructors and TAs need to take the time to get to know their students' writing. In FAW, students are divided into class segments, each with an assigned TA, based on the first letter of the students' last names. FAW I TAs are assigned 80 students each, and FAW II TAs are assigned 40 students each (because in the latter the writing assignments are longer and take more time to mark). Because each TA marks the same students' writing week after week, if there is a drastic change in the quality of the submitted work or in the "voice" of the writer, the TA will be likely to notice and can work with the course instructor to take action to determine if plagiarism has occurred.
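The alphabetical segmenting described above can be pictured with a brief Python sketch: sort the roster by last name and hand out consecutive blocks to the TAs. The names, the TA labels, and the tiny per-TA capacity used in the example are hypothetical; FAW itself uses roughly 80 students per TA in FAW I and 40 in FAW II.

    # Illustrative sketch only: assigning students to TA segments by last
    # name, as described above. Roster entries, TA labels, and the small
    # capacity are hypothetical examples.

    def assign_ta_segments(roster, tas, students_per_ta):
        """Sort students by last name and give consecutive blocks to TAs."""
        ordered = sorted(roster, key=lambda s: s["last_name"].lower())
        segments = {}
        for index, ta in enumerate(tas):
            start = index * students_per_ta
            segments[ta] = ordered[start:start + students_per_ta]
        return segments

    roster = [
        {"last_name": "Nguyen", "first_name": "A."},
        {"last_name": "Brown", "first_name": "B."},
        {"last_name": "Zhang", "first_name": "C."},
        {"last_name": "Adams", "first_name": "D."},
    ]
    segments = assign_ta_segments(roster, ["TA_1", "TA_2"], students_per_ta=2)
    # TA_1 is assigned Adams and Brown; TA_2 is assigned Nguyen and Zhang,
    # so each TA sees the same students' writing week after week.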

Ensuring Consistency Among TAs. There are approximately 45-50 TAs assigned to FAW each term. Criteria for becoming a FAW TA include having completed both FAW I and FAW II with an "A" or higher (the institutional marking system includes A+ marks) and being recommended for the position by the student's former instructor and TA from when he/she was a FAW student. Hiring successful former FAW students as TAs ensures they are familiar with the online platform and course content, and has the added advantage that they bring to the role a unique insight into the course. It is hoped that their experience being on "the other side" as students will help them be effective as TAs in the course. TAs are also required to attend a four-hour orientation/training session each term. This is required of all TAs, both new and returning. A TA manual has been written by former TAs that includes information about marking, office hours, keeping track of work time, how to handle difficult students, and so on.

Despite extensive training and the establishment of clear criteria meant to ensure TA quality, there still exists the reality of having 45-50 TAs per term working with over 2,000 students. Marking inconsistencies between TAs regularly arise, causing issues with students who feel they are being disadvantaged because their TA is a "hard marker" while their friend or roommate has been assigned an "easy marker." Keeping the TAs consistent in their marking is therefore an ongoing challenge. In striving for standardization across the course, the instructors hold regular meetings with TAs, carry out spot checks of graded assignments, and have Head TAs oversee and provide continuing support and guidance to the more junior TAs. The management of the TAs alone creates significant administrative responsibilities for the course coordinator and assistant coordinator. Again, however, these issues are not unique to the online environment, and would essentially be the same for any large course.

Course Successes

Pedagogical Power

In this paperless class, the use of an online platform to facilitate the creation of online peer-review groups as well as the tracking of assignments, marks, and submission times and dates makes it possible to maintain a database of grammar quizzes, peer reviews, writing assignments, due dates, and scores. This would be very cumbersome – if not impossible – to do with paper-based assignments; the hours it would take to manually collect, track, and mark this many papers is almost unfathomable. The online delivery and automation allows extensive writing and feedback to be provided without creating an unmanageable marking load for the instructors and TAs. Grading writing assignments generates a fair amount of repetition in comments and corrections given to students, since they often make many of the same errors. The online platform permits the building of a collection of comments that TAs and instructors can simply click on to reuse when marking. This is much less time consuming than having to write detailed feedback from scratch. Further, by using online peer review it is possible to give multiple writing assignments to over 2,000 students. Specifically, the students in FAW complete eight writing assignments in a 12-week term, and do each of them twice – once as a draft and once as a final version. This means the total number of writing assignments submitted could be in the order of 32,000, which would be impossible to manage if the assignments were all submitted as hard copies. The logistics alone of making drafts available and distributing them to all the members of the peer-review group would be insurmountable when working with cohorts of this size. If all students upload drafts, each student has the opportunity to perform peer review 32 times. Also, as a result of automation and online tools that enhance marking speed and efficiency, the students have grammar quizzes assigned weekly that are graded immediately upon submission (timely feedback enhances learning). They also have the quality of their peer-review feedback evaluated by TAs for each review they submit, on top of receiving marks for every revised, final-version assignment they submit. FAW thus affords students abundant opportunities for practice in writing, rewriting, and giving and receiving feedback, giving them a good start down the road toward becoming proficient writers.
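As a quick back-of-the-envelope check of the scale described above, the short Python calculation below reproduces the figures cited in the paragraph. The peer-review group size of five (so four drafts reviewed per assignment) is an assumption introduced here solely to account for the 32 review opportunities; the paper does not state the group size.

    # Back-of-the-envelope check of the workload figures cited above.
    # The group size is an assumption, not a stated feature of FAW.

    students = 2000      # approximate enrollment per term
    assignments = 8      # writing assignments in a 12-week term
    versions = 2         # each assignment submitted as a draft and a final
    group_size = 5       # assumed peer-review group size

    total_submissions = students * assignments * versions
    reviews_per_student = assignments * (group_size - 1)

    print(total_submissions)    # 32000 submissions handled online per term
    print(reviews_per_student)  # 32 opportunities to perform peer review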

Effectiveness Data

The effectiveness of FAW has been evaluated using a pre-test/post-test design. Students complete pre-tests in the first two weeks of a term, and post-tests within the last two weeks of a term. The pre- and post-tests consist of multiple-choice questions that sample the content of the entire course. Students take these tests online. Points are awarded for completing the tests in an effort to maximize participation; however, this method of awarding points for completion represents a threat to the validity of the conclusions in that students are not required to perform well on the tests in order to obtain their points. Thus, some students may merely answer the questions haphazardly to obtain the points, investing minimal time and effort in the process. Table 1 contains the pre-test and post-test means for the six most recent semesters, along with sample sizes and standard deviations. For each semester other than Winter 2008, the difference is significant, with students improving on average; the corresponding effect sizes (Cohen's d) range from 0.27 to 0.78, with most falling into the small effect range. It is unclear why no significant difference was found in Winter 2008, although it is worth noting that the course was offered as an elective in that semester.

With regard to the threat to validity arising from the awarding of pre-test and post-test completion points, there was one previous semester during the phase when the course was still being developed, namely Fall 2005, in which the decision was made to award points based on actual performance so as to maximize effort and participation. For that semester, scores obtained on both the pre- and post-tests counted toward the final grade. When this was done, the difference between pre- and post-test scores was much more dramatic (pre-test M = 66.96, SD = 12.55; post-test M = 80.93, SD = 20.58; Cohen's d = 1.11; N = 331). This may be taken as evidence that the actual impact of the course is likely much greater than is revealed in Table 1. However, there are obvious issues with counting the actual mark on a pre-test toward a student's course grade, so the pre-test needs to be scored as a completion mark.

Table 1. Pre- and post-test scores for six recent semesters

Semester       N       Pre-test Mean   Pre-test SD   Post-test Mean   Post-test SD   Cohen's d
Winter 2010    361     69.02           11.30         72.49            15.22           0.31
Fall 2009      1,326   70.88            9.47         74.55            12.96           0.39
Winter 2009    317     68.19           11.20         71.96            13.92           0.34
Fall 2008      1,535   63.10           12.20         72.57            15.27           0.78
Winter 2008    163     62.80           13.44         62.22            18.47          -0.04
Fall 2007      541     61.66           12.67         65.08            18.88           0.27

Note. All differences are significant (p < .05) except for Winter 2008. Cohen's d values were computed by subtracting the pre-test mean from the post-test mean and dividing by the pre-test standard deviation.
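For readers who wish to verify or reuse the effect sizes, the note's formula is straightforward to reproduce. The short Python sketch below recomputes the d values from the means and pre-test standard deviations in Table 1; it is simply an illustration of the calculation, not the software used in the original analyses.

    # Effect-size calculation described in the note above:
    # Cohen's d = (post-test mean - pre-test mean) / pre-test SD.
    # Values are taken from Table 1.

    def cohens_d(pre_mean, post_mean, pre_sd):
        """Standardized mean gain, using the pre-test SD as the standardizer."""
        return (post_mean - pre_mean) / pre_sd

    table_1 = {
        "Winter 2010": (69.02, 72.49, 11.30),
        "Fall 2009":   (70.88, 74.55, 9.47),
        "Winter 2009": (68.19, 71.96, 11.20),
        "Fall 2008":   (63.10, 72.57, 12.20),
        "Winter 2008": (62.80, 62.22, 13.44),
        "Fall 2007":   (61.66, 65.08, 12.67),
    }

    for semester, (pre_m, post_m, pre_sd) in table_1.items():
        print(semester, round(cohens_d(pre_m, post_m, pre_sd), 2))
    # Reproduces the Table 1 values: 0.31, 0.39, 0.34, 0.78, -0.04, 0.27.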

Furthermore, past analyses have shown that some students benefit more from taking the course than other students. For instance, in the same semester previously discussed (Fall 2005), participants were divided into four equally sized groups based on their pre-test scores. Those with the lowest pre-test scores showed the greatest improvement (Cohen's d = 1.93) when compared to those with the highest pre-test scores (Cohen's d = 0.43). This is not surprising, as it can be argued that students who already have a good understanding of the material have less to gain from the course than those with a weaker background. This pattern has repeated itself consistently since 2005.

Finally, a question that one might legitimately ask has to do with the extent to which scores on the post-test relate to overall course performance and actual writing ability. In other words, do students apply the rules and principles of writing as assessed by a multiple-choice test when doing actual writing assignments? The answer is, "It would seem that they do." In the most recent large-enrollment semester (Fall 2009), post-test scores correlated positively and significantly with all writing assignments (r = .244 for overall writing score based on five assignments). Post-test scores also correlated positively with final exam scores (r = .38).

Overall, it is believed that the data and analyses support a tentative conclusion that the online writing course is indeed effective. When the students in the course are provided with an incentive to perform well on the pre- and post-tests, they show substantial improvement between the tests. Additionally, the post-test scores relate positively to actual writing ability as judged by TAs. Further research should focus on writing ability within the classroom in an effort to determine whether or not students who do well in the online writing course also tend to perform well on writing assignments in subsequent courses, later in their programs of study. More importantly, the course's effectiveness could be further validated by finding a positive association between improvement during the course (based on differences in pre- and post-test performance) and later writing ability.

Conclusion

Approaches to teaching in all subject areas and disciplines have changed as technology has advanced and adapted to meet the needs of students and instructors. This case study has been presented as an example of one way in which a university can approach the goal of delivering effective and efficient writing instruction to large numbers of students in an online environment. There have been many changes and revisions along the way as the course described in this paper has changed and grown. The course was a Dean's initiative and thus had the benefit of administrative support, but it was, unfortunately, not wholeheartedly embraced by faculty. The main challenges faced in the course have been related to technological issues, cheating, faculty and student resistance to the course as a degree requirement, and TA management and training in standardized marking. The major strengths of the course lie in the pedagogical power of the adopted approach to teaching writing online. The use of computer and Internet technology allows greater assignment opportunities and the ability to provide feedback to a large number of students in a more rapid manner than could ever be accomplished in a traditional, paper-and-pencil course. The comparison of pre- and post-test data from six semester offerings of the course as presented in the paper attests to the effectiveness of the course in improving students' writing abilities and skills.

References

Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. American Journal of Distance Education, 16(2), 83-97. doi:10.1207/S15389286AJDE1602_3

Barcelona, R. (2009). Pressing the online learning advantage: Commitment, content, and community. The Journal of Continuing Higher Education, 57(3), 193-197. doi:10.1080/07377360903262218

Burrus, R. T., McGoldrick, K., & Schumann, P. W. (2007). Self-reports of student cheating: Does a definition of cheating matter? Journal of Economic Education, 38(1), 3-16. doi:10.3200/JECE.38.1.3-17

Colella, J., Morrison, O.-P., & Ouellette, D. (2011). FAW peer review guide. Boston, MA: Pearson.

Falchikov, N. (2001). Learning together: Peer tutoring in higher education. London: RoutledgeFalmer.

Flateby, T. L. (2005). Maximizing campus responsibility for the writing assessment process. About Campus, 9(6), 22-25. doi:10.1002/abc.114

Hammann, L. (2005). Self-regulation in academic writing tasks. International Journal of Teaching and Learning in Higher Education, 17(1), 15-26. Retrieved from http://www.isetl.org/ijtlhe/pdf/IJTLHE14.pdf

Johnson, E. J. (2003). The role of online immediacy behaviours in student writing improvement and satisfaction in a web-based undergraduate course. (Doctoral dissertation). Retrieved from ProQuest Dissertations & Theses database. (UMI No. 3100597)

Keebler, D. W. (2009). Online teaching strategy: A position paper. MERLOT Journal of Online Learning and Teaching, 5(3), 546-549. Retrieved from https://jolt.merlot.org/vol5no3/keebler_0909.htm

Knudson, R. E., Zitzer-Comfort, C., Quirk, M., & Alexander, P. (2008). The California State University Early Assessment Program. The Clearing House, 81(5), 227-231. doi:10.3200/TCHS.81.5.227-231

Krsak, A. M. (2007). Curbing academic dishonesty in online courses. In C. P. Ho (Ed.), Proceedings of the 12th Annual Technology, Colleges & Community Worldwide Online Conference (TCC 2007) (pp. 159-170). Honolulu: University of Hawai'i at Mānoa. Retrieved from http://etec.hawaii.edu/proceedings/2007/krsak.pdf

Lee, J.-W. (2010). Online support service quality, online learning acceptance, and student satisfaction. The Internet and Higher Education, 13(4), 277-282. doi:10.1016/j.iheduc.2010.08.002

Mannan, S. J. (2003). A different place: Student learning in an online course. (Doctoral dissertation). Retrieved from ProQuest Dissertations & Theses database (UMI No. 3103005)

Mazoué, J. G. (1999). The essentials of effective online instruction. Campus-Wide Information Systems, 16(3), 104-110. doi:10.1108/10650749910281269

Miller, S. K. (2001). A review of research on distance education in Computers and Composition. Computers and Composition, 18(4), 423-430. doi:10.1016/S8755-4615(01)00073-1

Oates, W. (1981). An evaluation of computer-assisted instruction for English grammar review. Studies in Language Learning, 3(1), 193-200.

Scott, P., & Mouza, C. (2007). The impact of professional development on teacher learning, practice and leadership skills: A study on the integration of technology in the teaching of writing. Journal of Educational Computing Research, 37(3), 229-266. doi:10.2190/EC.37.3.b

Sewell, J. P., Frith, K. H., & Colvin, M. M. (2010). Online assessment strategies: A primer. MERLOT Journal of Online Learning and Teaching, 6(1), 297-305. Retrieved from https://jolt.merlot.org/vol6no1/sewell_0310.htm

Singleton-Jackson, J. A., Lumsden, D. B., & Newsom, R. (2009). Johnny still can't write, even if he goes to college: A study of writing proficiency in higher education graduate students. Current Issues In Education, 12(10). Retrieved from http://cie.asu.edu/ojs/index.php/cieatasu/article/download/45/9

Stine, L. (2004). The best of both worlds: Teaching basic writers in class and online. Journal of Basic Writing, 23(2), 49-69. Retrieved from ERIC database. (EJ684128)

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, M. S., & Liu, X. (2006). Teaching courses online: A review of the literature. Review of Educational Research, 76(1), 93-135. doi:10.3102/00346543076001093

The National Commission on Writing in America's Schools and Colleges. (2003). The neglected "R": The need for a writing revolution. College Entrance Examination Board. Retrieved from http://www.collegeboard.org/prod_downloads/writingcom/neglectedr.pdf

Torrance, M., Thomas, G. V., & Robinson, E. J. (1999). Individual differences in the writing behaviour of undergraduate students. British Journal of Educational Psychology, 69(2), 189-199. doi:10.1348/000709999157662

van den Berg, I., Admiraal, W., & Pilot, A. (2006). Designing student peer assessment in higher education: Analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135-147. doi:10.1080/13562510500527685

VanDeWeghe, R. (2004). "Awesome, dude!" Responding helpfully to peer writing. English Journal, 94(1), 95-99.

Ward, M. E., Peters, G., & Shelley, K. (2010). Student and faculty perceptions of the quality of online learning experiences. The International Review of Research in Open and Distance Learning, 11(3), 57-77. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/867/1610

Appendix A: Peer Review Marking Rubric

The Four-Step Model

Each comment must include the following:

Step 1 – Identification of the error (specify to the author where the error is located).

Step 2 – An explanation of why it is an error.

Step 3 – A way of showing the author how to correct the error.

Step 4 – A reference to the page in The Little, Brown Compact Handbook where the rule is found.

A comment will lose its mark if any of the four steps is missing; every constructive comment must include all four steps. Furthermore, even a comment that follows the Four-Step Model will not receive a mark if it contains certain faults. To avoid these faults, each comment must:

  1. Not have spelling mistakes included in the comment.

  2. Not be degrading or detrimental in content.

  3. Not give an opinion as a correction.

  4. Use proper terminology.

  5. Indicate clearly to the student where the error is located.

  6. Avoid correcting something that does not need to be corrected.
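Although peer-review comments in FAW are marked by hand, the rubric above is explicit enough to be represented in code. The Python sketch below is purely illustrative: it models a comment as the four required steps and flags any step left blank. The field names are hypothetical, and the sketch makes no attempt to automate the qualitative judgments (tone, correct terminology, validity of the correction) that require a human marker.

    # Illustrative representation of the Four-Step Model above; field names
    # are hypothetical, and only the presence of each step is checked.

    from dataclasses import dataclass

    @dataclass
    class PeerReviewComment:
        error_location: str        # Step 1: where the error is located
        explanation: str           # Step 2: why it is an error
        suggested_correction: str  # Step 3: how to correct the error
        handbook_page: str         # Step 4: page in The Little, Brown Compact Handbook

        def missing_steps(self):
            """Return the step numbers left blank in this comment."""
            steps = [self.error_location, self.explanation,
                     self.suggested_correction, self.handbook_page]
            return [i + 1 for i, value in enumerate(steps) if not value.strip()]

    comment = PeerReviewComment(
        error_location="Paragraph 2, sentence 3",
        explanation="The subject and verb do not agree in number.",
        suggested_correction="Change 'the results shows' to 'the results show'.",
        handbook_page="",  # Step 4 omitted, so this comment would lose its mark
    )
    print(comment.missing_steps())  # [4]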

 






This work is published under a Creative Commons Attribution-Non-Commercial-Share-Alike License

For details please go to: http://creativecommons.org/licenses/by-nc-sa/3.0/us/

   