With the adoption of the Internet in college instruction, the primary distance education mechanism began to shift from videotaped lectures and telecourses to Internet-based curriculum delivery. Increasingly, institutions are embracing online learning as a way to provide greater student access and realize savings (Guskin & Marcy, 2002). Technology has moved from a novelty to a tool that supports scholarly missions, and some studies have indicated specific gains on achievement tests (Chyung, 2001). The purpose of this study was to determine to what extent the application of student-centered teaching practices affects student learning, course completion, and overall satisfaction in an online composition course.
Review of the Literature
The research on student-centered teaching approaches revealed that teaching approaches deemed student-centered in the traditional classroom are considered student-centered in the online classroom as well (Boaz, Elliott, Foshee, Hardy, Jarmon & Olcott, 1999). For example, it was widely agreed that basic student-centered approaches include understanding the various student learning styles and the need to provide instruction in a variety of forms in order to reach as many learning styles as possible and to foster content retention (Chyung, 2001; Entwistle, 2001; Palloff & Pratt, 1999). That is, using instructional methods such as peer interaction, group work, course discussion, multi-media, and individual reflection contributes to creating a more effective learning environment and fostering a learning community (Shank, 2004). Furthermore, the basic student-centered approaches include students as active participants in the learning process and instructors who take on the responsibility of facilitating the learning process (Eison, 2002; Palloff & Pratt, 2000; Shank, 2004). Personality and learning style self-knowledge can create responsibility on the part of the student both for the approach to learning and for participating in the learning process, thus making students active participants (Gardiner, 2002).
It is important for instructors to note that students learn in different ways, either due to personality or learning style preference (Lawrence, 1984). That is, one type of instructional method or practice does not serve the needs of all, or most, students (Barbe & Swassing, 1988; Dunn & Dunn, 1978; Hiemstra & Sisco, 1990; Jensen, 1987; Keefe, 1979; Myers, 1974). Furthermore, Eison (2002) emphasized the importance of building course components on sound instructional principles that appeal to a variety of learning styles. According to Lick (2002), the shift to a learning-centered focus will require recognition of the components of the learning paradigm. This learning paradigm includes student focus, faculty facilitators, multiple modalities, integrated information sources, output orientation, possible classroom alternatives, individualized delivery, collaborative learning and group interaction, distributed infrastructure, learning readiness, flexibility, innovation and creativity, and a transformation approach and process that shifts to learning visions and technology.
Jamieson (2004) explained that the student-centered approach is considered more flexible for students, but this approach also presents new challenges for faculty and students. Diaz (2002) noted that learning styles may impact the online learning environment in several ways when examining the drop rate. Diaz further noted that, in general, online students are more likely to be independent learners than dependent learners. Gardiner (2002) indicated that conditions that foster learning must include learning that is active and learners who are self-aware and self-motivated. Gardiner defined active learning as learning in which knowledge is created through deep processing: a pattern in which old knowledge is integrated with new knowledge, creating the level of achievement identified with the deep approach. If students have the necessary self-knowledge, it is reasonable to assume that, at some point, students will also have an increased level of motivation for success that need not be fostered entirely by the facilitator.
Based on the findings in the literature, several course enhancements were made to one section of an online composition course, and the evaluation included a comparison between two online course sections: one control course and one experimental course. The course enhancements for the experimental section included various forms of multi-media enhancement, such as links to two personality and learning style assessment websites, multi-media support assignments from the grammar handbook, Internet search activities, peer review, group work, and discussion posts.
The experimental group participated in the enhanced course and the other group participated in the static control course (Johnson, 2007). Initially the control group contained 25 students: 8 males and 17 females. The experimental group contained 26 students: 6 males and 20 females. At the end of the term there were 12 students in the control group and 17 students in the experimental group. The students self-registered for each section of the online course, without being aware of any potential instructional variations.
The Criteria for the Effectiveness of the Assessment Design
To conduct an evaluation of the effectiveness of the assessment design, criteria representing the ideal online course were established. The criteria were developed to define what is good, of value, or the ideal state for the online composition course. The ideal course criteria were established by linking the English department’s course objectives to performance indicators such as the grading scale and overall course completion. The English department’s objectives for an effective course covered three areas: skill development, knowledge development, and attitude development. The area of skill development was further divided into communication, general intellectual abilities, and social functioning.
STANDARD LEARNING OUTCOMES for ENG 101 Course
1. Communication: Through writing assignments and instructor to student e-mail communication.
2. General Intellectual Abilities: Through analysis of writing and reading materials. Creative as well as academic work is stressed.
3. Social Functioning: Through e-mail contact.
Across the curriculum, knowledge and understanding are emphasized. Examinations, quizzes, and responses are used to reflect topical and conceptual understanding as well as course content.
Interfacing between instructor and student, through e-mail and Q & A, helps build student confidence, rapport, and tolerance.
In addition, instructor observations provided feedback using a feedback loop of observations and student comments. The overall goal of the recommendations was to make the course more successful in meeting its objectives and the established criteria. Based on the findings of the gap analysis and the comparison of group scores, the researcher prepared recommendations that were linked to the findings and included action strategies to modify and improve the student-centered approaches or to discontinue the new enhancements.
Two advisory panels, formative and summative, were established to assist with the development and validation of the criteria, assessment design, assessment instruments, and the overall evaluation report. The formative panel members were familiar with the content area in general and came from various higher education institutions. The summative panel members had expertise in online education and composition courses and also came from various higher education institutions. All members of the formative and summative panels received the necessary background information for this evaluation study. The proposal relating to this study was sent to all panel members. In addition, the online composition course enhancements for the experimental group were submitted to both panels. The proposal, course modifications, and draft criteria were sent via e-mail to members of both panels. A follow-up telephone call was made to all panel members to clear up any possible areas of vagueness. The panel members then provided input, and the iterative review and modification process was employed until no new comments were brought forth.
The Kirkpatrick (1998) model was used for this evaluation study. The final effectiveness criteria included indicators for all four of Kirkpatrick’s evaluation levels. The first level included strategies for measuring student reaction to the course enhancements using a questionnaire. The second level included strategies to assess student learning; at this level, learning in both the control and experimental groups was measured. The third level involved measuring a change in behavior or the application of new skills; these Level 3 indicators were measured for both the control and experimental groups. The fourth level measured the final results of both courses: overall instructional effectiveness as assessed through final grades, course completion rates, and feedback from the expert reviewer.
The assessment design included strategies and approaches designed to obtain information relating to the Kirkpatrick levels. The approaches proposed included a student satisfaction survey, a pre- and post-assessment tool, a protocol for expert review, feedback-loop instructor observations, and performance comparisons.
The assessment design provided the format by which effectiveness could be measured. It described what information was needed and how the information would be collected as it related to each of Kirkpatrick’s levels of evaluation, and it was based on the standard learning outcomes for this composition course.
Criterion and Assessment Design
1. Reaction: assessing what students thought of the course (questionnaire)
Positive online experience
Specially designed survey for Experimental Group
Standard course evaluations from the control and experimental groups, and feedback loop
2. Learning: measuring the principles, facts, skills, and attitudes (pre-test, post-test; C average as acceptable)
Test scores of control and experimental group
Open ended reflective questions about what students have learned
3. Behavior/Transfer: measuring the student performance as it relates to course objectives
Application of learning at 70% (C average) and new skills (technology)
Comparing assignment grades of the control and experimental group: essays, midterms, finals, etc.
Volume of Group e-mails
Volume of Private e-mails
Expert Peer Review
4. Results: relating results to the course objectives and other criteria for effectiveness
C average for grades and the researcher’s historical average course completion rates
Comparison of course completion rates for control and experimental group.
Expert Peer Review
The following are the results of the evaluation study, with an overall focus on the effectiveness of the new approaches.
One measure of the effectiveness of the courses is the retention and completion rates for the two groups, which can be considered indicators of a Kirkpatrick Level 4 measure. There are two withdrawal rates to consider. The first reflects students withdrawing in the first three weeks of the semester, during the add/drop period. The second identifies the percentage of students who began the course but did not successfully complete it. Therefore, the two rates to consider are the early withdrawal rate and the overall retention/completion rate. The experimental group lost approximately 19% of its students during the add/drop period, compared to 40% withdrawals for the control group. The experimental group lost an additional 15% during the term, while the control group had an additional 8% attrition. Overall, the control group had a completion rate of 52%, while the experimental group’s rate was over 65%. The data revealed that the control group’s add/drop withdrawal rate was approximately twice that of the experimental group, and the overall completion rate for the experimental group was over 13 percentage points higher than that of the control group. These data were compared to the withdrawal rates for eight prior sections of the online English courses taught by the researcher. The average completion rate for those eight sections over a 4-year period was 59.8%. The control group’s completion rate of 52% was below this historic rate of almost 60% and below the rates for three of the eight past groups. The experimental group’s completion rate of over 65% was higher than the average and higher than six of the eight past sections.
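As a quick arithmetic check (a sketch, not part of the original study), the reported withdrawal and completion rates can be reproduced from the initial enrollments given earlier (25 control, 26 experimental); the student counts below are inferred from the rounded percentages in the text.

```python
# Reproduces the reported retention/completion arithmetic from the
# initial enrollments (25 control, 26 experimental). Student counts
# are inferred from the article's rounded percentages.

def completion_stats(initial, add_drop_losses, term_losses):
    """Return (add/drop withdrawal rate, completion rate) as percents."""
    completed = initial - add_drop_losses - term_losses
    return (100 * add_drop_losses / initial, 100 * completed / initial)

# Control: 40% (10 of 25) left during add/drop, ~8% (2) more during the term.
ctrl_drop, ctrl_complete = completion_stats(25, 10, 2)
# Experimental: ~19% (5 of 26) left during add/drop, ~15% (4) during the term.
exp_drop, exp_complete = completion_stats(26, 5, 4)

print(f"Control: {ctrl_drop:.0f}% add/drop, {ctrl_complete:.0f}% completed")
print(f"Experimental: {exp_drop:.0f}% add/drop, {exp_complete:.1f}% completed")
```

Under these assumed counts the sketch yields a 52% completion rate for the control group and a rate just over 65% for the experimental group, matching the figures reported above.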
Pre- and Post-assessment
In order to assess learning, Kirkpatrick recommends evaluating knowledge and skills “both before and after” (1998, p. 40). The pre- and post-knowledge assessment, consisting of 20 multiple-choice questions on grammar and MLA format, provided indices of learning and knowledge aligned with Kirkpatrick’s Level 2. The grammar questions on the pre-assessment were taken from the actual course midterm exam. During the first two weeks of the course, all students in both sections were asked to complete the pre-assessment. At the end of the course, only students who completed the course were asked to complete the post-assessment. The pre- and post-assessment measures were limited to 10 of 12 eligible students in the control group and 15 of 17 eligible students in the experimental group.
The control group mean on the pre-assessment was 38.0% correct, while the experimental group mean was 42.0%. The control group’s post-assessment mean of 55.0% was 17 percentage points higher, while the experimental group’s mean of 50.0% was only 8 points higher. By this measure, greater learning appears to have occurred in the control group.
However, an analysis of the distribution of scores on the post-assessment indicated that over 50% of the experimental group (8 students, 53.3%) had scores in the range between 40 and 59, while only 40% (4 students) of the control group had scores in this range. One notable occurrence is the absence of scores in both the lowest and highest categories for both groups; neither group scored at a level of 80% or greater. Because both groups had 20% of students scoring below 40, and the experimental group had approximately 27% (compared to the control group’s 40%) scoring 60 or higher, the experimental group’s distribution clearly differed, with more students in the mid-range and a smaller percentage above it (see Table 1).
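The gain-score and distribution comparisons above can be stated directly; as a small verification sketch (values are the group means and Table 1 counts as reported), the arithmetic is:

```python
# Reproduces the gain-score and distribution comparisons from the
# reported pre/post means and the Table 1 post-assessment counts.
pre = {"control": 38.0, "experimental": 42.0}
post = {"control": 55.0, "experimental": 50.0}
gain = {g: post[g] - pre[g] for g in pre}  # percentage points gained

# Post-assessment counts per score range (0-19, 20-39, 40-59, 60-79, 80+).
counts = {"control": [0, 2, 4, 4, 0], "experimental": [0, 3, 8, 4, 0]}
mid_range = {g: 100 * c[2] / sum(c) for g, c in counts.items()}

print(gain)       # mean gain per group
print(mid_range)  # percent of each group scoring 40-59
```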
The researcher reviewed the student comments for content and themes during the course. Throughout the course students were asked to interact with the online postings provided by the researcher: welcome, online learning tips, personality and learning styles, hands-on and visual assignments from the grammar text, and Internet search activities. In addition there were general discussion questions, questions about the peer review, and group essay activities. Student comments provided insights about the course enhancement activities and how students related to each other or to the instructor. The comments were also indicative of the extent to which students demonstrated use and application of any new knowledge learned through the course, as well as attitudes toward the online learning course. These actual observations of monitoring the actual content of student discussions are suitable for identifying changes in behavior and attitude (i.e., Kirkpatrick’s Level 3 measures).
Table 1. Frequency and Percentage Distribution of Post-assessment Scores

Score range (%)    Control f    Control %    Experimental f    Experimental %
0-19                       0          0.0                 0               0.0
20-39                      2         20.0                 3              20.0
40-59                      4         40.0                 8              53.3
60-79                      4         40.0                 4              26.7
80+                        0          0.0                 0               0.0

Note. There were 10 control group and 15 experimental group participants who completed the post-assessment.
A total of 106 student comments were categorized as very positive, somewhat positive, somewhat negative, or very negative. Less than 6.0% of the comments gathered from the online postings and discussion questions were negative in nature, indicating a strongly positive perception of the course enhancements overall. The negative comments concerned the Little Brown Brief Grammar text (LBB) hands-on assignments, the peer review, and the group essay. All other course enhancements received either very positive or somewhat positive feedback (see Table 2).
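As a tally sketch (counts taken verbatim from Table 2), the "less than 6% negative" claim can be checked directly:

```python
# Tallies the comment categories reported in Table 2 and checks the
# "less than 6% negative" claim. Counts are as printed in the table.
comments = {  # enhancement: (P+, P, N, N+)
    "Welcome": (5, 15, 0, 0),
    "Online tips": (1, 8, 0, 0),
    "Personality & learning style": (3, 11, 0, 0),
    "LBB hands-on": (0, 4, 3, 0),
    "LBB visual": (0, 0, 0, 0),
    "Internet searches": (0, 12, 0, 0),
    "Discussions": (13, 22, 0, 0),
    "Peer review": (2, 1, 1, 0),
    "Group essay": (2, 1, 0, 2),
}
totals = [sum(col) for col in zip(*comments.values())]  # [P+, P, N, N+]
n_total = sum(totals)
pct_negative = 100 * (totals[2] + totals[3]) / n_total

print(n_total, f"comments, {pct_negative:.1f}% negative")
```

The table's counts do sum to the 106 comments reported, with roughly 5.7% categorized as negative.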
Student performance was observed and analyzed by the instructor. These observations were designed to assess student performance as it relates to the course objectives provided by the English Department, and could also serve as indicators of Level 3. Skill development for this online composition course was noted in three areas: communication, general intellectual abilities, and social functioning. Two of the three areas, communication development and social functioning, could serve as indicators of Level 3; both are present in the individual and bulletin board postings.
The data on forms of communication in Table 3 support an increase in individual and group bulletin board posting (BBP) activity in the experimental group. This interpretation is based on the calculated per-student posting rates. The experimental group’s individual posting (IP) rate of 46.5 was 3.1 higher than the control group’s rate of 43.4. In addition, it appears that students in the experimental group were more likely to post questions relating to course assignments to the instructor (QI) or to other members in a public forum of the online course.
The additional indicators of behavior/transfer of learning (i.e., Level 3) utilized by the researcher were comparisons of course performance measures. The comparison data were generated from regular student assignments scheduled throughout the course. There were five essays in total; each essay was assigned one week prior to its due date, and essays were due every three weeks. Since the comparisons were of those who completed the course, the essay grades of students who withdrew are not included in the analysis (see Table 4).
Table 2. Overview of Positive and Negative Comments Identified Through Course Enhancements
Course enhancement P+ P N N+
Welcome 5 15 0 0
Online tips 1 8 0 0
Personality & learning style 3 11 0 0
LBB hands-on 0 4 3 0
LBB visual 0 0 0 0
Internet searches 0 12 0 0
Discussions 13 22 0 0
Peer review 2 1 1 0
Group essay 2 1 0 2
Note. P+ = very positive comments; P = somewhat positive comments;
N = somewhat negative comments; N+ = very negative comments.
Table 3. Volume of Individual and Bulletin Board Postings
Subject Control Experimental Difference
IP 521 790 269
M IP per student 43.4 46.5 3.1
BBP 39 273 234
M BBP per student 3.3 16.1 12.8
QI 20 46 26
M QI per student 1.7 2.7 1.0
Questions to group 1 7 6
Note. IP includes various initial instructor postings to the bulletin board such as the
discussion questions and prompts for additional research.
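The per-student rates in Table 3 follow from the posting volumes and the completer counts reported earlier (12 control, 17 experimental); a minimal sketch of that division:

```python
# Per-student posting rates implied by the Table 3 volumes and the
# completer counts reported earlier (12 control, 17 experimental).
control_n, experimental_n = 12, 17
volumes = {  # posting type: (control total, experimental total)
    "IP":  (521, 790),   # individual postings
    "BBP": (39, 273),    # bulletin board postings
    "QI":  (20, 46),     # questions to the instructor
}
rates = {k: (c / control_n, e / experimental_n) for k, (c, e) in volumes.items()}

for kind, (c_rate, e_rate) in rates.items():
    print(f"{kind}: {c_rate:.1f} vs {e_rate:.1f} per student")
```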
The means reported in Table 4 show that at the beginning of the term the control group had the higher essay scores, by 16.3 percentage points on Essay 1; however, the experimental group’s means steadily increased, and by Essay 5 its mean was 1.4 points higher than the control group’s.
Table 4. Mean Scores for Essay Assignments
Essay Control (%) Experimental (%) Difference (%)
1 89.9 73.6 -16.3
2 80.3 73.2 -7.1
3 87.3 79.5 -7.8
4 79.2 75.8 -3.4
5 77.8 79.2 +1.4
Note. Control group had 12 students and the experimental group had 17 students.
An additional indicator was the comparison of course exams. The midterm and final were each worth 50 points. The midterm exam was given following Essay 3 and included questions asked in the pre-assessment. The final exam was administered following Essay 5; students were asked to identify five predetermined fallacies in an everyday publication, such as a newspaper, periodical, or work of literature. Students were asked to discuss why each answer was an example of that particular fallacy and to cite the source of the information in MLA format.
The midterm grade distributions for the students in the two groups (12 in the control and 17 in the experimental) show that the means were 35.0 points for the control group and 36.6 points for the experimental group. The percentage of failing midterm grades (scores below 30) was approximately 17% for the control group, compared to approximately 12% for the experimental group. Approximately 67% of the control group and 59% of the experimental group had scores in the 30 to 39 point range. The experimental group had almost 30% of its scores in the 40 to 50 point range, compared to only approximately 17% of the control group. These scores suggest better midterm performance, especially in the B-or-better category, for the experimental group (see Table 5).
The distributions of the scores on the final exam show that the control group had a mean of 37.7 points while the experimental group had a mean of 45.6 points. The control group had over 16% of its students with failing grades (scores below 30), compared to less than 6% of the experimental group. Over 82% of the experimental group had scores of 40 or higher, compared to approximately 67% of the control group. Again, the experimental group had the higher average performance.
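The final exam percentages quoted above follow directly from the Table 5 frequency counts; as a verification sketch (counts as printed in the table):

```python
# Midterm (MT) and final (F) frequency counts from Table 5.
# Bins: 0-9, 10-19, 20-29, 30-39, 40-50 points (each exam worth 50).
control_final      = [1, 0, 1, 2, 8]    # n = 12
experimental_final = [0, 0, 1, 2, 14]   # n = 17

def share(counts, bins):
    """Percent of students whose scores fall in the given bin indices."""
    return 100 * sum(counts[i] for i in bins) / sum(counts)

# Failing (< 30 points) spans bins 0-2; the top range (40-50) is bin 4.
print(f"Final, failing: control {share(control_final, [0, 1, 2]):.1f}%, "
      f"experimental {share(experimental_final, [0, 1, 2]):.1f}%")
print(f"Final, 40+: control {share(control_final, [4]):.1f}%, "
      f"experimental {share(experimental_final, [4]):.1f}%")
```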
The expert reviewer was selected to provide input based on his affiliation as a member, contributor, and presenter with the Sloan Consortium. In addition, his background in English composition and creative writing as well as curriculum development was ideal for this evaluation study. The reviewer was afforded complete access to both the control and experimental course in order to provide feedback about the activities throughout the semester. The expert reviewer provided opinions, perceptions, and other feedback relating to the course content, delivery, and overall results of the course enhancements as they related to meeting the course objectives.
Table 5. Frequency and Percentage Distribution of Midterm and Final Exams

                 Control (n = 12)             Experimental (n = 17)
Points     MT f    MT %     F f     F %      MT f    MT %     F f     F %
0-9           0     0.0       1     8.3         0     0.0       0     0.0
10-19         0     0.0       0     0.0         0     0.0       0     0.0
20-29         2    16.7       1     8.3         2    11.8       1     5.9
30-39         8    66.7       2    16.7        10    58.8       2    11.8
40-50         2    16.7       8    66.7         5    29.4      14    82.4

Note. MT = midterm; F = final. The control group had 12 students, and the experimental group had 17 students.
The expert reviewer responded with a strongly agree rating for the questions relating to the following aspects of the course: (a) the course provided the learners with clear knowledge of the course objectives, (b) the instructional interactions were appropriate for the objectives, (c) the instructional course enhancements were built on sound learning theory and principles, (d) the overall pace was appropriate, (e) the level of difficulty of the course was appropriate, (f) the group essay followed sound instructional principles, and (g) the course discussion enhancements followed sound instructional principles. In addition, he provided the following comments,
There was a clear linkage from the discussion questions and supplemental material to the course objectives. The course enhancements worked well in actualizing the theoretical trajectories articulated in the literature review. Many of the students’ reactions (especially to the self-assessments) demonstrated the kind of interactivity described in the literature.
The group essay was well conceived and well placed within the curriculum. The interactivity coincided well with current educational theory and practice in the online modality.
The course discussion added an element of interactivity to the course that was sorely lacking in the control. This feature brought an affective element to a course that was previously fixed in the “correspondence” modality.
The control environment demonstrated little possibility for reflexivity and/or peer-based growth. This [experimental] classroom provided a guided atmosphere for the presence of students’ voices and dialogic interactivity. By allowing students a forum to enact metacognitive processes regarding their learning and offering tools to understand and evaluate these processes, the facilitator opens avenues for student-directed growth and more personally meaningful relationships with course content.
The expert reviewer agreed that overall the instructional enhancements provided an enriched online environment but was neutral about the timeliness of the course feedback. He disagreed that the peer review followed sound instructional principles and commented that the enhancement needs “more structural and procedural clarity, as evidenced in students’ comments. The process of allowing students to choose which peers to review could lead to increased confusion and a sense of exclusion. Additional procedures and facilitator roles will likely assist.”
In addition, there were several insights offered about the ability of the enhancements to foster the achievement of the course objectives. However, areas for improvement were identified as the assignment feedback and the peer review aspects of the course. Additionally, it was stated that more facilitator interaction within the discussion question threads (specifically, asking clarifying questions, calling for examples, etc.) could have improved student interactivity and critical thinking competence.
Student Reaction Survey
The questionnaire designed to seek reactions and satisfaction perceptions from students in the experimental section was administered using Web Surveyor. This strategy did not produce useful information, since only 2 of the 17 students responded. Therefore, the standard course evaluations were examined.
Standard Course Evaluations
The researcher reviewed the standard course evaluations provided to all students in both groups before the end of the semester to gauge student attitudes and overall student satisfaction. The standard course evaluation uses a Likert rating scale of 1 through 5 (1 = strongly disagree to 5 = strongly agree). The evaluation was available via a link on the student homepage, and all students with course access could participate. There are three sections to rate: course instructor, course interaction, and course structure and content. There were 7 of 12 (58%) respondents from the control group and 4 of 17 (23%) respondents from the experimental group.
The noteworthy differences are that the control group rated the course instructor and the course structure and content higher than the experimental group did, while the experimental group rated the course interaction higher overall (see Table 6). This could be due to the fact that some instructions in the experimental course created confusion among students, as noted by the expert reviewer. It also appears that the experimental group, which was able to provide feedback and interact with the instructor throughout the course, was less likely to submit a final course evaluation: the control group’s response rate on the standard course evaluation was more than double that of the experimental group.
Table 6. Section Mean Comparison Between Control Group and Experimental Group
Section Control M Experimental M
Course instructor 3.8 3.5
Course interaction 3.9 4.3
Course structure & content 4.1 3.5
Discussion and Conclusion
Ten recommendations emerged from this evaluation study based on the findings of the gap analysis. They include modifications relating to student-centered teaching, student-centered learning, and Kirkpatrick Levels 1 through 4, as well as four recommendations for further study, all intended to improve the effectiveness of the online composition course in overall satisfaction, learning, and course completion rates.
Student-centered teaching approaches
The course enhancements included several teaching approaches that are viewed as student-centered and are considered part of the emerging electronic pedagogy. Sound electronic pedagogical principles include the instructor as facilitator, active learner participation, a warm and inviting learning community, and multiple instructional approaches. The enhancements included instructional methods such as peer interaction, group work, discussion elements, multi-media opportunities, and individual and group reflection. It is therefore recommended that the instructor continue to foster a strong learning community by serving as the learning facilitator. As such, the instructor should continue to require learners to be active participants in their own learning by encouraging peer interaction through discussion board postings and individual reflection. It is recommended that the instructor increase participation in the discussion board postings by prompting clarification questions. It is also recommended that students complete a self-evaluation for Essay 2 to prompt individual reflection. This self-reflection will ask students (a) to evaluate how well they thought they did overall, (b) whether they felt adequately prepared for the assignment, (c) what problems they encountered working on the assignment, (d) how well they referenced their work, (e) what learning occurred that will aid future assignments, (f) what additional information would have aided the assignment, (g) what would have helped before the start of the assignment, and (h) whether they are more aware of MLA format as a result of the assignment. It is recommended that the group essay assignment remain unchanged.
Student-centered learning approaches
Although the learning style and personality style assessments available through various websites are not used consistently across student-centered teaching models, it is recommended that these activities be kept as part of the course enhancements. Both enhancements generated substantial discussion on the bulletin board, although some discussions were more meaningful than others. As mentioned by the expert peer reviewer, these enhancements are tools that allow students to understand and evaluate their own learning processes as they relate to the online environment. However, it is recommended that the learning and personality style assignment be modified to promote greater individual interaction and connection between students, thus building the foundation for a warm and inviting online environment. Therefore, it is recommended that this activity now require students to find other students with similar personality and learning styles and to follow up through private contact with each other. The goal of this peer-to-peer e-mail contact is for students of a particular learning style to share online learning tips and other suggestions for success that they have found helpful with students of the same learning style. Students will be asked to e-mail a copy of their suggestions to the instructor in order to earn credit for participation. Overall, since the information comes from peers and directly relates to a shared identity, the online tips may be received as important and may possibly be more effective (Durrington & Yu, 2004).
Kirkpatrick Level 1
The standard course evaluation provided three categories for student satisfaction: course instructor, course interaction, and course structure and content. The evaluation identified two gaps: course instructor and course structure and content. As it relates to the course instructor, the recommendation is that the instructor facilitate more interaction within the discussion threads, as defined earlier in the student-centered teaching section. That is, the instructor should prompt reflection and follow-up questions as well as promote discussion among the groups. The expert reviewer suggested that the instructor emphasize asking clarifying questions and calling for examples. This additional interaction may also be seen as a sign of instructor enthusiasm. The second gap was identified in course structure and content. The expert reviewer noted that improvements to the instructions and descriptions of assignments may be needed. The first recommendation here is that the instructor provide additional lecture postings for each essay assignment as they relate to that assignment’s objectives. The second recommendation is that the instructor improve the peer review. First, the peer review should be completed with Essay 4 instead of Essay 2; moving the peer review later in the course allows the instructor additional time to build a trusting learning community. Second, the instructions for the peer review will include new procedures. Instead of students submitting the assignment early so that peers can provide input before the due date, it is recommended that the due date remain unchanged and that the peer review occur once the assignment has been submitted to the appropriate assignment drop box. Each student will then receive another student’s essay for review.
Students will be asked to comment on certain prompts: (a) is it clear how the essay approached the assignment, (b) did the meaning of each idea come across smoothly, (c) did the essay demonstrate application of research or knowledge, (d) were sources cited effectively, (e) are the arguments convincing, (f) does the essay provide a developed conclusion, and (g) what comments apply to grammar and spelling.
The instructor will allow students one week to complete the peer review and will ask students to submit the feedback directly to the appropriate student as well as e-mail a copy to the instructor. These recommendations will support the active student learning model.
Other added course content that could have contributed to the lower rating by the experimental group is the set of LBB hands-on assignments. The assignments were given, but because the assignments had no built-in verification function, it was not possible to know whether students actually completed them. Based on the results of the midterm, it may be assumed that, without a verification function, many students did not participate in this activity. Therefore, it is recommended that the hands-on assignments be eliminated until it is possible for students to submit verification of the work. Many online grammar support books already have this function, and it can be assumed either that LBB will follow suit with this enhancement or that the community college may select another text. The instructor has used three grammar support textbooks within five years, and it is very possible that the next text will have this online function.
Kirkpatrick Level 2
The Kirkpatrick Level 2 evaluation provided information about how well students were learning. The data were generated from a non-graded pre- and post-course assessment and gave the instructor valuable information, including areas of students’ strengths and weaknesses as they related to grammar and MLA format. Having this pre-assessment information at the beginning of the course allows the instructor to make adjustments throughout the course to meet the needs of students (Angelo & Cross, 1993). Therefore, it is recommended that the pre- and post-assessment remain a permanent element of this course.
Kirkpatrick Level 3
The Kirkpatrick Level 3 evaluation provided information about changes in attitude as they relate to meeting the community college’s goals of communication skill development. The community college sought not only to foster increased communication through writing assignments and instructor-to-student e-mails, but also to foster social functioning through e-mail contact. Based on the comparison of the two groups, the evidence suggests that per-student postings and e-mails were greater in overall volume in the experimental group. It appears that many of the instructor postings generated additional responses from students. It is recommended that the additional postings remain a permanent element of this course. However, it is recommended that the three discussion questions be changed to place greater emphasis on asking clarifying questions and calling for examples, in order to improve student interactivity and critical thinking competence.
Kirkpatrick Level 4
The Kirkpatrick Level 4 evaluation suggested a possible connection between the course enhancements and the increased completion rates. The overall effect of the enhancements to the experimental course could be said to contribute to a warmer, more inviting online learning experience. That is to say, students who logged on during the add/drop week may have seen the course enhancements as indicators of a supportive online learning environment. This in turn may have inclined more students to take a chance on the course, regardless of their individual skill level, and possibly complete it with a satisfactory passing grade. This may have been evident in the add/drop pattern discussed earlier.
As suggested by Isaac and Michael (1995) and Kirkpatrick (1998), it is not possible to make a direct empirical connection between the assessment information pertaining to the course enhancements and the changes in student learning or behavior, due to many intervening variables. Students who improved learning or demonstrated new behaviors could have been influenced by several factors such as personal commitment or performance goals (Chyung, 2001; Kirkpatrick, 1998). For example, many online students are inherently more self-motivated and driven (Boaz et al., 1999). Lastly, limitations including design flaws, errors, and data analysis must be considered, as well as the possibility of varying interpretations of the results.
The positive results can be considered one set of measures in an effort to fully assess the value of these enhancements and approaches. To summarize, this study has limitations that restrict the generalizability of the results and conclusions. Angelo and Cross (as quoted in Honolulu Community College-University of Hawaii, 2000) stated that “classroom assessments have to respond to the particular needs and characteristics of the teachers, students, and disciplines to which they are applied. What works well in one class will not necessarily work well in another” (October Faculty Development Newsletter section, para. 13).
So, what does this mean for faculty who are planning to move part or all of a course into an online modality? The primary concept to take away from this evaluation study is that faculty preparation for the online environment is of the utmost importance. Taking on the position of facilitator comes with new functions and responsibilities. Although at first these functions and responsibilities may appear more time consuming, this change in focus toward the student may improve student satisfaction, learning, and course completion, ultimately resulting in a more satisfying and effective online experience for both faculty and students.
Aaron, J. (2005). LBB brief. Boston: Pearson Longman.
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.
Barbe, W. B., & Swassing, R. H. (1988). Teaching through modality strength: Concepts and practices. Columbus, OH: Zaner-Bloser.
Boaz, M., Elliott, B., Foshee, D., Hardy, D., Jarmon, C., & Olcott, D. (1999). Teaching at a distance: A handbook for instructors. Mission Viejo, CA: League for Innovation in the Community College. (ERIC Document Reproduction Service No. ED432316)
California Virtual Campus. (2005). Professional development center. Retrieved September 26, 2005, from http://www.cvc.edu/catalog/content.asp?page=250
Chyung, S. Y. (2001). Systematic and systemic approaches to reducing attrition rates in online higher education. American Journal of Distance Education, 15(3), 36‑49.
Diaz, D. P. (2002). Online drop rates revisited. Retrieved September 8, 2005, from http://www.technologysource.org/article/online_drop_rates_revisited
Dunn, R., & Dunn, K. (1978). Teaching students through their individual learning styles: A practical approach. Reston, VA: Reston.
Durrington, V., & Yu, C. (2004). It’s the same only different: The effect the discussion moderator has on student participation in online class discussions. Retrieved April 29, 2006, from ProQuest database.
Eison, J. (2002). Teaching strategies for the twenty-first century. In R. Diamond (Ed.), Field guide to academic leadership (pp. 157-174). San Francisco: Jossey-Bass.
Entwistle, N. (2001). Learning styles and cognitive processes in constructing understanding at the university. In J. M. Collis & S. Messick (Eds.), Intelligence and personality (pp. 217-232). Mahwah, NJ: Lawrence Erlbaum Associates.
Gardiner, L. (2002). Research on learning and student development and its implications. In R. Diamond (Ed.), Field guide to academic leadership (pp. 89-110). San Francisco: Jossey-Bass.
Guskin, A., & Marcy, B. (2002). Pressures for fundamental reform: Creating a viable academic future. In R. Diamond (Ed.), Field guide to academic leadership (pp. 3‑14). San Francisco: Jossey-Bass.
Hiemstra, R., & Sisco, B. (1990). Individualizing instruction: Making learning personal, empowering, and successful. San Francisco: Jossey-Bass.
Honolulu Community College-University of Hawaii. (2000). October faculty development newsletter. Retrieved August 10, 2004, from http://www.honolulu.hawaii.edu/intranet/committees/FacDevCom/activity/news1000.htm
Isaac, S., & Michael, W. B. (1995). Handbook in research and evaluation (3rd ed.). San Diego, CA: EDITS.
Jamieson, P. (2004). The university as workplace: Preparing lecturers to teach in online environments. The Quarterly Review of Distance Education, 5(1), 21-27. Retrieved August 5, 2004, from ProQuest database.
Jensen, G. H. (1987). Learning styles. In J. A. Provost & S. Anchors (Eds.), Application of the Myers-Briggs type indicator in higher education (pp. 181-206). Palo Alto, CA: Consulting Psychologists Press.
Johnson, E. (2007). Promoting learner-learner interaction through ecological assessments of the online environment. Retrieved August 24, 2007, from http://jolt.merlot.org/currentissue.html
Keefe, J. W. (1979). Learning styles: An overview. In J. W. Keefe (Ed.), Student learning styles: Diagnosing and prescribing programs (pp. 1-17). Reston, VA: National Association of Secondary School Principals.
Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco: Berrett-Koehler.
Lawrence, G. (1984). A synthesis of learning styles research involving the MBTI. Journal of Psychological Type, 8, 2-15.
Lick, D. W. (2002). Leadership and change. In R. Diamond (Ed.), A field guide to academic leadership (pp. 27-48). San Francisco: Jossey-Bass.
Myers, I. B. (1974). Type and teamwork. Gainesville, FL: Center for Application of Psychological Types.
Palloff, R., & Pratt, K. (1999). Building learning communities in cyberspace: Effective strategies for the online classroom. San Francisco: Jossey-Bass.
Palloff, R., & Pratt, K. (2000). Making the transition: Helping teachers to teach online (Report No. IR-020-614). Nashville, TN: EDUCAUSE 2000. (ERIC Document Reproduction Service No. ED452806)
Shank, P. (2004). Competencies for online instructors. Retrieved December 28, 2005, from http://www.learningpeaks.com/instrcomp.pdf