International Review of Research in Open and Distributed Learning

Volume 23, Number 2

May - 2022

 

Using the Critical Incident Questionnaire as a Formative Evaluation Tool to Inform Online Course Design: A Qualitative Study

 

Anita Samuel1 and Simone C. O. Conceição2
1Uniformed Services University of Health Sciences, Bethesda, Maryland, USA; 2University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA

 

Abstract

The online instructor plays a prominent role in influencing how students respond to an online course, from designing the course structure, course activities, and assignments to encouraging interaction. Therefore, to develop effective online courses, instructors need robust feedback on their design strategies. Student evaluation of teaching (SET) functions as a summative evaluation of the course design and delivery. Yet, the feedback from SETs can only be integrated into the next iteration of the course, thereby failing to benefit the students who provide the feedback. One suggestion is to use midsemester formative evaluation to inform course design in real time. A qualitative research study was conducted to explore whether the Critical Incident Questionnaire (CIQ) could be an effective formative evaluative tool to inform real-time online course design and delivery. Thematic analysis was conducted on the midcourse evaluations obtained from 70 students in six fully online master’s level courses. There are three key findings from this study. First, CIQ use can provide opportunities for real-time adjustments to online course design and inform future redesign of online courses. Second, responses received via the CIQ prioritize the student voice and experience by focusing on factors that are critical to them. Finally, this deep-dive analysis reinforces the enduring factors that contribute to effective online course design and delivery. A recommendation for practice is to use the CIQ as an effective tool to gather formative feedback from students. This feedback can then be used to adjust course design as needed.

Keywords: student evaluation of teaching, Critical Incident Questionnaire, online course design, formative assessment

Using the Critical Incident Questionnaire as a Formative Evaluation Tool to Inform Online Course Design: A Qualitative Study

Student evaluation of teaching (SET) is standard practice in higher education. Evaluations are administered, usually at the end of an academic semester, in an attempt to measure teaching effectiveness; they function as a summative evaluation of the course design and delivery. SETs have gained importance as they inform tenure, reappointment, and promotion decisions (Uttl et al., 2017). Unfortunately, the SET is a flawed tool, and issues of bias associated with SETs are well documented (Boring et al., 2016; Mitchell & Martin, 2018; Reid, 2010). In the context of course design, there are two flaws: (a) the feedback from SETs cannot be used to make changes to course design in real time, and (b) SETs predominantly use surveys to gather quantitative data (Uttl et al., 2017).

SETs provide feedback that is intended to inform and enhance the design and delivery of a course. Yet, as a summative evaluation tool, the feedback from SETs can only be integrated into the next iteration of a course. The experiences of the students who provide the feedback are used to inform the design of the course for another group of learners, who might have very different responses to the course design (Gehringer, 2010). Moreover, the students who complete the SETs do not benefit from this course redesign (Gehringer, 2010). One way to address this is to use midsemester formative evaluation to inform course design in real time.

SETs are predominantly conducted via surveys that usually provide quantitative data (Erikson et al., 2016). Surveys limit the responses that students can provide. However, qualitative feedback tools allow students to go beyond predefined responses, encouraging them to delve deeper into their ideas about teaching and learning (Steyn et al., 2019). The Critical Incident Questionnaire (CIQ), a five-question open-ended questionnaire, has been used extensively in education and organizations as a tool of critical reflection and evaluation. Nevertheless, more research into how the CIQ can enhance online course design is needed (Keefer, 2009).

A qualitative research study using thematic analysis of the CIQ responses was conducted to explore whether the CIQ could be an effective formative evaluative tool to inform online course design and delivery in real time. This study answers the following research question: In what ways does the use of the CIQ as a formative evaluation tool contribute to online course design and delivery?

Literature Review

This study lies at the intersection of three concepts: SETs, the CIQ, and online course design. In this section, we explore literature related to these three concepts.

Student Evaluation of Teaching

SETs were originally intended to serve as a measure of teaching effectiveness (McKeachie et al., 1971; Rodin & Rodin, 1973) and have gained popularity since the 1960s (Rodin & Rodin, 1973). However, concerns about SETs include the quality of the tool, its lack of standardization, and the cost implications of conducting these evaluations (Fisher & Miller, 2008). In addition, there are persistent issues with race and gender biases (Boring et al., 2016; Mitchell & Martin, 2018; Reid, 2010), and SETs primarily measure student perceptions of teaching rather than teaching effectiveness (Stark & Freishtat, 2014). Furthermore, SETs are summative evaluations provided at the end of a course. Any course design changes can only be made in the next iteration of the course. Gehringer (2010) notes that the feedback provided by summative SETs is too infrequent and broad in scope to effectively inform course design.

Formative midsemester student evaluations are more effective in providing actionable, real-time feedback for faculty during a course. There is an immediacy to the feedback, since it is provided while students are experiencing the course, giving it authenticity. Based on midsemester feedback, faculty can make changes to course design where possible and manage student expectations during the course (Veeck et al., 2016). These actions also pay off at the end of the semester: faculty who receive midsemester feedback tend to receive higher ratings on end-of-semester SETs (Cohen, 1980).

Formative midsemester evaluations have been conducted in various ways. Gehringer (2010) introduced a Google Forms evaluation tool that he used to gather student feedback daily after his face-to-face lectures. Fisher and Miller (2008) used a combined quantitative and qualitative midsemester assessment tool; their qualitative questions focused on student expectations and elicited responses that course instructors had not anticipated. Finelli et al. (2008) and Hurney et al. (2014) described the use of instructional consultants to gather midsemester feedback: the consultants conducted focus groups with students and summarized their findings for the instructor.

The choice of evaluation tool affects a midsemester evaluation's effectiveness. Quantitative surveys tend to reflect the assumptions and biases of the designer, limit student responses, and fail to provide space for unique student experiences and contexts to emerge (Rao & Woolcock, 2003). Fisher and Miller's (2008) qualitative evaluation tool was designed for their specific context and focused primarily on student expectations, though they also received feedback about the design of the course. Focus groups with instructional consultants (Finelli et al., 2008; Hurney et al., 2014) are not anonymous and can be susceptible to groupthink. Furthermore, instructors did not receive information directly from their students; rather, they received a summarized version filtered through the instructional consultant's personal lens. An effective, robust, easy-to-use tool is needed to inform instructors about how their students are experiencing an online course.

Critical Incident Questionnaire

The CIQ, a five-question open-ended questionnaire, was proposed by Stephen Brookfield (1998) as a qualitative tool to elicit student feedback and to develop critically reflective practice in educators. Brookfield (1998) contends that educators’ course design and pedagogical choices will be incomplete and ill-informed if they do not account for the student’s voice. He states that the quality of teaching can only improve when instructors understand how their students are experiencing the course and the difficulties they are struggling with. The CIQ tool, as used initially by Brookfield, consists of five questions (see Figure 1) that are administered at the end of every class. The questions are general and focus on students’ perceptions; students are not asked to identify what they liked or disliked in the class. Instead, the CIQ asks them to reflect on the class critically, and the responses students provide are guided by incidents that were significant for them in the class.

Figure 1

Critical Incident Questionnaire

  1. At what moment in the class this week were you most engaged as a learner?
  2. At what moment in the class this week were you most distanced as a learner?
  3. What action that anyone in the room took this week did you find most affirming or helpful?
  4. What action that anyone in the room took this week did you find most puzzling or confusing?
  5. What surprised you most about this class?

Note. From S. Brookfield, “Critically reflective practice,” 1998, Journal of Continuing Education in the Health Professions, 18(4), p. 201 (https://doi.org/10.1002/chp.1340180402)

The CIQ has been used both as a formative evaluation tool and as a reflective tool. Linstrum et al. (2012) used the CIQ in a graduate-level course to obtain formative assessment data over two years. Their study identified four themes of course design: impact of the instructor, student personal awareness, discussion-mode teaching, and practical and applicable activities. Jacobs (2015) used the CIQ to evaluate course design and make changes to the course. Hessler and Taggart (2011) adopted the CIQ as a formative assessment and reflective tool for their writing courses; finding it insufficient for their needs, they adapted it by adding two questions relevant to their context. Rather than focusing only on student feedback about the course, they used the CIQ to encourage students to reflect on their learning practices as well. In all of these instances, the CIQ was used in traditional face-to-face environments.

As college courses started moving online, Brookfield adapted the CIQ in 2006 for critical reflection in the online environment. Yet minimal research has been published on the use of the CIQ as an evaluative tool in online courses. Keefer (2009) conducted a literature review on the use of the CIQ and identified only two studies that used it in the online environment: Glowacki-Dudka and Barnett (2007) used the CIQ to study group development in online asynchronous graduate courses, and Phelan (2012) used the CIQ to explore students' perceptions of their online learning experiences. However, Glowacki-Dudka and Barnett (2007) and Phelan (2012) did not use the CIQ to inform course design, focusing instead on group and community development among the students. While anecdotal evidence exists that the CIQ is used as a formative evaluation tool in online courses, research studies in this area are lacking.

Online Course Design

Best practices in online course design have been informed by various theories and models. Transactional distance, “a psychological and communications space” rather than a physical or temporal space (Moore, 1997, p. 22), was the defining theory of distance education. Moore (2013, p. 88) identifies three dimensions of distance education: (1) program or course “structure,” (2) “dialogue” (interaction between instructor and student), and (3) “autonomy” of the learner. These three dimensions have been foundational to the various elements of online course design identified in the ensuing years.

Garrison et al. (1999) developed the community of inquiry (COI) framework to define a “worthwhile educational experience” (p. 88) in online education. The COI framework integrates social, cognitive, and teaching presence. Social presence encompasses the dialogue that Moore referred to, and teaching presence includes structure. Through cognitive presence, Garrison et al. (1999) address issues of student agency and motivation. Several other models and theories of online learning have been proposed, including Anderson’s (2011) online learning model, Harasim’s (2017) online collaborative learning theory, and Picciano’s (2017) multimodal model for online education. These models and theories now include evaluation, reflection, learning resources, and learning modality.

Based on these models, several course evaluation instruments have been developed, such as the following:

  • Blackboard Exemplary Course Program Rubric (Blackboard, 2017)
  • CSU QLT Course Review Instrument (California State University, 2019)
  • Quality Matters Higher Education Rubric (Quality Matters, 2018)
  • SUNY Online Course Quality Review Rubric, OSCQR (State University of New York, 2018)

These instruments vary in some ways, but they also share many course design elements that have been identified as best practices in online course design, including course structure and design, interaction, student activities, content or resources, course technology(ies), and assessment.

In addition to these evaluation tools, universities adopt their own evaluations of course design. The common feature of all these tools is that they are administered as summative evaluations or checklists prior to starting a course. These tools are not used for formative assessment of course design. Moreover, these tools do not prioritize student feedback.

The three intersecting bodies of literature—on student evaluations of teaching, the CIQ, and online course design—indicate that formative feedback, conducted via an appropriate tool, has the potential to provide instructors with real-time feedback on online course design.

Methodology

Study Context

This study was conducted at a public four-year university in the Midwestern United States that offers fully online courses. To study the effectiveness of the CIQ as an evaluation tool in online courses, the CIQ was incorporated as a midsemester evaluation tool in six fully online graduate-level courses in adult education and technology.

Different instructors taught the courses; however, the overall course design of all six courses was similar. All the courses were fully asynchronous, with optional synchronous sessions with the instructor or other students. In addition, the courses included collaborative learning in the form of team projects and asynchronous group discussions. The asynchronous discussions were directed by student-generated discussion prompts that could generate meaningful dialogue. The students in these courses were practicing professionals and identified as adult students. Their experience with online learning ranged from no experience to participation in multiple online courses.

The CIQ was distributed as a midsemester evaluation. The questions' phrasing was slightly adapted to account for the online environment and for the tool's single deployment during the semester:

  1. At what moment in the semester did you feel most engaged with what was happening? Why?
  2. At what moment in the semester did you feel most distanced from what was happening? Why?
  3. What action that anyone (you or anyone in your group or class) took in the online environment did you find most affirming and helpful? Why?
  4. What action that anyone (you or anyone in your group or class) took in the online environment did you find most puzzling or confusing? Why?
  5. What about the online environment during the semester surprised you the most? Why?

These evaluation questions were distributed via an anonymous online survey during week 7 of a 15-week academic semester. Institutional review board clearance was obtained, and participants were informed about the study at the beginning of the midsemester evaluation survey. In total, 70 responses were received. The data from the surveys were entered into Microsoft Excel 365 and organized under the five CIQ questions.
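Purely as an illustration of this organizing step, and not the procedure used in the study (the authors worked directly in Microsoft Excel 365), the sketch below groups anonymized responses under the five CIQ questions; the file name and column labels (course, respondent_id, Q1-Q5) are hypothetical.

```python
# Illustrative sketch only (not the authors' workflow): grouping anonymized CIQ
# responses under the five questions before thematic analysis.
# The file name and column labels below are hypothetical.
import pandas as pd

# Assume one row per response, with one column per CIQ question (Q1-Q5).
responses = pd.read_excel("ciq_responses.xlsx")

# Reshape to long format: one row per (course, response row, question, answer),
# so that all answers to a single CIQ question can be read together.
long_form = responses.melt(
    id_vars=["course", "respondent_id"],          # hypothetical identifiers
    value_vars=["Q1", "Q2", "Q3", "Q4", "Q5"],    # the five CIQ questions
    var_name="ciq_question",
    value_name="response",
).dropna(subset=["response"])

# Print the responses grouped under each CIQ question.
for question, group in long_form.groupby("ciq_question"):
    print(f"\n=== {question} ({len(group)} responses) ===")
    for text in group["response"]:
        print("-", text)
```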

While the CIQ was originally intended to be used after every teaching session, in this study it was administered only once during the academic semester, to avoid students experiencing feedback fatigue by the end of the course (Brookfield, 1998).

Data Analysis

A semantic thematic analysis from a constructivist epistemology was conducted. The data were inductively analyzed following the six phases of thematic analysis outlined by Braun and Clarke (2006).

Phase one was familiarization with the data. The data from the six courses were compiled and organized under the individual CIQ questions. The authors familiarized themselves with the data by reading and rereading the responses, looking for patterns. Phase two was generating initial codes. As patterns were identified, the authors independently generated initial codes across the complete data set. Table 1 shows sample data extracts and the initial codes each author applied independently.

Table 1

Initial Coding

Data excerpt | Author 1 code | Author 2 code
The module discussions in which we were sharing experiences. | Sharing experiences | Sharing experiences
I met with my group on Skype[.] I felt most engaged because it felt more real. | Synchronous meeting | Synchronous face-to-face interaction
My group members are proving to be responsible for tasks, and willing to help with clarifications. | Surprised by responsive group members’ accountability | Interaction with students, affirmative/supportive tone

Phase three was searching for themes. The authors met to review their individual codes. They organized the codes into potential themes and collated the data relevant to these themes. Phase four was reviewing themes. They checked to see if the themes worked at both the discrete extracted data and entire data set levels. Once the themes were confirmed, the authors developed a thematic map (see Table 2).

Table 2

Sample of Thematic Map

Themes | Sub-themes
Student-student communication | Support; Sharing experiences; Collaboration; Feedback
Experience with online learning | Experienced; Inexperienced
Type of communication | Asynchronous; Synchronous
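As a further illustration only, and not part of the authors' analysis: a thematic map such as the sample in Table 2 can be represented as a simple mapping from themes to subthemes, under which coded extracts can then be collated. In the sketch below, the theme and subtheme names follow Table 2, and the two extracts are reused from Table 1 purely for demonstration.

```python
# Illustrative sketch only: representing the Table 2 thematic map as a nested
# structure and collating coded extracts under each theme. The extract-to-code
# pairings below reuse examples from Table 1 purely for demonstration.
from collections import defaultdict

thematic_map = {
    "Student-student communication": ["Support", "Sharing experiences", "Collaboration", "Feedback"],
    "Experience with online learning": ["Experienced", "Inexperienced"],
    "Type of communication": ["Asynchronous", "Synchronous"],
}

# (subtheme, data extract) pairs produced during coding (phases two and three).
coded_extracts = [
    ("Sharing experiences", "The module discussions in which we were sharing experiences."),
    ("Synchronous", "I met with my group on Skype. I felt most engaged because it felt more real."),
]

# Map each subtheme back to its parent theme, then collate extracts per theme.
subtheme_to_theme = {sub: theme for theme, subs in thematic_map.items() for sub in subs}
collated = defaultdict(list)
for subtheme, extract in coded_extracts:
    collated[subtheme_to_theme[subtheme]].append((subtheme, extract))

for theme, items in collated.items():
    print(theme)
    for subtheme, extract in items:
        print(f"  [{subtheme}] {extract}")
```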

Phase five was defining and naming themes. The authors continued to refine the themes through categorization and renaming. Phase six was producing the report. Finally, the authors selected compelling extracts and rechecked them against the themes and the research question. See Table 3 for a sample.

Table 3

Sample Extracts of Student Comments

Subtheme | Code | Extracts from student feedback
Student-Instructor | Feedback | The professor’s feedback is concise and [comes] in a timely manner. When the group or thread needs constructive criticism or more clarification, the professor jumps in to emphasize the need for more or better information.
Student-student | Collaboration | I felt most engaged when I was put into a group and start[ed] gaining different task[s] to do within my group.

Findings

The data analysis of student feedback received through the CIQ revealed five broad themes: interactions, expectations, course design, experience with online learning, and learners’ sense of agency. These five factors affected students in different ways. The findings are presented using the CIQ questions as a framework. See Table 4 for a summary of findings.

Table 4

Summary of Findings

Factors | Findings
Engaging factors | Student-student interaction; robust communication; relevant content; learner sense of agency
Distancing factors | Course design; unclear expectations; lack of peer interaction
Affirming factors | Student-student interaction; student-instructor interaction; group dynamics (supportive, helpful); learner sense of agency
Puzzling factors | Peer interaction; course design (too many moving parts)
Surprising factors | Unexpected elements; course design; interaction with technology

Engaging Factors

The first question posed in the CIQ was “At what moment in the semester did you feel most engaged with what was happening? Why?” Interactions with peers and course content, quality of interactions, and learner sense of agency emerged as the key factors for engagement.

The participants in this study were overwhelmingly engaged by interactions with their peers. They identified both synchronous and asynchronous peer interactions as being engaging. Asynchronous group discussions were repeatedly mentioned, for example, “I feel engaged when I am responding to posts within my small group.” These peer interactions were related to discussions regarding course content and, as mentioned by a student, “working on my group project with my group members.” However, the peer interactions were effective within small groups rather than in the large class setting because “the general course discussion was overwhelming once more and more posts were added.”

The quality of these peer interactions was also an influencing factor; one student mentioned “in-depth discussions that have been meaningful and thorough.” Participants appreciated thoughtful and timely posts from their group members; one participant noted that only when “conversations took a turn into deeper analytical discussions, did I really feel engaged in the class and learning.” Participants also appreciated “module discussions in which we were sharing experiences,” which created “group discussions [that] were more conversational.”

Synchronous activities were optional in the courses surveyed and were identified as engaging in the courses where students had participated in them. Participants mentioned they felt engaged “when I am Skyping with my group.” The physicality of interactions in these synchronous meetings was specifically identified: “Now I know how they look, the way they talk etc. It is easier for me to relate to these people now”; “I felt most engaged because it felt more real. I think that having a real discussion and being able to hear someone talk are really important.” The real-time immediacy of feedback in these synchronous sessions was also noted.

Participants also felt engaged when they found the course content relevant to their practice since “this brought the information full circle and to life, rather than just a theory” and “I was able to apply [what] I was learning first-hand.” As one participant put it: “I think I was most engaged because I find these topics to be very interesting and where I would like to focus my research.”

When participants took control of the learning environment and guided the direction of tasks and interactions, they found the experience engaging. One participant “felt engaged early on, as I took the responsibility for leading the first module and discussion on the readings.”

Distancing Factors

Three factors caused students to feel distanced in the course: the course design, unclear expectations, and lack of peer interaction. Course design emerged as an oft-mentioned factor that created a feeling of distance within the course. Specifically, course design related to workload issues was a major contributor to participants’ experience of distance. One participant said, “Reading from different texts, doing the book review, trying to get the tech plan. It seems like a little of everything all at once.”

The distancing aspects of course design were exacerbated by unclear course instructions. When participants were unsure of what was expected of them, they expressed feeling distanced in the course. As one participant succinctly put it, “When I am confused about what I need to do or what is expected of me. I feel like just turning off.” Another participant clarified: “I prefer to have very specific instructions, and at times I felt I needed more direction and felt distanced.”

While peer interaction enabled participants to feel engaged with the course, lack of peer interaction and lack of a cohesive group dynamic distanced them from the course. But there was also an element of too much of a good thing, as some participants felt distanced when there was too much interaction: “I found myself feeling overwhelmed with the number of comments in the first module’s discussion threads.”

Affirming Factors

Interactions with peers, group dynamics, interactions with instructors, and learner sense of agency were identified as affirming actions. Group dynamics were repeatedly mentioned: “In general, the conversations my group has is affirming and helpful because everyone is very open, honest and complimentary.” Candid conversations were appreciated and explicitly noted, as this participant commented:

One member of my group started out all the discussions with how he likes the DQs [discussion questions] to feel conversational. It has encouraged many in our group to follow suit. It has made the discussions much more lively and personable. Because of this, there are many times which we are supporting each other through sharing experiences and relating it to not only the text, but each other.

Meaningful communication was highly valued and noted by participants: “There were some very thoughtful and helpful comments and that was most helpful.” The supportive nature of group dynamics was also identified as an affirming factor in the courses: “When the modules first began, I appreciated the fact that [the peer group] facilitator reached out to me to help me remember when postings were and requirements were to keep me a part of the group.” One participant explained that “one of my group members was so helpful. ... They encouraged me when I was getting anxious about our poor group participation. They also took on more responsibility within the group which made a positive impact on me.”

Interactions with the course instructor were also noted as being affirming and helpful: “I like that my professor is involved and responds so quickly and very often.” The timely nature of instructor responses and feedback was highlighted. This instructor presence “made me feel that the teacher actually is interested in what I had to say. It was nice to know she was ‘there.’”

When learners exercised agency, they felt affirmed. “Stepping up to be the leader” was noted by participants as an affirming action, and “taking action and making a plan and schedule was something that really helped me.”

Puzzling Factors

Just as lack of peer interaction was experienced as a distancing factor, it was also identified as the most puzzling part of the course: “Having group members not participate on a consistent basis” or “[w]hen members did not respond” confused some participants. Specific group members’ actions were remarked upon as confusing and puzzling: for example, “The group that I was in for the book review seemed disinterested in the topic we had.”

Course design was also identified as confusing or puzzling. When there were too many moving pieces in the course, participants spoke about it as a confusing environment: “I am super confused by the different roles that we have in groups. This is mainly because we have both class discussions and a class project going on at the same time.” Another participant felt that “the discussion threads are confusing and overwhelming in this course. There are so many that I often find myself losing track of what we are talking about.”

Surprising Factors

Participants expressed surprise when something fell outside their expectations or prior experience. Participants who had taken other online courses reported, “I have taken many online courses now and know what to expect and how to pace myself.” When participants encountered something unfamiliar, however, they made comments such as “I was surprised by my reaction to the discussion board. I have taken classes that don’t involve the discussion board as much” and “I am spending much more time in the online class than traditional f2f [face-to-face]. I never thought it would require greater time commitment.”

Elements of the course design caused some participants to be either pleasantly or unpleasantly surprised: “I have enough time to think about the discussion and formulate a re[s]ponse that I feel good about posting,” said one participant, while another commented about “how the interface on D2L (the learning management system) did not look like other D2L class sites. It was a happy surprise.” One participant said, “The way threads are posted makes everything seem garbled together. Not enough separation yet too many places to check.”

Participants also expressed surprise when they had positive interactions with the technology: “This was the first time I used video in a response” and “I have not utilized OneDrive previously and am enjoying the benefits it provides with group tasks.” It was a negative surprise when they felt challenged by the technology: “I was surprised at the drastic changes [in the learning management system] and I’m still surprised that I can’t seem to adapt to this new environment.”

Discussion

Formative course evaluations are intended to help course instructors make changes to the online course in real time and enhance the student experience. However, for faculty to make changes to their courses, formative evaluations need to be robust and provide useful data. Findings from this study, conducted across six different fully online graduate-level courses, show that the CIQ can provide useful and actionable information for course instructors.

This study sought to answer the question, in what ways does the use of the CIQ contribute to online course design and delivery? There are three key findings from this study. First, the use of the CIQ for formative evaluation can provide opportunities for real-time adjustments to online course design and inform future redesign of online courses. Second, responses received via the CIQ prioritize the student voice and experience by focusing on factors that are critical to students. Last, this deep-dive analysis reinforces the enduring factors that contribute to effective online course design and delivery.

Informing Course Design

A key element of good online course design is consistency (Subramanian & Budhrani, 2020). This makes it challenging to implement changes to course design and delivery in real time. However, there are changes that instructors can make in real time to enhance student learning experiences.

The students in these courses highlighted the need for clear instructions and expectations. A lack of clarity in different activities, including assignment expectations, group interaction expectations, and instructor expectations, led to feelings of distance and confusion (Baldwin & Trespalacios, 2017). This is important feedback and easy for instructors to act on. After reviewing the midsemester evaluation, instructors can easily address points of confusion and clarify expectations (Gehringer, 2010).

Similarly, when students identify specific activities as unclear or elements of the course site as challenging to navigate, the instructor can correct that in real time by providing the necessary clarifications. Making these adjustments during the course has the most meaning for the students as it directly affects them. Acknowledging student experiences and trying to address concerns show students that the instructor is hearing them and is invested in their success (Dancer & Kamvounias, 2005).

Interactions within the course environment were frequently mentioned by the participants in this study. These students responded positively to proactive, timely feedback from instructors. This is positive reinforcement, and course instructors should make a note to actively maintain this form of interaction with the students.

The most challenging interaction to facilitate is student-student interaction, since it lies outside the instructor’s locus of control (Samuel, 2020). Yet, instructors can encourage student-student interaction. When students engage positively with synchronous sessions, instructors can ensure that they offer more opportunities for this through the remainder of the course. Midsemester, instructors can provide appropriate feedback and encourage students who are less active to participate more. When responses from the CIQ midsemester evaluation are summarized and shared with students, highlighting student-student interaction could also encourage participation.

The graduate adult students in this study appreciated agency over their learning and valued readings and assignments that they found practical and applicable to their lives (Linstrum et al., 2012). Positive comments received through the CIQ reinforce the course design decisions of the instructor. Instructors also have the opportunity to assess their courses midsemester and ensure sufficient opportunities for students to exercise agency over their learning. Instructors might consider giving students a choice over the course content they engage with.

As illustrated above, some feedback can be acted upon in real time, but some changes can only be implemented in a future iteration of the course (Gehringer, 2010). The participants in this study clearly expressed that when a course design had too many moving parts, such as overlapping assignments, they experienced cognitive overload. The specific comments help instructors identify course elements that need to be adjusted. Changes to course timelines and assignments are not feasible in real time. However, this feedback, provided by students as they are experiencing the course, has immediacy and authenticity. This is valuable feedback for future course redesign.

Prioritizing Student Voices

The CIQ as a critically reflective tool was developed to remove the hegemony of the course instructor and create a democratic learning environment where students have a voice in their learning process. Using the CIQ as an anonymous midsemester evaluation tool reduces the power dynamic between student and instructor and elicits honest feedback about a course (Brookfield, 1998). The CIQ’s open-ended questions allow students to speak to online course experiences that were critical to them. Reviewing comments received through the CIQ and modifying course design demonstrates to students that their experiences are meaningful and valued by the instructor.

The literature identifies three main types of interactions in online courses: student-student, student-instructor, and student-content (Moore, 1989). Relying on researcher-generated quantitative surveys, some studies have shown student-instructor interaction to be the most important (Kyei-Blankson et al., 2019; Linstrum et al., 2012; Martin & Bolliger, 2018; Swan, 2002). Other studies have shown that student-student interaction is more important (Bernard et al., 2009). In this study, the use of the CIQ revealed student-student interaction to be the most impactful interaction for participants. Using the CIQ helps bring clarity to instructors when research provides unclear findings. This is important for instructors as it shows them exactly what their students in a specific course are expecting and experiencing. Even though the findings might not align with the literature, acknowledging and addressing them as valid and unique to these participants is important.

Reinforcing Enduring Factors of Effective Online Course Design

All of this study’s findings reinforce the factors that have been identified for effective online course design. Interaction appeared as a factor that engages and affirms students, but it could also make them feel distanced. Student-instructor interaction was highlighted as an affirming factor in this study; it has been repeatedly shown to have a significant effect on student learning, including predicting student success in a course (Crews et al., 2015; Jaggars & Xu, 2016; Martin & Bolliger, 2018). Student-student interactions were repeatedly mentioned by the participants in this study as impactful (Bernard et al., 2009).

Chunking course content (Martin et al., 2019; Subramanian & Budhrani, 2020) and maintaining consistency in the presentation and rhythm of the course (Subramanian & Budhrani, 2020) are important in an online course. The study participants noted that course sites that were difficult to navigate or that had too many elements were overwhelming and added to their cognitive load.

The study participants appreciated meaningful tasks, readings, and course content that had practical applications and were relevant to them (Linstrum et al., 2012). In addition, expectations for assignments need to be stated explicitly. Clear expectations also influence the quality of student-student interactions (Jaggars & Xu, 2016; Martin et al., 2019).

These findings echo the literature on best practices for online course design and validate the use of the CIQ tool. Using the CIQ can give instructors important information on where their courses might be deviating from best practices, offering them an opportunity to reflect on their course and reassess its design.

In this study, students mentioned elements of course design as a significant factor only in relation to their negative experiences of the course. A lack of comments about the course design is therefore an indicator to instructors that their course is functioning as expected.

Limitations of the Study

While the responses from the CIQ can be informative and guide course design, Keefer (2009) notes that the findings are not comprehensive. Students will only focus on incidents that have been impactful for them and will not necessarily provide a holistic review of the course. In this study, the nature of the CIQ questions guided the framing of students’ responses, especially the third and fourth questions, which expressly referred to actions taken by course participants. This limits the scope of responses that the students provide to events that happened to them (Hessler & Taggart, 2011).

Furthermore, a participant self-selection bias is present in these surveys; usually, students with strongly negative or positive responses to a course tend to respond to evaluation surveys (Wolbring & Treischl, 2016). It should also be noted that this study was limited to one department within one university, and the participants were graduate-level students. This might have affected the quality of their responses to the CIQ.

Conclusion and Next Steps

The online instructor plays a prominent role in influencing how students respond to an online course, from designing the course structure, course activities, and assignments to encouraging interaction. Therefore, to develop effective online courses, instructors need robust feedback on their design strategies. This study shows that the CIQ can be used as a tool to elicit useful formative evaluation feedback from students. Instructors can use this feedback to make changes to a course, both in real time and in future iterations, to enhance the student learning experience. Since student evaluations of teaching impact tenure and promotion decisions, tools such as midsemester evaluations that can positively influence SETs should be used. However, these evaluations are meaningful only if implemented through a robust tool such as the CIQ that can facilitate concrete action based on student feedback. Future research should consider using the CIQ at two points in the course: in the middle and at the end. The first deployment of the CIQ can help the instructor identify issues with the course; using the CIQ again at the end of the course can then help highlight whether the instructor’s changes had any impact. The advantage of using the CIQ is that it highlights factors that specifically affect a particular group of students.

Note

  1. The opinions and assertions expressed herein are those of the author(s) and do not necessarily reflect the official policy or position of the Uniformed Services University or the Department of Defense.

References

Anderson, T. (2011). The theory and practice of online learning (5th ed.). Athabasca University Press. https://auspace.athabascau.ca/bitstream/handle/2149/411/?sequence=1

Baldwin, S. J., & Trespalacios, J. (2017). Evaluation instruments and good practices in online education. Online Learning, 21(2). https://olj.onlinelearningconsortium.org/index.php/olj/article/view/913

Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243-1289. https://doi.org/10.3102/0034654309333844

Blackboard. (2017, April 10). Blackboard exemplary course program rubric. https://www.blackboard.com/resources/are-your-courses-exemplary

Boring, A., Ottoboni, K., & Stark, P. (2016, January 7). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1-11. https://www.scienceopen.com/document/read?vid=818d8ec0-5908-47d8-86b4-5dc38f04b23e

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Brookfield, S. (1998). Critically reflective practice. Journal of Continuing Education in the Health Professions, 18(4), 197-205. https://doi.org/10.1002/chp.1340180402

California State University. (2019). CSU QLT course review instrument. https://docs.google.com/document/d/1ilqtDHjYfuJfjq1f8lG_bGhH9Xskcad-2a8aaPmnHG8/edit

Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13(4), 321-341.

Crews, T. B., Wilkinson, K., & Neill, J. K. (2015). Principles for good practice in undergraduate education: Effective online course design to assist students’ success. Journal of Online Learning and Teaching, 11(1), 87-103. https://uscdmc.sc.edu/about/offices_and_divisions/cte/instructional_design/docs/principles_good_practice_undergraduate_education_crews.pdf

Dancer, D., & Kamvounias, P. (2005). Student involvement in assessment: A project designed to assess class participation fairly and reliably. Assessment & Evaluation in Higher Education, 30(4), 445-454. https://doi.org/10.1080/02602930500099235

Erikson, M., Erikson, M. G., & Punzi, E. (2016). Student responses to a reflexive course evaluation. Reflective Practice, 17(6), 663-675. https://doi.org/10.1080/14623943.2016.1206877

Finelli, C. J., Ott, M., Gottfried, A. C., Hershock, C., O’Neal, C., & Kaplan, M. (2008). Utilizing instructional consultations to enhance the teaching performance of engineering faculty. Journal of Engineering Education, 97(4), 397-411. https://doi.org/10.1002/j.2168-9830.2008.tb00989.x

Fisher, R., & Miller, D. (2008). Responding to student expectations: A partnership approach to course evaluation. Assessment & Evaluation in Higher Education, 33(2), 191-202. https://doi.org/10.1080/02602930701292514

Garrison, D. R., Anderson, T., & Archer, W. (1999). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

Gehringer, E. (2010, June 20-23). Daily course evaluation with Google forms [Paper presentation]. 2010 American Society for Engineering Education Annual Conference & Exposition, Louisville, KY, United States. https://doi.org/10.18260/1-2--16350

Glowacki-Dudka, M., & Barnett, N. (2007). Connecting critical reflection and group development in online adult education classrooms. International Journal of Teaching and Learning in Higher Education, 19(1), 43-52. https://eric.ed.gov/?id=EJ901286

Harasim, L. (2017). Learning theory and online technologies. Taylor & Francis. https://doi.org/10.4324/9781315716831

Hessler, H. B., & Taggart, A. R. (2011). What’s stalling learning? Using a formative assessment tool to address critical incidents in class. International Journal for the Scholarship of Teaching and Learning, 5(1), Article 9. https://doi.org/10.20429/ijsotl.2011.050109

Hurney, C., Harris, N., Bates Prins, S., & Kruck, S. E. (2014). The impact of a learner-centered, mid-semester course evaluation on students. The Journal of Faculty Development, 28(3), 55-62. https://cetl.uni.edu/sites/default/files/impact_of_learner-centered_by_hurney_1.pdf

Jacobs, M. A. (2015). By their pupils they’ll be taught: Using Critical Incident Questionnaire as feedback. Journal of Invitational Theory and Practice, 21, 9-22. https://eric.ed.gov/?id=EJ1163006

Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers & Education, 95, 270-284. https://doi.org/10.1016/j.compedu.2016.01.014

Keefer, J. M. (2009). The Critical Incident Questionnaire (CIQ): From research to practice and back again. In R. L. Lawrence (Ed.), Proceedings of the 50th Annual Adult Education Research Conference (pp. 177-182). https://digitalcommons.nl.edu/cgi/viewcontent.cgi?referer=https://scholar.google.com/&httpsredir=1&article=1000&context=ace_aerc#page=191

Kyei-Blankson, L., Ntuli, E., & Donnelly, H. (2019). Establishing the importance of interaction and presence to student learning in online environments. Journal of Interactive Learning Research, 30(4), 539-560. https://www.learntechlib.org/p/161956/

Linstrum, K. S., Ballard, G., & Shelby, T. (2012). Formative evaluation: Using the Critical Incident Questionnaire in a graduate counseling course on cultural diversity. Journal of Intercultural Disciplines, 10, 94-102. https://www.proquest.com/docview/1033777245

Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning, 22(1), 205-222. https://doi.org/10.24059/olj.v22i1.1092

Martin, F., Ritzhaupt, A., Kumar, S., & Budhrani, K. (2019). Award-winning faculty online teaching practices: Course design, assessment and evaluation, and facilitation. The Internet and Higher Education, 42, 34-43. https://doi.org/10.1016/j.iheduc.2019.04.001

McKeachie, W. J., Lin, Y.-G., & Mann, W. (1971). Student ratings of teacher effectiveness: Validity studies. American Educational Research Journal, 8(3), 435-445. https://doi.org/10.3102/00028312008003435

Mitchell, K. M., & Martin, J. (2018). Gender bias in student evaluations. PS: Political Science & Politics, 51(3), 648-652. https://doi.org/10.1017/S104909651800001X

Moore, M. (1997). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22-38). Routledge. http://www.c3l.uni-oldenburg.de/cde/found/moore93.pdf

Moore, M. G. (2013). Handbook of distance education. Routledge.

Phelan, L. (2012). Interrogating students’ perceptions of their online learning experiences with Brookfield’s Critical Incident Questionnaire. Distance Education, 33(1), 31-44. https://doi.org/10.1080/01587919.2012.667958

Picciano, A. G. (2017). Theories and frameworks for online education: Seeking an integrated model. Online Learning, 21(3). https://doi.org/10.24059/olj.v21i3.1225

Quality Matters. (2018). Specific review standards from the QM higher education rubric, sixth edition. https://www.qualitymatters.org/qa-resources/rubric-standards/higher-ed-rubric

Rao, V., & Woolcock, M. (2003). Integrating qualitative and quantitative approaches in program evaluation. In F. Bourguignon, & L. A. Pereira da Silva, (Eds.), The impact of economic policies on poverty and income distribution: Evaluation techniques and tools (pp. 165-190). World Bank and Oxford University Press.

Reid, L. D. (2010). The role of perceived race and gender in the evaluation of college teaching on RateMyProfessors.Com. Journal of Diversity in Higher Education, 3(3), 137-152. https://doi.org/10.1037/a0019865

Rodin, M., & Rodin, B. (1973). Student evaluations of teachers. The Journal of Economic Education, 5(1), 5-9. https://www.jstor.org/stable/1734252

Samuel, A. (2020). Zones of agency: Understanding online faculty experiences of presence. The International Review of Research in Open and Distributed Learning, 21(4), 79-95. https://doi.org/10.19173/irrodl.v21i4.4905

Stark, P., & Freishtat, R. (2014, September 29). An evaluation of course evaluations. ScienceOpen Research, 1-7. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

State University of New York. (2018). The SUNY online course quality review rubric: OSCQR. https://oscqr.suny.edu/

Steyn, C., Davies, C., & Sambo, A. (2019). Eliciting student feedback for course development: The application of a qualitative course evaluation tool among business research students. Assessment & Evaluation in Higher Education, 44(1), 11-24. https://doi.org/10.1080/02602938.2018.1466266

Subramanian, K., & Budhrani, K. (2020, February). Influence of course design on student engagement and motivation in an online course. In SIGCSE ’20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education (pp. 303-308). Association for Computing Machinery. https://dl.acm.org/doi/abs/10.1145/3328778.3366828

Swan, K. (2002). Building learning communities in online courses: The importance of interaction. Education, Communication & Information, 2(1), 23-49. https://doi.org/10.1080/1463631022000005016

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. https://doi.org/10.1016/j.stueduc.2016.08.007

Veeck, A., O’Reilly, K., MacMillan, A., & Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38(3), 157-169. http://doi.org/10.1177/0273475315619652

Wolbring, T., & Treischl, E. (2016). Selection bias in students’ evaluation of teaching. Research in Higher Education, 57(1), 51-71. https://link.springer.com/content/pdf/10.1007/s11162-015-9378-7.pdf

 


Using the Critical Incident Questionnaire as a Formative Evaluation Tool to Inform Online Course Design: A Qualitative Study by Anita Samuel and Simone C. O. Conceição is licensed under a Creative Commons Attribution 4.0 International License.