International Review of Research in Open and Distributed Learning

Volume 23, Number 3

September - 2022

 

Revising and Validating the Community of Inquiry Instrument for MOOCs and Other Global Online Courses

 

Jered Borup, Joan Kang Shin, Marvin Powell, Anya S. Evmenova, and Woomee Kim
George Mason University

 

Abstract

Globally, online course enrollments have grown, and English is often used as a lingua franca for instruction. The Community of Inquiry (CoI) framework can inform the creation of more supportive, interaction-rich online learning environments. However, the framework and its accompanying validated instrument were created in North America, limiting researchers’ ability to use the instrument in courses where participants have varying levels of English language proficiency. We revised the CoI instrument so it could be more easily read and understood by individuals whose native language is not English. Using exploratory and confirmatory factor analyses (EFA and CFA) on data obtained from global online courses and MOOCs, we found the revised instrument had good fit statistics once seven items were removed. This study expands the usability of the CoI instrument beyond the original and translated versions, and provides an example of adapting and validating an existing instrument for global courses.

Keywords: Community of inquiry, global online courses, MOOCs, teachers of English, English as a foreign language

Introduction

Online learning has grown dramatically despite relatively high attrition rates (Bawa, 2016). Garrison et al.’s (2000) Community of Inquiry (CoI) framework highlights how outcomes can improve through meaningful interactions. Arbaugh et al. (2008) developed and validated an instrument that measured the CoI constructs—teaching presence, social presence, and cognitive presence—allowing researchers to better identify factors that impact outcomes. The “overwhelming majority” of research using the instrument has been conducted in North America (Stenbom, 2018, p. 24), and it is important to ensure the instrument is also appropriate for courses with a global audience. Since it is not practical to provide the survey in every language, especially in large global courses such as massive open online courses (MOOCs), it is important to develop an English version of the survey that is easily comprehensible at varying levels of English language proficiency. In this research, we revised the CoI instrument to be comprehensible for culturally and linguistically diverse English language educators and validated it using survey responses following teacher professional development courses offered globally. Specifically, we revised the instrument to the B1 level of the Common European Framework of Reference (CEFR) for English (i.e., a lower-intermediate level of English language proficiency).

We sought to answer the following research question: To what extent is the revised CoI instrument a valid measure of teaching, social, and cognitive presence in global online courses and MOOCs where participants have varying levels of English language proficiency?

Literature Review

Growth of Online Learning

At universities outside the United States, online course enrollments have been growing rapidly (Xiao, 2018), a growth likely to accelerate in the wake of emergency remote teaching during the COVID-19 pandemic (Teräs et al., 2020).

MOOCs have also impacted global online learning in the last decade because they “offer free or low-cost education to anyone, anytime, anywhere, and on a massive scale” (Lowenthal & Hodges, 2015, p. 84). MOOCs have been categorized based on learning interactions and their dominant learning strategies. Connectivist MOOCs (cMOOCs) emphasize learner-learner interaction and community, while extended MOOCs (xMOOCs) focus on learner-content interaction and a cognitive-behaviorist approach to learning (Anders, 2015). Blended MOOCs (bMOOCs) combine online learning with in-person meetings to discuss and apply learning (Yousef et al., 2015).

MOOCs have the potential to serve as scalable solutions to the challenges and demands of teacher professional learning. For instance, pre-service teachers from Israel expressed positive attitudes toward learning content, pedagogical, and technological knowledge after enrolling in an international MOOC for credit (Donitsa-Schmidt & Topaz, 2018). Both pre-service and in-service teachers in the US have demonstrated personal and professional growth after enrolling and participating in a professional development MOOC (Phan & Zhu, 2020). In-service elementary school teachers participating in a teachers’ professional development MOOC in Greece enhanced their self-efficacy beliefs compared to those teachers who did not participate in the course (Tzovla et al., 2021). As teachers are expected to adjust to rapidly evolving national education policies (Zein, 2019) and meet the increasing demands for flexible and inclusive education for diverse learners, MOOCs can become a tool for open education and teacher professional development for all (Koukis & Jimoyiannis, 2019).

Language MOOCs are dedicated to online instruction in second or foreign languages. They can be used effectively to teach all aspects of language, especially reading and listening skills (Sallam et al., 2020). MOOCs designed to improve teachers’ instructional practices in teaching English as a second language are largely offered in English (Finardi & Tyler, 2015). English MOOCs are especially popular with English language learners (ELLs; see Wilson & Gruzd, 2014) who commonly enroll to improve their English language skills as well as their economic, social, and geographic mobility (Uchidiuno et al., 2018). While there are benefits to offering courses in English, those who design and develop MOOCs should take into consideration the English proficiency of their learners and adjust the language level of the MOOC without sacrificing content.

CoI Framework Supporting Online Learning Performance

Online courses tend to have attrition rates 10 to 20% higher than in-person courses (Bawa, 2016). Attrition rates are much worse in MOOCs. Fewer than 5% of participants enrolled in MOOCs offered by MIT and Harvard University passed their courses. The pass rate rose to nearly 16% when students indicated they intended to pass the MOOC, and only reached 50% when students paid a small fee (Reich & Ruipérez-Valiente, 2019a, 2019b). In order to improve course outcomes, many have attempted to strengthen the three presences highlighted in Garrison et al.’s (2000) CoI framework (see Figure 1).

Figure 1

Model for Community of Inquiry Framework

Note. Adapted from Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

The CoI framework was created following content analyses of discussion board comments. A decade later, Archer (2010)—one of the original CoI authors—suggested that “the time has come to build outwards from the firm base established by the many researchers who have applied this framework in the context of online discussions” (p. 69). Stenbom (2018) identified and analyzed 103 journal articles that used the CoI instrument and found that a primary purpose of using the instrument was to gain insight into a variety of aspects of a learning environment or even compare entire courses. Fiock (2020) also reviewed research using the CoI framework and showed that it has focused on a wide range of aspects related to designing and facilitating online courses. Kumar et al. (2011) even applied the CoI framework to the design of an entire online doctoral program. Xing (2019) summarized that the CoI framework has “been widely applied to the design of online courses” (p. 101), including MOOCs (see Thymniou & Tsitouridou, 2021).

Need for a Validated Global Survey

Using 287 online student responses collected from four North American universities, Arbaugh et al. (2008) developed and validated a widely used survey instrument that measured each of the three presences in the CoI framework. Stenbom (2018) reviewed 103 journal articles using the CoI instrument and found that an “overwhelming majority of the studies” were conducted in North America. At the same time, there has been important work using the instrument internationally. For instance, it has been translated and validated in several languages including Portuguese (Moreira et al., 2013), Arabic (Alaulamie, 2014), Korean (Yu & Richardson, 2015), Swedish (Öberg & Nyström, 2016), Chinese (Ma et al., 2017), Spanish (Gil-Jaurena et al., 2019), and French (Heilporn & Lakhal, 2020). However, considering that global courses such as MOOCs enroll students with many different native languages, offering a survey in the language of instruction is the most logical approach. The original English CoI survey has been used for research in international contexts such as Singapore (Choy & Quek, 2016), South Korea (Kim, 2017), and China (Zhang, 2020). However, in these studies, accurate comprehension of the survey items may have been limited by respondents’ language proficiency. Because English is commonly used in international courses (Finardi & Tyler, 2015; Wilson & Gruzd, 2014), the purpose of this research was to create and validate a version of the CoI survey for use in international courses where English is used but is not students’ native or primary language. This study aimed to expand the usability of the CoI instrument beyond the original and translated versions. It also provided an example of adapting and validating an existing instrument without translating the language. Factor analytic techniques have been shown to provide evidence for instrument validation (Brown, 2015; Tabachnick & Fidell, 2019).

Methods

Research Context and Background

Under a grant from the US Department of State, we developed and freely offered three versions of an online professional development course to teachers of English whose students’ ages ranged from 3 to 10, in countries where English was not the dominant language. The first version was a global online course (GOC) with eight weekly modules and an enrollment cap of 25 students, allowing for weekly facilitated discussions and personalized feedback on assignments. In total, we offered 25 sections of the GOC to 609 students from 89 countries. All students who applied were nominated by their local US embassy, and selected and enrolled by the US Department of State. We also offered the GOC’s first five modules to students in more than 100 countries as two different versions of a MOOC. The first MOOC maintained a set start and end date with weekly deadlines. The second MOOC provided students with flexibility in their pacing so long as they finished the modules within the 12-week period in which it was offered. In total, the five-week MOOC enrolled 21,232 students (7,221 successfully completed the course); 8,691 students enrolled in the more flexible MOOC (1,494 successfully completed the course). Individualized instructor feedback was not provided on submitted assignments, but module discussions were facilitated by the instructors and 20 top-performing GOC students. As in the GOC, the instructor posted regular announcements and reminders to help motivate students. As expected, student engagement and completion varied across the three versions of the course. Table 1 outlines completion rates both for all enrolled students and for students who completed at least one activity, whom we defined as active students.

Table 1

Participants Across Three Course Formats

Course type | Number enrolled | Number active | Number completed | % Completed (enrolled students) | % Completed (active students)
Global online course (25 sections) | 609 | 534 | 449 | 74% | 84%
5-week MOOC | 21,232 | 9,948 | 7,221 | 34% | 73%
Flexible MOOC | 8,691 | 2,379 | 1,494 | 17% | 63%

The purpose of this program was for experts in the field to provide research-based professional development opportunities to English as a foreign language (EFL) teachers and teacher educators around the world who may not otherwise have access. Since the participating teachers were largely ELLs themselves, the US Department of State required that all course materials be developed at the B1 level, based on the CEFR for English, meaning a participant “can read straightforward factual texts on subjects related to his/her field and interests with a satisfactory level of comprehension” (Council of Europe, 2018, p. 60).

Data Collection

Since modules 1 to 5 were nearly identical across all three course formats, all participants were invited to voluntarily complete the CoI instrument in Module 5. A course page provided an invitation to participate in our study, a description of our survey research following IRB requirements, and a link to a Qualtrics survey. The Qualtrics survey included respondents’ informed consent to participate in research, demographic information (e.g., gender, age, country, teaching position, number of years teaching), and our revised CoI survey items. The original CoI survey was developed and validated with English-speaking students from North America and, understandably, was written at a higher level than CEFR B1, the level required for use in the course. As a result, three members of the research team worked collaboratively to revise the items. All three members had previously used the CoI framework in research. Additionally, one team member was an EFL expert and another was a non-native English speaker who had also been trained as an EFL teacher. The revised items were written at the B1 level while still addressing the intended CoI constructs. No changes were made to the response scale (see Table 2).

Table 2

Comparing the Original and Revised Items

Construct | Item label | Original item | Revised item
Teaching presence | TP1 | The instructor clearly communicated important course topics. | The teacher clearly communicated about important course topics.
| TP2 | The instructor clearly communicated important course goals. | The teacher clearly communicated about important course goals.
| TP3 | The instructor provided clear instructions on how to participate in course learning activities. | The teacher gave clear instructions on how to complete course activities.
| TP4 | The instructor clearly communicated important due dates/time frames for learning activities. | The teacher clearly communicated about important due dates.
| TP5 | The instructor was helpful in identifying areas of agreement and disagreement on course topics that helped me to learn. | The teacher helped explain difficult topics to help me learn.
| TP6 | The instructor was helpful in guiding the class towards understanding course topics in a way that helped me clarify my thinking. | The teacher helped me understand my thinking about course topics.
| TP7 | The instructor helped to keep course participants engaged and participating in productive dialogue. | The teacher helped students be engaged and participate in dialogue.
| TP8 | The instructor helped keep the course participants on task in a way that helped me to learn. | The teacher helped keep students on task, and it helped me learn.
| TP9 | The instructor encouraged course participants to explore new concepts in this course. | The teacher made me want to learn new things.
| TP10 | Instructor actions reinforced the development of a sense of community among course participants. | The teacher made students feel as part of a community.
| TP11 | The instructor helped to focus discussion on relevant issues in a way that helped me to learn. | The teacher set up discussions to help me learn.
| TP12 | The instructor provided feedback that helped me understand my strengths and weaknesses relative to the course’s goals and objectives. | The teacher provided feedback that helped me learn.
| TP13 | The instructor provided feedback in a timely fashion. | The teacher provided feedback on time.
Social presence | SP1 | Getting to know other course participants gave me a sense of belonging in the course. | Getting to know other students made me feel part of the course.
| SP2 | I was able to form distinct impressions of some course participants. | I got to know some students.
| SP3 | Online or Web-based communication is an excellent medium for social interaction. | Online communication is an excellent way to interact with people.
| SP4 | I felt comfortable conversing through the online medium. | I felt comfortable communicating online.
| SP5 | I felt comfortable participating in the course discussions. | I felt comfortable participating in the course discussions.
| SP6 | I felt comfortable interacting with other course participants. | I felt comfortable interacting with other students.
| SP7 | I felt comfortable disagreeing with other course participants while still maintaining a sense of trust. | I felt it was OK to disagree with other students.
| SP8 | I felt that my point of view was acknowledged by other course participants. | I felt that other students understood my point of view.
| SP9 | Online discussions help me to develop a sense of collaboration. | Online discussions help me to collaborate with others.
Cognitive presence | CP1 | Problems posed increased my interest in course issues. | Questions asked in the course increased my interest in course topics.
| CP2 | Course activities piqued my curiosity. | Course activities made me curious to learn more.
| CP3 | I felt motivated to explore content-related questions. | I felt motivated to explore the questions asked.
| CP4 | I utilized a variety of information sources to explore problems posed in this course. | I used many resources to explore questions asked.
| CP5 | Brainstorming and finding relevant information helped me resolve content-related questions. | Sharing and finding information with classmates helped me find answers to questions asked.
| CP6 | Online discussions were valuable in helping me appreciate different perspectives. | Online discussions helped me see different perspectives.
| CP7 | Combining new information helped me answer questions raised in course activities. | Combining all of the new information helped me answer questions asked in course activities.
| CP8 | Learning activities helped me construct explanations/solutions. | Course activities helped me create explanations/solutions.
| CP9 | Reflection on course content and discussions helped me understand fundamental concepts in this class. | Thinking about the course content and discussions helped me understand course topics.
| CP10 | I can describe ways to test and apply the knowledge created in this course. | I can describe ways to use the knowledge created in this course.
| CP11 | I have developed solutions to course problems that can be applied in practice. | I developed solutions that I can use in my teaching.
| CP12 | I can apply the knowledge created in this course to my work or other non-class related activities. | I can apply the knowledge created in this course to my work.

Note. Participants used the response scale: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree.

To achieve the B1 level, items were revised to use more familiar terms and grammatical structures that would also be less ambiguous for participants coming from diverse linguistic and cultural backgrounds. For example, we replaced instructor with teacher, a term more familiar to teachers working in classroom contexts. Some verbs were simplified, such as changing conversing to the more familiar communicating. Some original items had complex sentences, such as “The instructor was helpful in guiding the class towards understanding course topics in a way that helped me clarify my thinking.” We adapted this item by personalizing it and simplifying the sentence structure: “The teacher helped me understand my thinking about course topics.” In addition, we avoided words that have different meanings in other contexts (e.g., the word fashion). These revisions preserved the meaning and intent of the survey items while making them more comprehensible to global course participants.

Data Analysis

We randomly divided the data into two samples. The first half (n = 744) was used to conduct an exploratory factor analysis (EFA). The second half (n = 743) was used to confirm the factor structure with a confirmatory factor analysis (CFA). Gorsuch (1983) explained that EFA determines “factors that best reproduce the variables under the maximum likelihood conditions, [while CFA] tests specific hypothesis regarding the nature of the factors” (p. 129). We first conducted an EFA to determine the items that best described the constructs. EFAs are used to assess the factor structure of a set of variables. When data are measured at a categorical level (e.g., ordinal, polytomous), Brown (2015) proposed the use of a robust weighted least squares (WLSMV) estimator. An oblique rotation method (geomin) was applied, assuming the extracted factors were correlated. Rotating the factor matrix allowed for a more interpretable solution (Tabachnick & Fidell, 2019). Correlation adequacy and sampling adequacy were assessed using Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO; Kaiser, 1970) measure. KMO values greater than .5 are acceptable, and values greater than .9 are superb (Field, 2009). A significant Bartlett’s test indicates adequate correlations within the matrix.
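
As an illustration, the sample split and adequacy checks could be scripted as follows. This is a minimal sketch in Python using the open-source factor_analyzer package; the file name and item columns are hypothetical, and factor_analyzer estimates via minres/maximum likelihood rather than the WLSMV estimator used in this study, so its results would only approximate ours.

```python
# Sketch: split the sample and check factorability before the EFA.
# Assumptions: a hypothetical file "coi_responses.csv" whose columns are
# the 34 Likert items labeled as in Table 2 (TP1..TP13, SP1..SP9, CP1..CP12).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("coi_responses.csv")

# Random half-split: one half for the EFA, the held-out half for the CFA.
efa_half = items.sample(frac=0.5, random_state=42)
cfa_half = items.drop(efa_half.index)

# Sampling adequacy: KMO > .5 is acceptable, > .9 superb (Field, 2009).
_, kmo_model = calculate_kmo(efa_half)
# Correlation adequacy: a significant Bartlett's test supports factoring.
chi2, p = calculate_bartlett_sphericity(efa_half)
print(f"KMO = {kmo_model:.3f}; Bartlett chi2 = {chi2:.2f}, p = {p:.4f}")

# Three-factor EFA with an oblique geomin rotation, since the extracted
# factors (the presences) are assumed to correlate. Note: factor_analyzer
# does not offer the WLSMV estimator reported in the article.
efa = FactorAnalyzer(n_factors=3, rotation="geomin_obl")
efa.fit(efa_half)
```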

Several pieces of information were needed to identify the number of factors to extract in an EFA model. EFA is a descriptive and exploratory tool; therefore, to determine the number of factors to retain, we relied on (a) item-factor correlations (loadings); (b) goodness of model fit; (c) percent of variance explained by the factors; and (d) theoretical explanations. Meyers et al. (2017) recommended factor loadings of .40 and higher with sample sizes in excess of 200 participants; however, loadings in the high .3s may also be acceptable. We concentrated on five fit indices: (a) the χ2 goodness-of-fit statistic; (b) the root mean square error of approximation (RMSEA; Steiger & Lind, 1980); (c) the standardized root mean square residual (SRMR); (d) the comparative fit index (CFI); and (e) the Tucker-Lewis index (TLI). SRMR, RMSEA, and χ2 are badness-of-fit indices; therefore, values of zero indicate perfect fit, and values closer to zero reflect better fit (Brown, 2015). A model is deemed to have good fit if RMSEA ≤ 0.05 (Hu & Bentler, 1999) and acceptable fit once the upper bound of the confidence interval is less than or equal to 0.10 (Kline, 2011), along with low values for SRMR (≤ .05; Schreiber et al., 2006). CFI and TLI are goodness-of-fit indices, where values in the range of .90 to .95 generally represent acceptable model fit (Brown, 2015).
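
These cutoffs amount to a simple decision rule. The hypothetical helper below encodes the thresholds cited above (Hu & Bentler, 1999; Kline, 2011; Schreiber et al., 2006; Brown, 2015); the function name and structure are our own illustration.

```python
def assess_fit(rmsea: float, rmsea_ci_upper: float, srmr: float,
               cfi: float, tli: float) -> dict:
    """Check reported fit indices against the cutoffs used in this study."""
    return {
        # Good if RMSEA <= .05; still acceptable if the 90% CI upper
        # bound is <= .10 (Hu & Bentler, 1999; Kline, 2011).
        "rmsea_ok": rmsea <= 0.05 or rmsea_ci_upper <= 0.10,
        "srmr_ok": srmr <= 0.05,   # low SRMR (Schreiber et al., 2006)
        "cfi_ok": cfi >= 0.90,     # .90-.95 acceptable (Brown, 2015)
        "tli_ok": tli >= 0.90,
    }

# Example: the 3-factor EFA model reported later in Table 3.
print(assess_fit(rmsea=0.077, rmsea_ci_upper=0.080,
                 srmr=0.041, cfi=0.951, tli=0.941))
# {'rmsea_ok': True, 'srmr_ok': True, 'cfi_ok': True, 'tli_ok': True}
```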

The main premise of factor analysis is to extract common variance among items. As such, reporting the total amount of variance extracted is an important consideration in the factor analytic process. Tabachnick and Fidell (2019) suggested that the final factor solution should explain at least 50% of the total item variance. Additionally, the amount of variance in each item explained by the retained factors (i.e., communality) should also be reported (Field, 2009). We were guided by Tabachnick and Fidell (2019) in using a .50 cutoff for communality coefficients (h2) and by Field (2009) in targeting an average of at least .60 across all items. The CFA applied the same fit indices used for the EFA; the CFA model was employed to assess the empirical factor structure found through the EFA.
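
Continuing the hypothetical sketch above, the variance-explained and communality criteria can be read directly off the fitted EFA object (factor_analyzer exposes cumulative variance and per-item communalities):

```python
import numpy as np

# Total variance explained by the retained factors should exceed 50%
# (Tabachnick & Fidell, 2019). get_factor_variance() returns per-factor,
# proportional, and cumulative variance.
_, _, cumulative = efa.get_factor_variance()
print(f"Variance explained: {cumulative[-1]:.1%}")

# Communalities (h2): flag items below the .50 cutoff and verify that
# the average is at least .60 (Field, 2009).
h2 = efa.get_communalities()
low = [col for col, h in zip(efa_half.columns, h2) if h < 0.50]
print(f"Mean h2 = {np.mean(h2):.2f}; items below .50: {low}")
```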

Findings

The WLSMV extraction method was used to conduct the EFA. Preliminary analysis indicated superb sampling adequacy (KMO = .965). Bartlett’s test indicated the correlations were also large enough for factor analysis [χ2(946) = 24676.05, p < .001]. We generated four models to determine the best structure for the data. The fit indices for the models are presented in Table 3. The first two models (one-factor and two-factor) did not meet the preset criteria for model fit, with CFI and TLI below the preferred .95 cutoff. SRMR and RMSEA were also out of range.

Table 3

Fit Indices for the Four Exploratory Factor Analysis Models (N = 34 items)

Model | RMSEA [90% CI] | CFI | TLI | SRMR | χ2 | Variance explained
1-Factor | 0.106 [0.103, 0.109] | .894 | .887 | .092 | χ2(527) = 3785.52, p < .001 | 61.36%
2-Factor | 0.092 [0.089, 0.096] | .925 | .914 | .054 | χ2(494) = 2812.86, p < .001 | 68.83%
3-Factor | 0.077 [0.073, 0.080] | .951 | .941 | .041 | χ2(462) = 1968.22, p < .001 | 72.99%
4-Factor | 0.070 [0.066, 0.074] | .962 | .951 | .035 | χ2(431) = 1597.24, p < .001 | 75.84%

Note. 90% CI = confidence intervals. Root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker Lewis index (TLI), standardized root mean square residual (SRMR). For the chi-square (χ2), degrees of freedom are in parentheses.

The third and fourth models showed more acceptable fit and were further examined. Although the RMSEA values for both models were greater than 0.05, the upper bounds of their confidence intervals were within the acceptable range. Overall, the 3-factor and 4-factor models better represented the data. Further assessment of these models found that in the 4-factor model, several items had severe cross-loadings. As a team, we discussed the wording of these items and potential reasons for the cross-loadings. We decided these items were problematic and therefore deleted them. Items that covered the theoretical representation of the constructs being measured remained in the analysis. In an iterative process, we removed individual items one at a time, observing the correlations at each iteration. As a result of these analyses, we removed seven items (i.e., TP2, TP13, SP1, SP2, SP9, CP5, and CP11).
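
This pruning procedure can be sketched as a refit-and-inspect loop. The hypothetical version below continues the earlier sketch and walks through the seven items ultimately removed, though in practice each removal also reflected team judgment about item wording and theory rather than loadings alone:

```python
# Iteratively remove items and refit, inspecting the rotated loadings
# (and their cross-loadings) after each step before deciding what, if
# anything, to remove next.
reduced = efa_half.copy()
for item in ["TP2", "TP13", "SP1", "SP2", "SP9", "CP5", "CP11"]:
    reduced = reduced.drop(columns=item)
    efa = FactorAnalyzer(n_factors=3, rotation="geomin_obl")
    efa.fit(reduced)
    loadings = pd.DataFrame(efa.loadings_, index=reduced.columns)
    # Count salient loadings (> .40) per factor as a quick structure check.
    print(item, "removed:", (loadings.abs() > 0.40).sum().to_dict())
```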

After the theoretically guided removal of those items, we regenerated the four models. Those fit indices are presented in Table 4. Once again, the 3-factor and 4-factor models were better representations of the data. Table 5 presents the 27-item factor loadings of both models. Further inspection of the 4-factor model revealed more cross-loadings between factors, and no items loaded on the fourth factor above .40. For example, in the 4-factor solution, item CP10 could be a function of both the second and fourth factors. After discussing the two models, we opted for the 3-factor model, as its items better fit the theorized teaching presence (n = 11), social presence (n = 6), and cognitive presence (n = 10). This model had the simplest structure, with all primary loadings greater than .40 (Meyers et al., 2017) and adequate fit indices, and the three factors explained 73.81% of the variance in the items (more than the 50% recommended by Tabachnick & Fidell, 2019). Additionally, the average communality across the retained items was .60, suggesting that we had explained, on average, 60% of the variance across all the items included in the three retained factors.

Table 4

Fit Indices for the Four Exploratory Factor Analysis Models (n = 27 items)

Model | RMSEA [90% CI] | CFI | TLI | SRMR | χ2 | Variance explained
1-Factor | 0.117 [0.113, 0.120] | .883 | .873 | .093 | χ2(324) = 3611.60, p < .001 | 62.15%
2-Factor | 0.095 [0.091, 0.099] | .929 | .916 | .049 | χ2(298) = 2298.19, p < .001 | 70.09%
3-Factor | 0.076 [0.072, 0.080] | .958 | .946 | .034 | χ2(273) = 1450.06, p < .001 | 73.81%
4-Factor | 0.071 [0.066, 0.075] | .967 | .954 | .029 | χ2(249) = 1170.58, p < .001 | 76.65%

Note. 90% CI = confidence intervals. Root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker Lewis index (TLI), standardized root mean square residual (SRMR). For the chi-square (χ2), degrees of freedom are in parentheses.

Table 5

Factor Loadings for the Three- and Four-Factor Solutions

Item | 4-Factor: Factor 1 | 4-Factor: Factor 2 | 4-Factor: Factor 3 | 4-Factor: Factor 4 | 3-Factor: Factor 1 | 3-Factor: Factor 2 | 3-Factor: Factor 3 | h2
TP1 | .740 | -.003 | .010 | .306 | .699 | -.104 | .240 | .56
TP3 | .768 | -.008 | .031 | .307 | .733 | -.109 | .255 | .61
TP4 | .714 | .064 | -.028 | .240 | .681 | -.024 | .159 | .49
TP5 | .898 | -.015 | -.039 | .055 | .897 | -.001 | -.034 | .81
TP6 | .878 | -.020 | -.009 | .071 | .876 | -.004 | .000 | .77
TP7 | .882 | .044 | -.014 | -.163 | .920 | .157 | -.204 | .91
TP8 | .912 | .015 | .014 | -.078 | .948 | .117 | -.136 | .93
TP9 | .661 | -.085 | .325 | .031 | .673 | .029 | .225 | .50
TP10 | .641 | .100 | .185 | -.035 | .660 | .198 | .068 | .48
TP11 | .536 | .243 | .147 | -.031 | .546 | .312 | .062 | .40
TP12 | .479 | .205 | .131 | -.084 | .495 | .280 | .018 | .32
SP3 | .101 | .693 | .053 | -.053 | .108 | .734 | -.017 | .55
SP4 | .058 | .810 | .065 | -.021 | .057 | .844 | .016 | .72
SP5 | -.017 | .905 | .013 | .066 | -.030 | .918 | .020 | .84
SP6 | -.005 | .975 | -.051 | .028 | -.016 | .989 | -.060 | .98
SP7 | -.046 | .591 | .000 | .111 | -.065 | .572 | .060 | .34
SP8 | .048 | .678 | .071 | .002 | .047 | .710 | .033 | .51
CP1 | .036 | .146 | .740 | -.104 | .044 | .312 | .560 | .41
CP2 | .032 | -.012 | .895 | -.077 | .034 | .193 | .699 | .53
CP3 | -.019 | -.053 | .967 | -.093 | -.007 | .170 | .741 | .58
CP4 | .000 | .149 | .632 | -.030 | .013 | .284 | .492 | .32
CP6 | .033 | .238 | .591 | .108 | .029 | .335 | .542 | .41
CP7 | .009 | .079 | .768 | .216 | -.007 | .179 | .767 | .62
CP8 | -.014 | .002 | .834 | .276 | -.037 | .095 | .867 | .76
CP9 | .053 | .031 | .680 | .363 | .010 | .061 | .817 | .67
CP10 | -.035 | .058 | .702 | .379 | -.076 | .084 | .845 | .73
CP12 | .169 | -.040 | .607 | .327 | .132 | -.017 | .732 | .55

Note. Factor loadings greater than .40 are in boldface. h2 is communalities for the 3-Factor model only.

Finally, we conducted the CFA to assess the factor structure with an independent sample. First, we assessed the internal consistency of the subscales using Cronbach’s (1951) alpha coefficient with the traditional .70 criterion (Nunnally & Bernstein, 1994). Higher values reflect higher internal consistency (i.e., the items share a large amount of variance). We found that the items for teaching presence (α = .950, 95% CI [.945, .955]), social presence (α = .892, 95% CI [.880, .903]), and cognitive presence (α = .949, 95% CI [.943, .954]) reliably measured the constructs. The results of the CFA revealed that the factor structure from the EFA adequately represented the data: CFI = .974, TLI = .972, and RMSEA = 0.067, 90% CI [0.063, 0.070]. The factor loadings are presented in Table 6. Moderate to high correlations existed across the three factors: teaching presence and social presence (r = .614), teaching presence and cognitive presence (r = .705), and social presence and cognitive presence (r = .679).
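
A sketch of this confirmation step on the held-out half follows, using the pingouin package for Cronbach's alpha and semopy for the CFA. Both packages are our assumptions rather than the software the authors used, and semopy's default maximum-likelihood objective differs from WLSMV, so the indices would only approximate those reported.

```python
import pingouin as pg
import semopy

# Retained items per subscale (27 items after the EFA-driven removals).
subscales = {
    "TP": ["TP1", "TP3", "TP4", "TP5", "TP6", "TP7", "TP8",
           "TP9", "TP10", "TP11", "TP12"],
    "SP": ["SP3", "SP4", "SP5", "SP6", "SP7", "SP8"],
    "CP": ["CP1", "CP2", "CP3", "CP4", "CP6", "CP7", "CP8",
           "CP9", "CP10", "CP12"],
}

# Internal consistency: Cronbach's alpha with a 95% CI per subscale.
for name, cols in subscales.items():
    alpha, ci = pg.cronbach_alpha(data=cfa_half[cols])
    print(f"{name}: alpha = {alpha:.3f}, 95% CI {ci}")

# Three-factor CFA mirroring the EFA structure (lavaan-style syntax).
desc = "\n".join(f"{name} =~ " + " + ".join(cols)
                 for name, cols in subscales.items())
cfa = semopy.Model(desc)
cfa.fit(cfa_half[sum(subscales.values(), [])])
stats = semopy.calc_stats(cfa)  # includes chi2, CFI, TLI, RMSEA
print(stats[["chi2", "CFI", "TLI", "RMSEA"]])
```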

Table 6

Factor Loadings for the Three-Factor Confirmatory Factor Analysis

Teaching presence |  | Social presence |  | Cognitive presence |
Item | Loading | Item | Loading | Item | Loading
TP1 | .949 (0.02) | SP3 | .926 (0.02) | CP1 | .955 (0.01)
TP3 | .944 (0.02) | SP4 | .979 (0.02) | CP2 | .984 (0.01)
TP4 | .893 (0.02) | SP5 | 1.00 (0.00) | CP3 | 1.00 (0.00)
TP5 | .959 (0.02) | SP6 | .997 (0.02) | CP4 | .826 (0.02)
TP6 | .979 (0.02) | SP7 | .734 (0.03) | CP6 | .939 (0.01)
TP7 | .949 (0.02) | SP8 | .891 (0.02) | CP7 | .956 (0.01)
TP8 | .974 (0.01) |  |  | CP8 | .977 (0.01)
TP9 | 1.00 (0.00) |  |  | CP9 | .974 (0.01)
TP10 | .977 (0.02) |  |  | CP10 | .942 (0.02)
TP11 | .951 (0.02) |  |  | CP12 | .968 (0.02)
TP12 | .868 (0.02) |  |  |  |

Note. Standard errors are in parentheses. Items with loadings = 1 represent items used as the scaling constant.

Implications and Conclusions

Language Considerations with Global Online Research

This study grew out of the need for a CoI instrument written for a global audience using English as a lingua franca. The participants in our online courses were from over 80 countries and enrolled in our courses to learn more about English language teaching. Based on our grant-funded program parameters, we developed course materials—including the survey—in English at the CEFR B1 level to ensure participant understanding. This brought to light the importance of language considerations and comprehensibility when conducting global online research. Knowing that more and more students with varying levels of English language proficiency are enrolling in global courses offered in English, such as MOOCs, instructional designers and facilitators should carefully consider the language level required to participate in all aspects of their courses. This is especially true when collecting data from students for evaluative and research purposes, since important decisions are often based on these data, which must therefore be valid.

This study successfully adapted survey items to the CEFR B1 level, which was high enough to maintain the basic meaning of the survey items while also being more comprehensible to respondents who were not native English speakers. More research is needed to examine processes for lowering the language level of existing survey instruments.

Using the CoI Survey in Global Contexts

Since global courses are most frequently offered in English, the CoI instrument needed to be examined critically and revised to ensure the utility and validity of the data it provided. After revising the CoI instrument to be at the CEFR B1 level, we administered it to students enrolled in one of the following three course formats: sections with reasonably low instructor-to-student ratios (1:25), a five-week MOOC, and a flexible MOOC. This study showed success in adapting survey items in the CoI instrument to the CEFR B1 level, which can be useful for other CoI studies conducted in global contexts. However, more work is needed to examine the validity of this instrument in international online learning environments.

Although there are accepted processes for translating and validating surveys (see Gavriilidou & Mitits, 2016), these are often not feasible for research in global courses or courses in multicultural contexts with a high level of linguistic diversity. For instance, the MOOCs examined in this research included participants from over 100 countries. Therefore, the most practical option was to provide a survey in the language of instruction at a level comprehensible to participants with varying levels of language proficiency. However, we found no studies that investigated methods for, or validation of, instruments adapted within a single language to match participants’ proficiency levels.

Contextual and Cultural Aspects of CoI Survey Item Analysis

Using an EFA and CFA, we found the instrument had good fit statistics once seven items (TP2, TP13, SP1, SP2, SP9, CP5, and CP11) were removed. There are some possible contextual reasons why the removed items did not load as expected. Social presence is a construct that describes student perceptions and attributes of learner-learner interactions (Garrison et al., 2000). The CoI instrument was originally developed for use in small traditional online courses with high levels of interactions within small groups of students (Arbaugh et al., 2008) allowing students to develop a level of familiarity that is unlikely to form in a MOOC. Additionally, we administered the survey to students following only five weeks of participation, and students’ perceptions regarding other students may have changed if the course offerings were longer.

Based on these two contextual differences in class size and length of instruction between a traditional online course and a short-term MOOC, it is understandable that the three social presence items that measured students’ ability to form relationships or collaborate with others did not fit the model as well as the items that focused on students’ comfort communicating online. Interestingly, two of those removed items, SP1 and SP2, are aligned with the social presence subconstruct affective expression. This finding supports Poquet et al.’s (2018) examination of social presence in three MOOCs, which also found students tended to respond lower to SP1 and SP2. Similarly, Kovanović et al.’s (2018) EFA of student responses from five MOOCs found that the data fit best when affective expression was its own factor. As a result, additional research is needed to examine the development of affective expression in MOOCs.

Furthermore, the data from the two teaching presence items that focused on instructor-provided feedback did not fit the model as well as did the other teaching presence items. One important limitation of MOOCs is the quality of the feedback students receive. In MOOCs with thousands of students, where the content experts typically do not have time to provide much feedback to individual students, it makes sense that the survey items measuring feedback performed differently than the other items, which address actions that can be accomplished in whole-group interactions. However, because feedback is still important to teaching presence, we decided to keep TP12, which focused on providing quality feedback. Additional research is needed to explore effective ways to provide feedback in MOOCs, and keeping TP12 will help this survey stay tied to the theory, even though its removal would have helped the data fit slightly better.

Similarly, even though SP7 (I felt it was OK to disagree with other students) had a communality of less than .50, this item was kept because it is important to the concept of social presence and was not captured in other items. This was a particularly interesting item due to cultural aspects of online communication and learning. It is possible that for some of the cultures represented in the course, it was not appropriate to overtly disagree with other students. Research in intercultural pragmatics focused specifically on the speech act of disagreement in multicultural online asynchronous discussions using English as a lingua franca has shown a tendency to avoid strong disagreement, particularly among students with lower levels of English language proficiency (Maíz-Arévalo, 2014). Therefore, this item may have cultural bias in its interpretation, particularly if public disagreement is not considered culturally acceptable. We recommend that additional research examine intercultural perspectives on disagreement as a measure of social presence in global courses, including more qualitative research with culturally and linguistically diverse learners’ discourse in online asynchronous discussions.

Implications for Future Research

The use of the revised CoI survey could benefit researchers examining global online courses where participants have varying levels of English language proficiency. The revised instrument’s simplified language and sentence structure can help collect data that more accurately reflect students’ perceived CoI in global courses as well as courses offered in multicultural contexts. We also recommend that others carefully consider the language levels of the research instruments they create and use, particularly when using instruments in global contexts or within diverse contexts in North America where participants have varying levels of English language proficiency. If respondents are ELLs, survey items written for native speakers may not be comprehensible or could result in survey fatigue due to the heavy linguistic load of each item. Improving comprehensibility by lowering the language level will make instruments accessible to a larger international audience. It is also important to validate surveys when using them with different audiences or when revising them for language level. We recommend conducting an EFA and CFA, similar to this study.

Furthermore, researchers should consider the diverse range of cultures represented among survey respondents, which can affect participants’ understanding of the survey items. For example, perceptions of disagreement as an indicator of social presence could differ because of culture and language proficiency (Maíz-Arévalo, 2014). Additionally, the length and type of course could affect participants’ perceptions of teaching, social, and cognitive presence. For example, without an instructor giving individualized feedback to students in MOOCs, it is expected that the items measuring feedback performed differently across the three course formats, particularly since the original survey was designed and validated in traditional instructor-led courses rather than MOOCs.

A large portion of instructional design and technology (IDT) research has come from English speaking countries (Bodily et al., 2019). Furthermore, North America is overrepresented in the most highly cited online and blended learning research—especially at the K-12 level (Hu et al., 2019). The opportunity to design, develop, facilitate, and research global online offerings has never been greater due to improving telecommunication infrastructures and increasing support from all levels of government throughout the world (Palvia et al., 2018). The COVID-19 crisis has accelerated the growth and acceptance of online learning throughout the world. As we move into this new normal, it is important that the IDT field maintains a global perspective in our research efforts. The revised CoI instrument shared in this research can aid in those efforts.

Acknowledgement

This publication was prepared under a grant funded by Family Health International under Grant No. S-ECAGD-16-CA-1092 funded by The U.S. Department of State, Bureau of Educational and Cultural Affairs. The content of this publication does not necessarily reflect the views, analysis, or policies of FHI 360 or The U.S. Department of State, Bureau of Educational and Cultural Affairs, nor does any mention of trade names, commercial products, or organizations imply endorsement by FHI 360 or The U.S. Department of State, Bureau of Educational and Cultural Affairs.

References

Alaulamie, L. A. (2014). Teaching presence, social presence, and cognitive presence as predictors of students’ satisfaction in an online program at a Saudi University (Publication No. 3671236) [Doctoral dissertation, Ohio University]. ProQuest Dissertations Publishing. https://www.proquest.com/openview/0dfd831e1c80c529806725d42137cca8/1?pq-origsite=gscholar&cbl=18750

Anders, A. (2015). Theories and applications of massive online open courses (MOOCs): The case for hybrid design. The International Review of Research in Open and Distributed Learning, 16(6). https://doi.org/10.19173/irrodl.v16i6.2185

Arbaugh, J., Cleveland-Innes, M., Diaz, S., Garrison, D., Ice, P., Richardson, J., & Swan, K. (2008). Developing a Community of Inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11(3-4), 133-136. https://doi.org/10.1016/j.iheduc.2008.06.003

Archer, W. (2010). Beyond online discussions: Extending the Community of Inquiry framework to entire courses. The Internet and Higher Education, 13(1-2), 69. https://doi.org/10.1016/j.iheduc.2009.10.005

Bawa, P. (2016). Retention in online courses: Exploring issues and solutions—A literature review. SAGE Open, 6(1). https://doi.org/10.1177/2158244015621777

Bodily, R., Leary, H., & West, R. E. (2019). Research trends in instructional design and technology journals. British Journal of Educational Technology, 50(1), 64-79. https://doi.org/10.1111/bjet.12712

Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.

Choy, J. L. F., & Quek, C. L. (2016). Modelling relationships between students’ academic achievement and community of inquiry in an online learning environment for a blended course. Australasian Journal of Educational Technology, 32(4), 106-124. https://doi.org/10.14742/ajet.2500

Council of Europe. (2018). Common European framework of reference for languages: Learning, teaching, assessment. Companion volume with new descriptors. Council of Europe.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334. https://doi.org/10.1007/BF02310555

Donitsa-Schmidt, S., & Topaz, B. (2018). Massive open online courses as a knowledge base for teachers. Journal of Education for Teaching: International Research & Pedagogy, 44(5), 608-620. https://doi.org/10.1080/02607476.2018.1516350

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Sage.

Finardi, K. R., & Tyler, J. (2015). The role of English and technology in the internationalization of education: Insights from the analysis of MOOCs. In 7th International Conference on Education and New Learning Technologies (pp. 11-18). Barcelona, Spain. https://blog.ufes.br/kyriafinardi/files/2018/01/Finardi-Tyler-2015.pdf

Fiock, H. S. (2020). Designing a community of inquiry in online courses. International Review of Research in Open and Distributed Learning, 21(1), 135-153. https://doi.org/10.19173/irrodl.v20i5.3985

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

Gavriilidou, Z., & Mitits, L. (2016). Adaptation of the strategy inventory for language learning (SILL) for students aged 12-15 into Greek: Developing an adaptation protocol. Selected Papers on Theoretical and Applied Linguistics, 21, 588-601. https://doi.org/10.26262/istal.v21i0.5256

Gil-Jaurena, I., Figaredo, D. D., Velázquez, B. B., & Encina, J. M. (2019, June). Validation of the Community of Inquiry Survey (Spanish Version) at UNED Courses. In EDEN Conference Proceedings (pp. 28-34).

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Lawrence Erlbaum.

Heilporn, G., & Lakhal, S. (2020). Investigating the reliability and validity of the Community of Inquiry framework: An analysis of categories within each presence. Computers & Education, 145. https://doi.org/10.1016/j.compedu.2019.103712

Hu, M., Arnesen, K., Barbour, M. K., & Leary, H. (2019). An analysis of the Journal of Online Learning Research, 2015-2018. Journal of Online Learning Research, 5(2), 123-144. https://www.learntechlib.org/primary/p/195231/

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118

Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35, 401-415. https://doi.org/10.1007/BF02291817

Kim, D. (2017). Flipped interpreting classroom: Flipping approaches, student perceptions and design considerations. The Interpreter and Translator Trainer, 11(1), 38-55. https://doi.org/10.1080/1750399X.2016.1198180

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). Guilford Press.

Koukis, N., & Jimoyiannis, A. (2019). MOOCs for teacher professional development: Exploring teachers’ perceptions and achievements. Interactive Technology and Smart Education, 16(1), 74-91. https://doi.org/10.1108/ITSE-10-2018-0081

Kovanović, V., Joksimović, S., Poquet, O., Hennis, T., Čukić, I., de Vries, P., Hatala, M., Dawson, S., Siemens, G., & Gašević, D. (2018). Exploring Communities of Inquiry in massive open online courses. Computers & Education, 119, 44-58. https://doi.org/10.1016/j.compedu.2017.11.010

Kumar, S., Dawson, K., Black, E. W., Cavanaugh, C., & Sessums, C. D. (2011). Applying the Community of Inquiry framework to an online professional practice doctoral program. The International Review of Research in Open and Distributed Learning, 12(6), 126-142. https://doi.org/10.19173/irrodl.v12i6.978

Lowenthal, P., & Hodges, C. (2015). In search of quality: Using quality matters to analyze the quality of massive, open, online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 16(5). https://doi.org/10.19173/irrodl.v16i5.2348

Ma, Z., Wang, J., Wang, Q., Kong, L., Wu, Y., & Yang, H. (2017). Verifying causal relationships among the presences of the Community of Inquiry framework in the Chinese context. International Review of Research in Open and Distributed Learning, 18(6), 213-230. https://doi.org/10.19173/irrodl.v18i6.3197

Maíz-Arévalo, C. (2014). Expressing disagreement in English as a lingua franca: Whose pragmatic rules. Intercultural Pragmatics, 11(2), 199-224. https://doi.org/10.1515/ip-2014-0009

Meyers, L. S., Gamst, G., & Guarino, A. J. (2017). Applied multivariate research: Design and interpretation (3rd ed.). Sage.

Moreira, J. A., Ferreira, A. G., & Almeida, A. C. (2013). Comparing communities of inquiry of Portuguese higher education students: One for all or one for each? Open Praxis, 5(2), 165-178. https://www.openpraxis.org/articles/abstract/10.5944/openpraxis.5.2.50/

Nunnally, J. & Bernstein, I. (1994). Psychometric theory (3rd ed.). McGraw-Hill.

Öberg, L. M., & Nyström, C. A. (2016). Evaluation of the level of collaboration in a regional crisis exercise setting: The use of Community of Inquiry. In Proceedings of the 39th Information Systems Research Conference in Scandinavia, Ljungskile, Sweden. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-29640

Palvia, S., Aeron, P., Gupta, P., Mahapatra, D., Rosner, R., & Sindhi, S. (2018). Online education: Worldwide status, challenges, trends, and implications. Journal of Global Information Technology Management, 21(4), 233-241. https://doi.org/10.1080/1097198X.2018.1542262

Phan, T., & Zhu, M. (2020). Professional development journey in MOOCs by pre- and in-service teachers. Educational Media International, 57(2), 148-166. https://doi.org/10.1080/09523987.2020.1786773

Poquet, O., Kovanović, V., de Vries, P., Hennis, T., Joksimović, S., Gašević, D., & Dawson, S. (2018). Social presence in massive open online courses. International Review of Research in Open and Distributed Learning, 19(3), 43-68. https://doi.org/10.19173/irrodl.v19i3.3370

Reich, J., & Ruipérez-Valiente, J. A. (2019a). Supplementary material for the MOOC pivot. https://www.sciencemag.org/content/363/6423/130/suppl/DC1

Reich, J., & Ruipérez-Valiente, J. A. (2019b). The MOOC pivot: What happened to disruptive transformation. Science, 363(6423), 130-131. https://doi.org/10.1126/science.aav7958

Sallam, M. H., Martín-Monje, E., & Li, Y. (2020). Research trends in language MOOC studies: A systematic review of the published literature (2012-2018). Computer Assisted Language Learning. https://doi.org/10.1080/09588221.2020.1744668

Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99, 323-338. https://doi.org/10.3200/JOER.99.6.323-338

Steiger, J. H., & Lind, J. M. (1980). Statistically based tests for the number of common factors [Paper presentation]. Meeting of the Psychometric Society, Iowa City, Iowa.

Stenbom, S. (2018). A systematic review of the Community of Inquiry survey. The Internet and Higher Education, 39, 22-32. https://doi.org/10.1016/j.iheduc.2018.06.001

Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson.

Teräs, M., Teräs, H., Arinto, P., Brunton, J., Daryono, D., & Subramaniam, T. (2020). COVID-19 and the push to online learning: Reflections from 5 countries. Digital Culture and Education. https://www.digitalcultureandeducation.com/reflections-on-covid19/reflections-from-5-countries

Thymniou, A., & Tsitouridou, M. (2021). Community of Inquiry model in online learning: Development approach in MOOCs. In Research on e-learning and ICT in education: Technological, pedagogical and instructional perspectives (pp. 93-109). Springer. https://doi.org/10.1007/978-3-030-64363-8_6

Tzovla, E., Kedraka, K., Karalis, T., Kougiourouki, M., & Lavidas, K. (2021). Effectiveness of in-service elementary school teacher professional development MOOC: An experimental research. Contemporary Education Technology, 13(4), 1-14. https://doi.org/10.30935/cedtech/11144

Uchidiuno, J. O., Ogan, A., Yarzebinski, E., & Hammer, J. (2018). Going global: Understanding English language learners’ student motivation in English-language MOOCs. International Journal of Artificial Intelligence in Education, 28(4), 528-552. https://doi.org/10.1007/s40593-017-0159-7

Wilson, L., & Gruzd, A. (2014). MOOCs: International information and education phenomenon? Bulletin of the Association for Information Science and Technology, 40(5), 35-40. https://doi.org/10.1002/bult.2014.1720400510

Xiao, J. (2018). On the margins or at the center? Distance education in higher education. Distance Education, 39(2), 259-274. https://doi.org/10.1080/01587919.2018.1429213

Xing, W. (2019) Exploring the influences of MOOC design features on student performance and persistence, Distance Education, 40(1), 98-113. https://doi.org/10.1080/01587919.2018.1553560

Yousef, A. M. F., Chatti, M. A., Schroeder, U., & Wosnitza, M. (2015). A usability evaluation of a blended MOOC environment: An experimental case study. The International Review of Research in Open and Distributed Learning, 16(2). https://doi.org/10.19173/irrodl.v16i2.2032

Yu, T., & Richardson, J. C. (2015). Examining reliability and validity of a Korean version of the Community of Inquiry instrument using exploratory and confirmatory factor analysis. Internet and Higher Education, 25, 45-52. https://doi.org/10.1016/j.iheduc.2014.12.004

Zein, S. (2019). Preparing Asian English teachers in the global world. In S. Zein & R. Stroupe (Eds.), English language teacher preparation in Asia: Policy, research and practice (pp. 1-15). Routledge.

Zhang, R. (2020). Exploring blended learning experiences through the Community of Inquiry framework. Language Learning & Technology, 24(1), 38-53. https://hdl.handle.net/10125/44707

 


Revising and Validating the Community of Inquiry Instrument for MOOCs and Other Global Online Courses by Jered Borup, Joan Kang Shin, Marvin Powell, Anya S. Evmenova, and Woomee Kim is licensed under a Creative Commons Attribution 4.0 International License.