International Review of Research in Open and Distributed Learning

Volume 24, Number 2

May - 2023

 

Scrutinizing Learning Management Systems in Practice: An Applied Time Series Research in Higher Education

 

Esra Barut Tuğtekin
Inonu University

 

Abstract

This study examined the use of Advancity Learning Management Systems (ALMS) and the Moodle Learning Management System (LMS) in learning settings, as well as online exams, within the framework of Transactional Distance Theory. With 146 college students (nfemale = 102, nmale = 44) as voluntary participants, data were gathered through an online questionnaire. A time series design was used across two different LMS sessions, and participants who voluntarily took part in both the ALMS and Moodle LMS sessions were matched. The findings revealed that while Moodle and ALMS received relatively similar assessment ratings for online exams, Moodle scored better in terms of the learning setting. When factors of the Learning Management Systems Evaluation Scale (LMSES) based on Transactional Distance Theory were compared, the dialogue and autonomy factors were significantly higher for Moodle LMS than for ALMS. When online exams in the LMS were compared, there was no significant difference between ALMS and Moodle LMS, and for both LMS, the reliability factor was a stronger indicator than the other factors. As a result, in assessing and adopting an LMS, choices should be based on how well the LMS characteristics address an institution’s demands.

Keywords: learning management systems, e-learning, online exam, transactional distance theory

Introduction

Learning management systems (LMS) are used at most institutions throughout the world. Nearly half of university courses will likely be based on e-learning soon, and approximately 42% of Global Fortune 500 companies currently use educational technology tools such as LMS to deliver in-service training to their staff (Research & Markets, 2022). Given the changes in learning methodologies and procedures in e-learning settings, there is high demand for LMS, with the global market expected to reach $25.7 billion by 2025 (Markets & Markets, 2022). Considering that there are more than 1,000 LMS vendors in the e-learning market, choosing an appropriate LMS from the many available is very challenging. Practically testing different LMS and analyzing the outcomes will help identify the criteria necessary to support those selecting an LMS.

Although LMS were first used primarily as supplemental learning tools, thanks to the incorporation of various structures, they have now evolved into a systematic learning environment. The term LMS now describes various software systems that provide learners, instructors, and administrators with synchronous or asynchronous educational services (Elfeky et al., 2020; Turnbull et al., 2019). LMS learning environments are most effective when they consistently provide users with a variety of activities (Jung & Huh, 2019). LMS assist learners by monitoring and recording the learning process, as well as performing various assessments while providing uploaded and requested information. Additionally, they provide access to educational resources, promote tutoring, and monitor and store information on each learner’s activities (Kehrwald & Parker, 2019). As a result, a variety of enhancements and constructivist arrangements may be produced on LMS in line with pedagogical objectives and educational goals, and depending on learners’ problems and suggestions (Al-Fraihat et al., 2020).

The use of online learning environments for education and training has triggered and significantly enhanced the importance of LMS, particularly amid the COVID-19 pandemic (Huang et al., 2020; Kwon et al., 2021; Raza et al., 2020; Turnbull, 2021). Despite the rise in academic research on LMS, particularly amid the pandemic, most studies have focused on systematic literature reviews or assessing user attitudes. With little quantitative analysis of LMS use in the literature, empirical comparison is limited. Furthermore, institutions may find it difficult to select the LMS best suited to their institutional needs and goals from among the many available. Empirical comparisons of different LMS may provide essential data and guidance, and serve as a reference for learners, instructors, and managers of institutions selecting and implementing suitable LMS.

Learning Management Systems and Conceptual Framework

LMS provide a highly inclusive environment for learning, including online collaborative learning groups, discussion activities, and frameworks that encourage learners to connect with content as well as other LMS stakeholders (Baxto da Silva et al., 2019; Dias & Diniz, 2014; Jung & Huh, 2019). Using an LMS is a crucial factor in learners’ performance and academic achievement (Nasser et al., 2011). Learners are encouraged to be autonomous through the use of LMS in e-learning environments (Bradley, 2021; Nasser et al., 2011; Wood et al., 2011), and LMS can encourage learners’ engagement since they allow users to monitor the learning process (Al-Fraihat et al., 2020). LMS serve as a multifaceted platform for distributing, sharing, supervising, and monitoring educational content (Watson & Watson, 2007). They also offer a range of options for learners to sign up for courses, monitor and assess their progress in those courses, and promote engagement (Al-Fraihat et al., 2020; Oakes, 2002). In e-learning environments, even though learners and their instructors are physically separated, LMS make it possible to establish communication and overcome physical distance through Internet technology.

Moore (1993), who concentrated on the concept of distance in distance learning, called attention to the social and psychological distance brought on by communication gaps. These types of distance might lead to misconceptions and impede the learning process. According to Moore’s (1993) Transactional Distance Theory, the detrimental effects of distance may be reduced by influencing one another and developing recurring behavioral patterns (Moore & Kearsley, 1996). Transactional distance has been conceptualized as all kinds of distance that prevent individuals from interacting (Horzum, 2011) and consists of three factors, namely structure, dialogue, and autonomy (Moore, 1993).

Structure describes the combination of features that address learner needs during learner-content and learner-interface interaction, whereas the dialogue factor describes the two-way interactions labelled learner-instructor and learner-learner. Learner autonomy concerns how learners choose their learning strategies and draw on their own experiences to manage their learning (Horzum, 2011). The constraints of structure may create an inflexible learning environment and frustrate learners’ ability to learn. On the other hand, an LMS with a well-developed dialogue factor increases the likelihood of achieving new learning outcomes. Furthermore, supporting the autonomy factor enables learners to freely guide their learning in the LMS.

In brief, Transactional Distance Theory suggests that when selecting an LMS, learning materials that improve learners’ autonomy and dialogue should be included, and the structure factor of the LMS should be regulated to provide a flexible learning experience. It is critical for institutions that will employ an LMS to focus on its benefits by analyzing learners’ performance throughout the course and the learning outcomes after the course has concluded. Evaluating, organizing, and improving LMS within the context of Transactional Distance Theory will enhance learners’ outcomes. In addition, tests, a key component of the learning process, are employed as online examinations in LMS, so it is crucial to consider the potential effects of online examinations on learners and on assessment practices. Therefore, while assessing LMS, the course and test processes should be considered together, while the LMS-based online exam options should also be evaluated in their own right.

Online Exams

To evaluate learners’ educational standing, tests in face-to-face classrooms are generally held synchronously, though with the options provided by distance education, exams can also be held online. The primary distinction between a face-to-face classroom exam and an online exam is physical presence and synchrony (Jorczak, 2014). While learners take tests synchronously and face-to-face in a classroom setting, they can take online exams synchronously or asynchronously during the exam period designated on the LMS. While exam security for face-to-face tests can be ensured by a hall attendant, automated monitoring solutions are available for online exams when an attendant is required (Arnò et al., 2021; Jia & He, 2021; Khalaf et al., 2020; Woldeab & Brothen, 2021). Even with controls using a camera, microphone, and Internet connection during online tests, it is very challenging to match the monitoring and evaluation effectiveness afforded by human surveillance; investigation of online exam dependability metrics is therefore ongoing. Additionally, it has been reported that learners may experience varying degrees of exam anxiety due to computer-based exam activities (Jaap et al., 2021). Studies have indicated that students with significant face-to-face test anxiety had lower (Stowell & Bennett, 2010) or greater (Shraim, 2019) degrees of anxiety in online examinations, and that there is a significant relationship between online test anxiety and test performance (Arora et al., 2021; Jaap et al., 2021; Stowell & Bennett, 2010).

Various studies on online tests have compared supervised and unsupervised exam results (Dadashzadeh, 2021; Hollister & Berenson, 2009), as well as face-to-face and online exam methods (Kemp & Grieve, 2014; Weber & Lennon, 2007). However, there have been only limited findings for different online exam environments without supervision. In the current study, both online test and exam activities created in the different LMS were carried out unsupervised. More time was allotted for participation than the exam’s stated duration, and learners were permitted to take the exam online asynchronously within that time limit. Evaluating online exam applications across various LMS platforms will be a useful step and a fruitful guide, as examinations are a crucial part of any learning setting.

Research Questions (RQs)

In the literature, there is a gap in both the practical and statistical examination of LMS. Thus, the purpose of the current research was to assess online exams as they have been used in these settings, and to compare Advancity Learning Management Systems (ALMS) and Moodle LMS within the context of transactional distance theory. Accordingly, the following RQs were developed:

Method

Participants

The subjects were college students from a state university’s Faculty of Education. All students were given access to the data collection tool through the LMS, and participation was voluntary. College students from 13 departments participated in the current study; of the 146 participants, 102 were females (69.9%) and 44 were males (30.1%). The age of the participants ranged from 18 to 33 years, with an average age of 21.66 (SD = 2.61). Being an experienced user of both ALMS and Moodle LMS was a criterion for inclusion in the current study.

Data Collection Tools

An online questionnaire was used to collect data. The questionnaire contained demographic profile items, the Learning Management Systems Evaluation Scale (LMSES; Barut Tuğtekin, 2021), and the Online Examination Assessment Scale (OEAS; Yilmaz, 2016). The LMSES consisted of 19 items and 3 factors, with a 5-point Likert scale ranging from (1) strongly disagree to (5) completely agree. Because the LMSES had one reversed item, that item was reverse scored for this study. According to the original form of the LMSES, the explained variances of the factors were 23.06% for dialogue, 25.74% for structure, and 14.93% for autonomy. The fit indices obtained for the LMSES (structure = 0.90, dialogue = 0.89, autonomy = 0.82; χ2 = 252.78, df = 146, χ2/df = 1.73; CFI = 0.95, NFI = 0.90, GFI = 0.89, AGFI = 0.85; SRMR = 0.06, RMSEA = 0.06; p < 0.001) and Cronbach’s alpha (α) reliability coefficients were within acceptable bounds (i.e., α > .70). The OEAS had 3 factors and 17 items, with a 5-point Likert scale ranging from (1) strongly disagree to (5) completely agree. Because the OEAS contained six reversed items, these items were reverse scored before being included in the analyses. According to the original form of the OEAS, the practicality-suitability factor explained 36% of the variance, the affective factor about 17%, and the reliability factor approximately 9%. Cronbach’s alpha reliability coefficients for the factors were 0.89 for practicality-suitability, 0.82 for affective, and 0.82 for reliability.
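The reverse scoring and internal-consistency computations described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the response matrix is hypothetical, and only the formulas (reversal on a 5-point scale, Cronbach's alpha from item and total-score variances) come from standard practice.

```python
import numpy as np

def reverse_score(items: np.ndarray, scale_max: int = 5) -> np.ndarray:
    """Reverse a Likert item: on a 5-point scale, 1 -> 5, 2 -> 4, ..., 5 -> 1."""
    return (scale_max + 1) - items

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, cols = items);
# the last item is reverse-worded, so it is reverse scored before analysis.
responses = np.array([
    [4, 5, 4, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 1],
    [2, 2, 3, 4],
    [4, 4, 4, 2],
])
responses[:, -1] = reverse_score(responses[:, -1])
alpha = cronbach_alpha(responses)  # values above .70 are typically acceptable
```

With real scale data the same function would be applied per factor and for the full scale, mirroring the α values reported for the LMSES and OEAS.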

Confirmatory Factor Analysis (CFA) was conducted to test the suitability of the data collection instruments with the sample for this study. The model fit indices of the LMSES were found to be in the good-fit value range (χ2 = 270.881, df = 147, χ2/df = 1.84; CFI = 0.92, NFI = 0.84, GFI = 0.85, AGFI = 0.80; SRMR = 0.06, RMSEA = 0.07; p < 0.001). For the LMSES, Cronbach’s alpha reliability coefficient was found to be α = 0.93. The measurement model was also confirmed, with good fit indices (χ2 = 243.377, df = 115, χ2/df = 2.116; CFI = 0.93, NFI = 0.87, GFI = 0.85, AGFI = 0.80; SRMR = 0.06, RMSEA = 0.08; p < 0.001), based on the findings of the CFA. For the OEAS, Cronbach’s alpha reliability coefficient was also found to be α = 0.93. Therefore, the scales used in the current research constituted a valid and reliable measurement model, and no assumptions were violated.

Procedure

Moodle, ALMS, Canvas, and Blackboard are popular LMS and are often used in the region where the research was conducted. Both Moodle and Canvas are open source and free to use, while ALMS and Blackboard are commercial LMS with annual fees. Although Blackboard is used throughout the world, ALMS was developed in Turkey by Advancity; it has become one of the most popular LMS there, even though it is not used extensively worldwide. Among the most widely adopted LMS in Turkey, Moodle has been used in over 70 higher education institutions and ALMS in close to 60 (Cabi & Ersoy, 2022; Karadag et al., 2021; Yolsal & Yorulmaz, 2022). This study examined the use of Moodle LMS and ALMS, among the most frequently used LMS in the region. Table 1 compares some notable characteristics and attributes of the Moodle LMS and ALMS as used in the current research.

Table 1

Comparing ALMS and Moodle LMS Features and Attributes

Feature Moodle ALMS
Virtual classroom plugin Google Classroom integrated Perculus Plus integrated
Storage space On Google Drive On internal virtual server
Mobile application Yes No (Web environment adapted for mobile access)
Page Yes Yes
URL Yes Yes
File Yes Yes
Lecture Yes Yes
Lesson plan Yes Yes
Discussion/Forum Yes Yes
Chat Yes No
Reports Yes Yes
Comments Yes No
Blogs Yes No
Survey Yes Yes
Quick mail Yes Yes
Task Yes Yes
Group mode Yes No
Wiki Yes No
Calendar Yes Yes
Statistics Yes Yes
Role settings Yes Yes
Homework Yes Yes
Change course visibility Yes Yes
Tests Yes Yes
Online exam Yes Yes
Synchronous & asynchronous exams Yes Yes
Exam types Various Various
Online exam proctoring No No
Video Yes Yes
Interactive video Plugin can be installed Yes
Dictionary Plugin can be installed Yes
Language adjustment Plugin can be installed Yes
LTI activity Plugin can be installed Yes
Grade chart Plugin can be installed Yes
Send feedback Plugin can be installed Yes

Since this research assessed two distinct LMS (i.e., ALMS and Moodle LMS) according to Transactional Distance Theory and evaluated online test procedures, it was crucial to identify learners who had experienced both LMS. First, an online data collection tool was made available to Faculty of Education students who were taking courses via ALMS during the spring semester of 2020-2021. This online survey collected only the participants’ nicknames and e-mail addresses, with no direct request for any other identifying information. The goal was to identify the same participants in the subsequent Moodle LMS implementation. In the second stage of the study, college students from the Faculty of Education who also studied through Moodle LMS in the fall semester of 2021-2022 were offered an online questionnaire to evaluate Moodle at the end of the semester. As with the previous implementation, the participants’ nicknames and e-mail addresses were gathered, and their participation status in the former ALMS sessions was checked and verified. Following the second implementation, one-to-one comparisons of nicknames and e-mail addresses were performed, and the learners who participated in both implementations were determined. These individuals comprised the sample for this study. Figure 1 depicts the complete research procedure.

Figure 1

Research Procedure

Data Analysis

Prior to performing the data analysis, skewness and kurtosis values were confirmed to be within ±1 (Hair et al., 2013), and a total of eight participants identified as outliers via Mahalanobis distance and Q-Q plots were eliminated from all subsequent analyses (McLachlan, 1999). Since two-way repeated measures were conducted on the same study group, the sphericity assumption was tested. The analyses showed that the homogeneity of variance assumption was not violated and that the significance value of Mauchly’s test of sphericity was above 0.05 (Cooley & Lohnes, 1971). Once these prerequisites were fulfilled, a two-way repeated measures ANOVA was conducted. The average scores for all scales and factors were calculated, analyzed, and interpreted.
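The Mahalanobis-distance screening step described above can be sketched as follows, assuming a chi-square cutoff on squared distances (a common convention). The data here are synthetic, and the study's exact screening criteria may have differed in detail.

```python
import numpy as np
from scipy import stats

def mahalanobis_outliers(X: np.ndarray, alpha: float = 0.001) -> np.ndarray:
    """Flag rows whose squared Mahalanobis distance from the sample centroid
    exceeds the chi-square critical value with df = number of variables."""
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff

# Synthetic example: 200 well-behaved cases plus one extreme case.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)), [[15.0, -15.0]]])
keep = ~mahalanobis_outliers(X)
X_clean = X[keep]  # rows retained for the repeated measures ANOVA
```

In the study itself this kind of screening, together with Q-Q plot inspection, led to the removal of eight participants before the ANOVA.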

Findings

Table 2 presents the average LMSES and OEAS scores of participants for two distinct LMS environments.

Table 2

Descriptive Statistics of ALMS and Moodle LMS for LMSES and OEAS

LMS and scale Min. Max. Sum Mean SE SD
ALMS LMSES 1.42 5.00 505.32 3.461 .060 .722
Moodle LMSES 2.00 4.95 545.58 3.737 .057 .694
ALMS OEAS 1.24 5.00 438.71 3.005 .071 .857
Moodle OEAS 1.06 5.00 449.53 3.079 .082 .985

When the total mean scores for the scales were compared, the LMSES scores for Moodle LMS (Mean = 3.74; SD = 0.69) exceeded those for ALMS (Mean = 3.46; SD = 0.72). When the OEAS scores used to assess the online tests were compared, the average scores of Moodle LMS and ALMS were quite close.

Table 3 presents the descriptive statistics for ALMS and Moodle LMS regarding LMSES factors based on Transactional Distance Theory.

Table 3

Descriptive Statistics of ALMS and Moodle LMS for LMSES Factors

Factors Min. Max. Sum Mean SE SD
ALMS structure 1.14 5.00 570.71 3.909 .061 .746
ALMS dialogue 1.25 5.00 424.75 2.909 .073 .879
ALMS autonomy 1.50 5.00 552.00 3.781 .072 .876
Moodle LMS structure 1.57 5.00 566.43 3.880 .065 .784
Moodle LMS dialogue 1.00 5.00 508.13 3.480 .065 .785
Moodle LMS autonomy 1.25 5.00 584.00 4.000 .068 .815

According to Table 3, when the averages of the LMSES factors were examined, autonomy in the Moodle LMS had the greatest average score, and dialogue in ALMS had the lowest. Additionally, structure in ALMS had a higher average score than the other ALMS factors.

Two-factor repeated measures ANOVA was conducted to scrutinize the differences between the LMSES factors for the ALMS and Moodle LMS within the context of the Transactional Distance Theory. The findings are summarized in Table 4.

Table 4

Two-Way Repeated Measures ANOVA Results

Source SS df MS F p ηp2 Power
LMS type 14.088 1 14.088 8.982 .003* .058 .845
Error (LMS type) 227.425 145 1.568
LMSES factor 94.733 2 47.367 187.04 .000** .563 1.000
Error (LMSES factor) 73.441 290 .253
LMS type * LMSES factor 13.288 2 6.644 29.218 .000** .168 1.000
Error (interaction) 65.946 290 .227
Total error 212.013 145 1.462

Note. *p < .01, **p < .001.

Table 4 reveals a statistically significant main effect of the LMS type variable (F(1, 145) = 8.982; p < 0.01; ηp2 = 0.058), with a statistical power value of 0.845. There were also statistically significant differences in the analysis of the LMSES factors (F(2, 290) = 187.040; p < 0.001; ηp2 = 0.563). Furthermore, the interaction of LMS type and LMSES factors was statistically significant (F(2, 290) = 29.218; p < 0.001; ηp2 = 0.168), with a power value of 1.00. Figure 2 depicts the variation of LMSES factors by LMS type.
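The partial eta-squared values in Tables 4 and 7 can be reproduced directly from the reported sums of squares, since ηp² = SS_effect / (SS_effect + SS_error). The short check below is illustrative arithmetic on the published table values, not the original analysis script.

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta squared for a repeated measures ANOVA effect."""
    return ss_effect / (ss_effect + ss_error)

# Reproducing Table 4 (LMS type, LMSES factor, and their interaction):
eta_lms = partial_eta_squared(14.088, 227.425)    # ~.058
eta_factor = partial_eta_squared(94.733, 73.441)  # ~.563
eta_inter = partial_eta_squared(13.288, 65.946)   # ~.168

# The same arithmetic applied to Table 7's OEAS factor effect:
eta_oeas = partial_eta_squared(13.049, 154.370)   # ~.078
```

This also makes clear why partial eta squared, unlike the power column, is bounded by the effect's own error term rather than by the total variance.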

Figure 2

Changes in LMSES Factors According to LMS Type

Figure 2 shows that the dialogue factor, for which Moodle had a higher mean score, was where the two LMS differed most. On the other hand, both LMS scored similarly on the structure factor. To ascertain which LMSES factors differed significantly, a simple main effects analysis was conducted using paired-samples t-tests. The results are shown in Table 5.

Table 5

t-Test Results for LMSES Factors

Factor Mean SD t df p η2
ALMS-Moodle LMS (structure) .029 1.039 .341 145 .733 0.001
ALMS-Moodle LMS (dialogue) -.571 1.239 -5.570 145 .000** 0.176
ALMS-Moodle LMS (autonomy) -.219 1.197 -2.213 145 .028* 0.033

Note. * p < .05, ** p < .001.

There was a significant difference between LMS in terms of dialogue (t(145) = -5.570; p < 0.001) and autonomy (t(145) = -2.213; p < 0.05), both of which are factors of LMSES. Since the value calculated for the dialogue factor was larger than 0.14, it suggested a large effect size, and since the value computed for the autonomy factor was less than 0.06, it indicated a small effect size (Cohen, 1988).
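The simple main effects step above amounts to paired-samples t-tests with η² derived from the t statistic as η² = t² / (t² + df). A sketch, using hypothetical per-participant factor means rather than the study's data, could look like this:

```python
import numpy as np
from scipy import stats

def paired_t_with_eta2(x: np.ndarray, y: np.ndarray):
    """Paired-samples t-test plus eta squared computed from t and df."""
    t, p = stats.ttest_rel(x, y)
    df = len(x) - 1
    eta2 = t**2 / (t**2 + df)
    return t, p, eta2

# Hypothetical per-participant dialogue-factor means for the two LMS sessions.
alms_dialogue = np.array([2.8, 3.1, 2.5, 3.0, 2.9, 2.7])
moodle_dialogue = np.array([3.4, 3.6, 3.1, 3.5, 3.2, 3.3])
t, p, eta2 = paired_t_with_eta2(alms_dialogue, moodle_dialogue)

# Sanity check of the reported dialogue result: t(145) = -5.570 -> eta^2 ~ .176
eta2_reported = (-5.570) ** 2 / ((-5.570) ** 2 + 145)
```

The recovered η² of .176 matches Table 5 and falls above Cohen's (1988) .14 threshold for a large effect.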

Table 6 presents the descriptive statistics of ALMS and Moodle LMS for the OEAS factors, comparing the online exams administered in the two distinct LMS.

Table 6

Descriptive Statistics for Online Exams Via Distinct LMS

Factor Min. Max. Sum Mean SE SD
ALMS practicality-suitability 1.00 5.00 429.75 2.943 .090 1.094
ALMS affective 1.00 5.00 435.50 2.983 .079 .951
ALMS reliability 1.00 5.00 469.00 3.212 .076 .925
Moodle LMS practicality-suitability 1.00 5.00 443.38 3.037 .099 1.200
Moodle LMS affective 1.00 5.00 441.67 3.025 .092 1.115
Moodle LMS reliability 1.00 5.00 481.67 3.299 .082 .989

When the averages of the OEAS factors in Table 6 were evaluated, the reliability factor for Moodle LMS had the highest score, while the practicality-suitability factor for ALMS had the lowest.

A two-factor repeated measures ANOVA was conducted to assess OEAS factors to measure differences in online exams based on the type of LMS (i.e., ALMS and Moodle LMS). The findings are presented in Table 7. Prior to the related analysis, the prerequisites were checked, and the sphericity assumption was not violated.

Table 7

ANOVA Results for Interactions of LMS Types and Online Exam Factors

Source SS df MS F p ηp2 Power
LMS type 1.203 1 1.203 .474 .492 .003 .105
Error (LMS type) 368.056 145 2.538
OEAS factor 13.049 2 6.525 12.257 .000* .078 .996
Error (OEAS factor) 154.370 290 .532
LMS type * OEAS factor .113 2 .056 .114 .892 .001 .067
Error (LMS type * OEAS factor) 142.876 290 .493
Total error 293.904 145 2.027

Note. *p < 0.001.

According to Table 7, the difference in terms of the LMS type variable was not statistically significant; however, there was a statistically significant difference between groups in the analysis of the OEAS factors (F(2, 290) = 12.257; p < 0.001; ηp2 = 0.078). There was no statistically significant interaction between the LMS type and OEAS factors (F(2, 290) = 0.114; p > 0.05; ηp2 = 0.001). Figure 3 illustrates the variation in OEAS scores by LMS type.

Figure 3

Average Online Exam Scores for OEAS Factors by LMS Type

As seen in Figure 3, the largest difference between the two LMS was in the practicality-suitability factor, with Moodle LMS scoring better. When the mean scores of the OEAS factors were examined, the highest means were found in the reliability factor. Furthermore, Moodle LMS had higher average OEAS scores than ALMS in each factor. In brief, even though the interaction was not statistically significant (F(2, 290) = 0.114; p > 0.05; ηp2 = 0.001), Moodle LMS had higher average scores in the online exam evaluation than did ALMS.

Discussion

Although there are many different approaches to implementing e-learning, LMS are one of the most effective platforms for carrying out educational activities efficiently, effectively, and systematically. Because of this, educational institutions look for an LMS that can satisfy their e-learning requirements. There are two main options when choosing an LMS to address institutional needs. One is an open source LMS, while the other is a pay-for-use LMS that has been commercially developed. Open source LMS are free to use and may be customized to meet an institution’s demands, but they come with a range of maintenance and development costs. While the costs of acquiring a commercial LMS are substantial, such systems have been designed expressly for the institution and may be simpler to use. Therefore, when deciding between free-to-use and commercial LMS, it is essential to evaluate (a) the institution’s demands; (b) the LMS’s ease of use, as well as features that improve and support satisfaction; and (c) the potential resources necessary for LMS implementation (Kasim & Khalid, 2016). Participants in this study used both the open source, free-to-use Moodle LMS and the commercial ALMS in different time periods, and comparisons were made between the two alternative LMS. Both Moodle LMS and ALMS were linked to the institution’s other systems and were fully ready to use.

The usefulness, efficiency, and usability of LMS can be affected by various factors. According to research on ALMS, usability, intention to use, and satisfaction levels have been directly influenced by the quality of the course material and user interface design (Yoruk et al., 2020). According to Alshurideh et al. (2021), the perceived usability and utility of e-learning systems have been significantly influenced by the quality of the content. Since this study examined two distinct kinds of LMS, it is possible that their particular interface designs led to differences in the LMS rating scores. Learners’ use of particular LMS during different education terms may have resulted in a range of quality levels in the presentation of instructional information in various courses.

When LMSES scores were considered, the average for Moodle LMS was higher than for ALMS. As a result, it can be argued that Moodle LMS is a more practical and efficient LMS option than ALMS. When the interactions of the LMSES factors were examined across the different LMS types, there was no statistically significant difference in the structure factor, but there were significant differences in the dialogue and autonomy factors. Compared to ALMS, the Moodle LMS showed a positive and statistically significant difference in the autonomy and dialogue factors. Thus, it may be claimed that Moodle LMS encourages learners to act more independently and that ALMS has a poorer capacity for dialogue. On the other hand, the fact that the structure factor of ALMS had a higher average score than the other factors indicates that the ALMS interface was well structured. In addition, course format affects learners’ autonomy, as well as learner-learner and learner-instructor communication (Abuhassna et al., 2022). When analyzing how LMS features encourage learners to act independently and participate in dialogue, it is important to consider the ways instructors use these activities and how frequently. As well, even though learners’ autonomy is seen as a crucial notion in e-learning environments (Castañeda & Selwyn, 2018), the use of educational technologies that reinforce learners’ autonomy may trigger learner-centered research (Lazorak et al., 2021). Although the structural elements of LMS (i.e., interface and curriculum) were evaluated using Transactional Distance Theory, the methods and activities employed by instructors in relation to the autonomy and dialogue factors can also play an essential role. Therefore, to improve autonomy and dialogue and to manage structural aspects in the successful use of an LMS, the LMS interface, features, and ease of use, as well as instructional materials, coursework, and related instructional activities, should be scrutinized.

Moodle LMS and ALMS had similar average scores in the overall comparison of online exams. When the OEAS factors for online exams in the LMS were assessed, the reliability factor of Moodle LMS had the highest average score, while the practicality-suitability factor of ALMS had the lowest. However, there was no statistically significant interaction between LMS type and OEAS factors in the online exam evaluation, and no statistically significant difference between online exams according to the type of LMS employed. Even so, Moodle LMS outperformed ALMS in terms of average scores for each OEAS factor in the assessment of online exams. When Moodle and ALMS were compared within the framework of practicality-suitability, affective factors, and reliability, the average score for the reliability factor of both LMS was relatively higher than for the other factors. Because the structural relevance of the online exam questions is measured by the reliability factor of the OEAS, it may be concluded that instructors typically provided trustworthy online exam items.

On the other hand, the fact that both LMS platforms offer unsupervised online exams, and that most instructors favor multiple-choice exams, may have led to comparable experiences for learners during the online exam procedures. Online tests may be associated with a variety of security issues; it is recommended that they be used for formative rather than summative evaluation to ensure that assessments are accurate, dependable, and adaptable when used in distance learning (Shraim, 2019). Considering the security issues with online examinations, formative evaluation targeted at enhancing learning may be a better option for online assessment rather than grading with summative assessment. On the other hand, it would be difficult to provide a formative evaluation setting that delivers individual feedback in online exams when there are numerous participants (Ilgaz & Adanir, 2020). Furthermore, system quality has been cited as the most fundamental component influencing online exams, e-learning experience, mobile learning, and cloud services (Akar & Mardikyan, 2014). Therefore, improved system quality is likely to boost both LMS use and intentions to use (Alshurideh et al., 2021; Liu et al., 2010). As Dermo (2009) has indicated from learners’ assessment of online exams, it is crucial to improve exam procedures by addressing affective factors, validity, practical issues, reliability, security, as well as learning and teaching considerations.

Limitations

There were some limitations to this study that should be noted when interpreting the research findings. First, this study was limited to evaluating college students’ use of ALMS and Moodle LMS for one semester each. Second, since the institution managed the sequence in which the LMS used in this study were implemented, the inability to alter this sequence should be regarded as one of the crucial limitations. Third, while the data collection instruments used in the LMS comparisons were validated, they were limited to the LMSES and OEAS scales. Fourth, it was assumed that instructors used the LMS efficiently while creating and delivering online exams and related course materials. As well, it was assumed that learners had a sufficient degree of expertise in using the LMS, since the institution provided user guides and support services. Finally, although participation in the LMS surveys was entirely optional, it was assumed that respondents provided honest evaluations.

Conclusion and Practical Implications

In the current study, the use of two LMS in the e-learning process was scrutinized using a time series approach within the context of Transactional Distance Theory. Additionally, the effectiveness of the online exam procedures in each LMS was assessed. The research findings indicated that online exams in Moodle LMS and ALMS received similar assessment ratings, while Moodle had a higher evaluation score for the e-learning process. The findings for the Transactional Distance Theory factors indicated that, while ALMS’s structural aspects were predominant, Moodle’s strength was mostly tied to learners’ autonomy: when evaluated according to the LMSES factors, the average scores for the dialogue and autonomy factors were significantly higher for Moodle LMS than for ALMS. In the comparison of online exams, no statistically significant difference was found between ALMS and Moodle LMS, and for both systems the reliability factor was a stronger indicator than the other factors.

We recommend that in selecting and using an LMS, choices be based on how well its specific characteristics match the demands of the institution. Additionally, we believe that LMS may be used more effectively when e-learning instructors are offered specific training to improve their LMS skills. However, results appear to be comparable when exams are delivered online in an unsupervised setting and are of a similar type. We therefore recommend empirical comparisons of online exams in e-learning environments across various exam types (e.g., supervised vs. unsupervised, multiple choice vs. open-ended).

References

Abuhassna, H., Busalim, A. H., Mamman, B., Yahaya, N., Zakaria, M., Al-Maatouk, Q., & Awae, F. (2022). From student’s experience: Does e-learning course structure influenced by learner’s prior experience, background knowledge, autonomy, and dialogue. Contemporary Educational Technology, 14(1), ep338. https://doi.org/10.30935/cedtech/11386

Akar, E., & Mardikyan, S. (2014). Analyzing factors affecting users’ behavior intention to use social media: Twitter case. International Journal of Business and Social Science, 5(11), 85-95. http://ijbssnet.com/view.php?u=https://www.ijbssnet.com/journals/Vol_5_No_11_1_October_2014/9.pdf

Al-Fraihat, D., Joy, M., Masa’deh, R., & Sinclair, J. (2020). Evaluating e-learning systems success: An empirical study. Computers in Human Behavior, 102(1), 67-86. https://doi.org/10.1016/j.chb.2019.08.004

Alshurideh, M. T., Al Kurdi, B., AlHamad, A. Q., Salloum, S. A., Alkurdi, S., Dehghan, A., Abuhashesh, M., & Masa’deh, R. E. (2021). Factors affecting the use of smart mobile examination platforms by universities’ postgraduate students during the COVID-19 pandemic: An empirical study. Informatics, 8(2), 32. https://doi.org/10.3390/informatics8020032

Arnò, S., Galassi, A., Tommasi, M., Saggino, A., & Vittorini, P. (2021). State-of-the-art of commercial proctoring systems and their use in academic online exams. International Journal of Distance Education Technologies, 19(2), 55-76. https://doi.org/10.4018/IJDET.20210401.oa3

Arora, S., Chaudhary, P., & Singh, R. K. (2021). Impact of coronavirus and online exam anxiety on self-efficacy: The moderating role of coping strategy. Interactive Technology and Smart Education, 18(3), 475-492. https://doi.org/10.1108/ITSE-08-2020-0158

Barut Tuğtekin, E. (2021). Development of the learning management systems evaluation scale based on transactional distance theory. Journal of Educational Technology and Online Learning, 4(3), 503-515. https://doi.org/10.31681/jetol.943335

Baxto da Silva, W., Amaro, R., & Mattar, J. (2019). Distance education and the Open University of Brazil: History, structure, and challenges. International Review of Research in Open and Distributed Learning, 20(4), 99-115. https://doi.org/10.19173/irrodl.v20i4.4132

Bradley, V. M. (2021). Learning management system (LMS) use with online instruction. International Journal of Technology in Education, 4(1), 68-92. https://doi.org/10.46328/ijte.36

Cabi, E., & Ersoy, H. (2021). Technologies used in distance education during the COVID-19 global pandemic and investigation of the opinions of teachers: The case of Baskent University. Journal of Higher Education and Science, 12(1), 168-179.

Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education, 15(1), 1-10. https://doi.org/10.1186/s41239-018-0109-y

Cohen, J. (1988). Statistical power analysis for the behavioural sciences. Earlbaum. https://doi.org/10.4324/9780203771587

Cooley, W. W., & Lohnes, P. R. (1971). Multivariate data analysis. Wiley. https://doi.org/10.1002/bimj.19730150413

Dadashzadeh, M. (2021). The online examination dilemma: To proctor or not to proctor? Journal of Instructional Pedagogies, 25, 1-11. https://eric.ed.gov/?id=EJ1294386

Dermo, J. (2009). e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. British Journal of Educational Technology, 40(2), 203-214. https://doi.org/10.1111/j.1467-8535.2008.00915.x

Dias, S. B., & Diniz, J. A. (2014). Towards an enhanced learning in higher education incorporating distinct learner’s profiles. Educational Technology & Society, 17(1), 307-319. https://www.jstor.org/stable/10.2307/jeductechsoci.17.1.307

Elfeky, A. I. M., Masadeh, T. S. Y., & Elbyaly, M. Y. H. (2020). Advance organizers in flipped classroom via e-learning management system and the promotion of integrated science process skills. Thinking Skills and Creativity, 35, 100622. https://doi.org/10.1016/j.tsc.2019.100622

Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2013). Multivariate data analysis. Pearson.

Hollister, K. K., & Berenson, M. L. (2009). Proctored versus unproctored online exams: Studying the impact of exam environment on student performance. Decision Sciences Journal of Innovative Education, 7(1), 271-294. https://doi.org/10.1111/j.1540-4609.2008.00220.x

Horzum, M. B. (2011). Developing transactional distance scale and examining transactional distance perception of blended learning students in terms of different variables. Educational Sciences: Theory & Practice, 11(3), 1571-1587. https://eric.ed.gov/?id=EJ936610

Huang, R. H., Liu, D. J., Tlili, A., Yang, J. F., & Wang, H. H. et al. (2020). Handbook on facilitating flexible learning during educational disruption: The Chinese experience in maintaining undisrupted learning in COVID-19 outbreak. Smart Learning Institute of Beijing Normal University. https://iite.unesco.org/wp-content/uploads/2020/03/Handbook-on-Facilitating-Flexible-Learning-in-COVID-19-Outbreak-SLIBNU-V1.2-20200315.pdf

Ilgaz, H., & Adanir, G. A. (2020). Providing online exams for online learners: Does it really matter for them? Education and Information Technologies, 25(2), 1255-1269. https://doi.org/10.1007/s10639-019-10020-6

Jaap, A., Dewar, A., Duncan, C., Fairhurst, K., Hope, D., & Kluth, D. (2021). Effect of remote online exam delivery on student experience and performance in applied knowledge tests. BMC Medical Education, 21(1), 1-7. https://doi.org/10.1186/s12909-021-02521-1

Jia, J., & He, Y. (2021). The design, implementation and pilot application of an intelligent online proctoring system for online exams. Interactive Technology and Smart Education, 19(1), 112-120. https://doi.org/10.1108/ITSE-12-2020-0246

Jorczak, R. (2014). Differences in classroom versus online exam performance due to asynchronous discussion. Online Learning Journal, 18(2), 1-9. https://www.learntechlib.org/p/183751/

Jung, S., & Huh, J. H. (2019). An efficient LMS platform and its test bed. Electronics, 8(2), 154. https://doi.org/10.3390/electronics8020154

Karadag, E., Ciftci, S. K., Gok, R., Su, A., Ergin-Kocaturk, H., & Ciftci, Ş. S. (2021). Distance education capacities of universities during the COVID-19 pandemic process. Journal of University Research, 4(1), 8-22.

Kasim, N. N. M., & Khalid, F. (2016). Choosing the right learning management system (LMS) for the higher education institution context: A systematic review. International Journal of Emerging Technologies in Learning, 11(6), 55-61. http://dx.doi.org/10.3991/ijet.v11i06.5644

Kehrwald, B. A., & Parker, B. (2019). Implementing online learning: Stories from the field. Journal of University Teaching & Learning Practice, 16(1), 1. https://doi.org/10.53761/1.16.1.1

Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs. online learning. Frontiers in Psychology, 5, 1278. https://doi.org/10.3389/fpsyg.2014.01278

Khalaf, K., El-Kishawi, M., Moufti, M. A., & Al Kawas, S. (2020). Introducing a comprehensive high-stake online exam to final-year dental students during the COVID-19 pandemic and evaluation of its effectiveness. Medical Education Online, 25(1), 1826861. https://doi.org/10.1080/10872981.2020.1826861

Kwon, S., Kim, W., Bae, C., Cho, M., Lee, S., & Dreamson, N. (2021). The identity changes in online learning and teaching: Instructors, learners, and learning management systems. International Journal of Educational Technology in Higher Education, 18(1), 1-18. https://doi.org/10.1186/s41239-021-00304-8

Lazorak, O., Belkina, O., & Yaroslavova, E. (2021). Changes in student autonomy via e-learning courses. International Journal of Emerging Technologies in Learning, 16(17), 209-225. https://www.learntechlib.org/p/220066/

Liu, Y., Han, S., & Li, H. (2010). Understanding the factors driving m-learning adoption: A literature review. Campus-Wide Information Systems, 27, 210-226. https://doi.org/10.1108/10650741011073761

Markets & Markets. (2022). Learning Management System (LMS) market by component (solutions and services), delivery mode (distance learning, instructor-led training, and blended learning), deployment, user type (academic and corporate), and region (2022 - 2026). https://www.marketsandmarkets.com/Market-Reports/learning-management-systems-market-1266.html

McLachlan, G. J. (1999). Mahalanobis distance. Resonance, 4(6), 20-26. https://www.ias.ac.in/article/fulltext/reso/004/06/0020-0026

Moore, M. G. (1993). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22-38). Routledge. http://www.c3l.uni-oldenburg.de/cde/found/moore93.pdf

Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Wadsworth Publishing.

Nasser, R., Cherif, M., & Romanowski, M. (2011). Factors that impact the usage of the learning management system in Qatari schools. The International Review of Research in Open and Distance Learning, 12(6), 39-62. https://doi.org/10.19173/irrodl.v12i6.985

Oakes, K. (2002). E-learning: LCMS, LMS-They’re not just acronyms but powerful systems for learning. Training & Development, 56(3), 73-75.

Raza, S. A., Qazi, W., Khan, K. A., & Salam, J. (2020). Social isolation and acceptance of the learning management system (LMS) in the time of COVID-19 pandemic: An expansion of the UTAUT model. Journal of Educational Computing Research, 59(2), 1-26. https://doi.org/10.1177/0735633120960421

Research & Markets. (2022). Learning management system (LMS) global market report 2022: By component, delivery mode, deployment mode, end user, and covering. https://www.researchandmarkets.com/reports/5522323/learning-management-system-lms-global-market

Shraim, K. (2019). Online examination practices in higher education institutions: Learners’ perspectives. Turkish Online Journal of Distance Education, 20(4), 185-196. https://doi.org/10.17718/tojde.640588

Stowell, J. R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171. https://doi.org/10.2190/EC.42.2.b

Turnbull, D., Chugh, R., & Luck, J. (2019). Learning management systems: An overview. In A. Tatnall (Ed.), Encyclopedia of education and information technologies (pp. 1-7). Springer Nature. https://doi.org/10.1007/978-3-030-10576-1_248

Turnbull, D., Chugh, R., & Luck, J. (2021). The use of case study design in learning management system research: A label of convenience? International Journal of Qualitative Methods, 20. https://doi.org/10.1177/16094069211004148

Watson, R., & Watson, S. (2007). An argument for clarity: What are learning management systems, what are they not, and what should they become? TechTrends, 51(2), 28-34. https://doi.org/10.1007/s11528-007-0023-y

Weber, J. M., & Lennon, R. (2007). Multi-course comparison of traditional versus Web-based course delivery systems. Journal of Educators Online, 4(2), 1-19. https://eric.ed.gov/?id=EJ907748

Woldeab, D., & Brothen, T. (2021). Video surveillance of online exam proctoring: Exam anxiety and student performance. International Journal of E-Learning & Distance Education, 36(1), 1-26. https://www.ijede.ca/index.php/jde/article/view/1204/1856

Wood, D., Kurtz-Costes, B., & Copping, K. (2011). Gender differences in motivational pathways to college for middle class African-American youths. Developmental Psychology, 47(4), 961-968. https://doi.org/10.1037/a0023745

Yilmaz, O. (2016). Online examination assessment survey. e-Kafkas Journal of Educational Research, 3(3), 26-33. https://dergipark.org.tr/en/pub/kafkasegt/issue/27919/296377

Yolsal, H., & Yorulmaz, O. (2022). The effect of COVID19 pandemic on the performance of higher education students. Kafkas University Economics and Administrative Sciences Faculty, 13(25), 441-472. https://doi.org/10.36543/kauiibfd.2022.019

Yoruk, T., Akar, N., & Erdogan, H. (2020). An analysis of the factors affecting use of learning management system in the framework of extended technology acceptance model with structural equation model. Eskisehir Osmangazi University Journal of Social Sciences, 21(2), 431-449. https://dergipark.org.tr/en/pub/ogusbd/issue/58568/808336

 


Scrutinizing Learning Management Systems in Practice: An Applied Time Series Research in Higher Education by Esra Barut Tuğtekin is licensed under a Creative Commons Attribution 4.0 International License.