International Review of Research in Open and Distributed Learning

Volume 19, Number 3

July - 2018

 

Technology Matters - The Impact of Transactional Distance on Satisfaction in Online Distance Learning


Joshua Weidlich and Theo J. Bastiaens
Department of Instructional Technology & Media, FernUniversität in Hagen, Germany

Abstract

Transactional distance (TD), the perception of psychological distance between the student and their peers, their instructor/teacher, and the learning content, has long been a prominent construct in research on distance education. Today, distance education primarily takes place over the internet, with technology mediating engagement and communication. Because transactional distance in online distance learning will always rely on technologically mediated communication or interaction, we argue that this aspect of technological mediation needs to be considered in order to get the full picture. For this purpose, we introduce a new scale for measuring transactional distance between students and the learning technology (TDSTECH), comprising two interrelated dimensions. Analyses of reliability, convergent validity, and discriminant validity suggest a suitable scale. Preliminary inferential analyses are conducted with multiple linear regression and mediation analysis. Regression models show that TDSTECH is the single most important predictor of satisfaction in this population. This may have important implications for practitioners trying to design and facilitate satisfying online distance learning experiences. Mediation analysis further reveals that TDSTECH mediates the relationship of TD student-teacher with satisfaction, but not that of TD student-content. Surprisingly, TD student-student shows no significant relationship with satisfaction. Implications for practice and further research are discussed.

Keywords: transactional distance, learning technology, satisfaction, distance education, online learning

Distance Education

The latest online learning report by Allen and Seaman (2016) shows that enrollment in distance education is still growing, at a rate of 3.9%. In 2014, 5.8 million US students were enrolled in distance education, half of whom were learning in a fully online environment. Even though distance and online learning are still growing and there have been innovative developments in recent years (e.g., MOOCs and OER), typical problems persist. One such problem is the high level of persistence necessary to successfully complete online and distance classes (Allen & Seaman, 2014). As a result, attrition rates in online and distance education are higher than in face-to-face settings. Since comparing attrition between these very different settings is not trivial (Allen & Seaman, 2014), there are only estimates. According to some scholars, attrition rates for online learning may be as high as 75% (Croxton, 2014). This has been especially prominent in MOOCs, where course completion may be as low as 6.5% (Jordan, 2014).

Given that convenience and flexibility regarding time and location of learning are often put forward as major advantages of online learning, high attrition rates are all the more striking. Many studies have identified factors that influence dropout rates in online and distance learning (Willging & Johnson, 2009; Croxton, 2014; Kauffman, 2015; Adamopoulos, 2013; Selim, 2007; Park & Choi, 2009). However, one very intuitive and straightforward variable, satisfaction, has repeatedly been shown to be positively associated with persistence in online distance learning (Levy, 2007; Schreiner, 2009; Park & Choi, 2009; Joo, Lim, & Kim, 2011; Joo, Lim, & Kim, 2013; Lee & Choi, 2013). Hence, it seems to stand on firm empirical ground that students who are more satisfied with the online learning experience are less likely to drop out. Although the exact mechanism of this relationship, that is, how satisfaction actually results in higher persistence, is not yet fully understood, motivation has been put forward as a possible explanation (Joo, Lim, & Kim, 2011). In this understanding, a student satisfied with online learning will experience higher motivation to continue, and thus is less likely to drop out.

The relationship of satisfaction with persistence is important for online distance learning research, because satisfaction is a variable easily quantified. Other possible factors related to dropout do not always lend themselves to easy measurement, for example, scheduling conflicts, family issues, financial problems, technical issues, academic integration (Park & Choi, 2009). These factors may be hard to measure but systematically improving them may be even harder, especially because some are largely outside of the realm of instructional design, educational technology, or even learning research (e.g., family issues, financial problems). Therefore, exploring ways to systematically improve satisfaction is a rather straightforward and arguably more fruitful avenue to improving persistence.

Transactional Distance

Transactional distance, an influential concept in distance education proposed by Moore (1993), refers to the degree of psychological distance between learner and teacher. It suggests that, although separation by space and time is the most prominent characteristic of distance education, transactional distance is the actual guiding principle in distance education, influencing the process of teaching and learning. Transactional distance may also be perceived in face-to-face education, as it is a relative rather than an absolute term. The extent to which transactional distance will be perceived by the learner is a function of three variables: dialogue, structure, and learner autonomy (Moore, 1993). Depending on how these variables manifest, transactional distance will be higher or lower, allowing for a typology of educational programs. For example, an increase in structure is expected to reduce dialogue, leading to higher transactional distance. The concept of transactional distance has been hugely influential in distance education (1,888 citations of Moore (1993), according to Google Scholar) and has since been applied to different contexts, for example, online learning and e-learning (Chen, 2001; Benson & Samarawickrema, 2009; Goel, Zhang, & Templeton, 2012). Gokool-Ramdoo (2008) suggests that it may well be the most promising contender "for a global theory for further development of distance education" (p. 1). However, there has been criticism. For example, Gorsky and Caspi (2005) argue that empirical support is limited and that propositions of transactional distance theory may be reduced to a tautology.

Analyzing educational programs with regard to their dialogue, structure, and learner autonomy has been one method of researching the basic tenets of transactional distance theory. In this line of research, transactional distance is estimated from the extent to which these variables are manifest in a given educational program. As such, this is an indirect approach to measuring transactional distance, because the construct itself resides in the students' perception (Goel, Zhang, & Templeton, 2012). Consequently, other scholars have conceptualized transactional distance as a psychometric construct, directly measuring perceived transactional distance through self-report (Chen, 2001; Zhang, 2003; Goel, Zhang, & Templeton, 2012; Paul, Swart, Zhang, & MacLeod, 2015; Ekwunife-Orakwue & Teng, 2014). Based on Moore's (1989) classic typology of interaction in distance education, some self-report scales now differentiate between transactional distance of student and teacher (TDST), student and student (TDSS), as well as student and content (TDSC). Measuring these sub-constructs of perceived transactional distance is a worthwhile endeavor because large transactional distance will "prohibit students' active engagement with learning in the online course" (Zhang, 2003, p. 80). Anderson's (2003) equivalency theorem suggests that meaningful learning is supported when at least one of these forms of interaction is at a high level, and that the learning experience will be perceived as even more satisfying if more than one form of interaction is at a high level. Consequently, satisfaction with the learning experience will be higher if transactional distance is lower. Recently, Swart, MacLeod, Paul, Zhang, and Gagulic (2014) developed Relative Proximity Theory in order to compare actual perceptions of transactional distance in a course with those of a subjectively "optimal" course. The resulting relative proximity allows for systematic improvements of the learning experience.

Since transactional distance in online distance learning will always rely on technologically mediated communication or interaction, we argue that, in order to get the full picture, this aspect of technological mediation needs to be considered.

Transactional Distance of Student and Technology

Hillman, Willis, and Gunawardena (1994) highlighted the paradoxical situation that although distance education, as well as online education today, relies on technology to transport communication and content, the influence these technologies may have is neglected. To fill this theoretical gap, they propose "Learner-Interface Interaction [as] a process of manipulating tools to accomplish a task" (p. 34). They propose that a student's basic ability to interact with the necessary technology is expected to facilitate or hamper other interactions, thereby influencing learning in a meaningful way.

Subsequently, Chen (2001) and Zhang (2003) developed measures to assess the transactional distance between the learner and the interface or technology (TDSTECH). Chen (2001) showed that this sub-construct is correlated with, but distinct from, the other constructs of transactional distance. Zhang (2003) found a weak but significant relationship between overall transactional distance and TDSTECH. In a study building on Zhang (2003), Paul et al. (2015) present a revised and more parsimonious version of the transactional distance scale, with only 12 items. This scale predicted more than half of the variance of satisfaction (R2 = .586). Additionally, the authors find that TDSTECH may no longer be a relevant sub-construct, as "most respondents gave such high rankings to these items that the items no longer serve any purpose" (Paul et al., 2015, p. 374). They go on to hypothesize that the items, originally written in 2003, may no longer be relevant because technology, especially web-based technology, has become even more commonplace today and no longer poses a challenge to most users.

However, recent research shows that the technology used in online learning and distance education does indeed still matter and may have profound effects on the learning experience. For example, Thoms and Eryilmaz (2014) showed that the software used to deliver instruction and manage interaction had an impact on satisfaction, student-student interaction, and the learning community, implying an effect on student-student transactional distance. Sun (2016) finds that support, accessibility, and usability of the course technology predicted satisfaction through the instruction-technology fit. Here, instruction-technology fit consists of the interplay between TDSTECH and TDSC, together explaining most of the variance in satisfaction (R2 = .765). Howard, Ma, and Yang (2016), using data mining on a very large dataset (N = 8,817), found that computer self-efficacy was one of two main factors related to positive and negative engagement with digital technologies. The relevance of computer self-efficacy suggests that digital technologies used for learning have by no means become so intuitive and commonplace that the transactional distance relating to them, TDSTECH, has become irrelevant.

Based on this recent research, we hypothesize that TDSTECH is not only still relevant for the assessment of transactional distance but may actually be key to understanding how transactional distance is related to satisfaction with the learning experience. Hillman et al. (1994) suggest this by noting that the interface is a mediating element in all interaction, implying that the other interactions will depend, to some extent, on a student's ability to successfully engage with technology. Analogously, we suggest that TDST, TDSS, and TDSC will be, to some extent, dependent on TDSTECH, and that TDSTECH will be a mediator between these sub-constructs and satisfaction.

A New Scale for TDSTECH

In the original conception of learner-interface interaction by Hillman et al. (1994), the extent to which this interaction will take place is determined by the student on one hand and the interface on the other. In this sense, learner-interface interaction is determined by "the user's interpretation of the interface's perceived actions and visible structure, which form the basis for understanding the interface, predicting its future behavior, and controlling its actions" (p. 34). This makes intuitive sense, because a potential impediment may originate from the student's abilities, for example, if he or she is inexperienced, but it may also originate from the technology itself, for example, if it lacks sufficient usability. Accordingly, we suggest that the amount of perceived transactional distance between student and technology will be determined by two factors: (1) the basic proficiency of the student in using the necessary technology, and (2) the design and functionality (e.g., usability) of the technology itself, as perceived by the student. Transactional distance is expected to emerge from the interplay of these two factors. Of course, they are related, in the sense that perceptions of usability will change with proficiency, and also in the sense that different degrees of usability make different demands on the learner's proficiency. In order to reflect these two factors of TDSTECH, we sought to identify items that represent the proficiency of a student in using technology, as well as items that reflect the perceived usability of the technology.

For the first part, we sought items that would adequately capture the basic proficiency of the student in dealing with technology relevant for learning, for example, the internet, word processing, and the learning management system Moodle. We referred to an existing, previously validated scale, the Online Learning Readiness Scale of Hung, Chou, Chen, and Own (2010). Its dimension of "Computer & Internet self-efficacy," understood as "the individuals' perception of using a given technology and individuals' ability to use the technology, that is, assessment concerning computer/network self-efficacy" (Hung et al., 2010, p. 1082), seemed to fit our conception of learner proficiency. The three items related to this dimension showed adequate composite reliability (0.736), as well as convergent and discriminant validity. To provide a better fit for the context of this research, one of the items was adapted slightly to encompass proficiency in using Moodle, instead of the very broad term "software for online learning" (Hung et al., 2010, p. 1088).

For the second part, we found no adequate and previously validated items. Hence, we referred to the concept of usability, as defined by the International Organization for Standardization (ISO), in order to generate items that reflect this factor. Usability as a concept tries to capture the quality of the user's experience (Bevan, Carter, & Harker, 2015) and is classically defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (International Organization for Standardization [ISO], 1998). Here, effectiveness refers to "the completeness and accuracy with which users achieve specific goals" (Green & Pearson, 2006, p. 68). Applied to the context of online distance learning, a technology is perceived as effective for learning if a student is able to use it in a way that is helpful in reaching a learning goal. A student may perceive Wikipedia as effective for finding information relevant to their studies, or may perceive asynchronous threaded message boards as ineffective for natural communication with their peers. Efficiency adds to this by introducing the resources needed (e.g., time, energy) into the equation. A tool or website may be effective towards a learning goal, but if a student spends too much time navigating or finding the relevant information, efficiency will be perceived as low (Green & Pearson, 2006). Satisfaction is defined as "freedom from discomfort, and positive attitudes towards the use of the product" (ISO, 1998). A student will indicate satisfaction with a technology if, in the process of using it towards a learning goal, he or she associates a positive attitude with the technology.

We sought to identify items that reflect the guidelines of effectiveness, efficiency, and satisfaction. To this end, we generated a pool of 24 face-valid items, intended to cover these dimensions comprehensively. Then, four independent raters were briefed on the definitions of effectiveness, efficiency, and satisfaction in this context and were asked to flag items that they thought to be at odds with these definitions. Three items that were independently flagged by every rater were deleted. In another round, the same four raters were asked to identify, from the remaining 21 items, those that best reflected these dimensions of usability. This resulted in an inter-rater reliability of κ = 0.69 (Fleiss' kappa; substantial, Landis & Koch, 1977). After discussing the unclear items, the raters agreed on nine items, three for each dimension of usability.
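To illustrate how this agreement figure could be reproduced, here is a minimal sketch in Python, assuming the judgments of the four raters on the 21 remaining items are coded as a binary matrix (the data and variable names are hypothetical placeholders; the authors do not report the tooling they used):

```python
# Hypothetical sketch: Fleiss' kappa for the item-selection round.
# Assumes 21 items (rows) rated by 4 raters (columns) with
# 1 = "best reflects a usability dimension" and 0 = "does not."
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(21, 4))  # placeholder for the real ratings

# aggregate_raters turns item-by-rater codes into item-by-category counts
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table, method='fleiss'):.2f}")
# The paper reports kappa = 0.69, "substantial" per Landis and Koch (1977)
```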

This resulted in a preliminary TDSTECH scale with a total of 12 items, with items 1-3 representing the students' proficiency in using technology for learning and items 4-12 representing the usability of the technology. All items can be found in Appendix D.

Research Questions

Based on these considerations, there are four research questions for this study. The first is concerned with the old TDSTECH scale of Zhang (2003). We try to replicate the finding of Paul et al. (2015) that the scale is obsolete for today's generation of students, via inspection of kurtosis and factor structure. Based on these previous findings, we expect the scale to be obsolete (H1). This would be indicated by high kurtosis (>2) in the data, which would in turn suggest the need for a new scale to measure this construct. Research question two is concerned with assessing the relationship between the different forms of transactional distance and satisfaction, in order to understand the relative importance of these predictors. Based on the findings of Paul et al. (2015), we hypothesize that TDST, TDSS, and TDSC will all be predictors of satisfaction (H2). This will be assessed via multiple linear regression. The third research question concerns the reliability and validity of the new TDSTECH scale. This will be assessed through Cronbach's alpha and Principal Component Analysis. Since we have no way of predicting the outcome, there are no hypotheses regarding this research question. The fourth research question is concerned with how TDSTECH is related to the other sub-constructs and satisfaction. We expect TDSTECH to be a predictor of satisfaction (H4.1) and a mediator for the relationship of TDST, TDSS, and TDSC with satisfaction (H4.2). The hypotheses are derived from the definitions of the constructs and their suggested relationships based in the literature review. These hypotheses will be tested via multiple linear regression and bias-corrected bootstrapping mediation analysis, respectively. Note, however, that these analyses have to be considered tentative, as TDSTECH has not been previously validated on a different dataset.

RQ1: How relevant is the old TDSTECH scale?
H1: The items of the old TDSTECH scale are outdated and have little informational value.
RQ2: How are TDST, TDSS, and TDSC related to Satisfaction?
H2: TDST, TDSS, and TDSC are significant predictors for Satisfaction.
RQ3: What is the reliability and validity of a new TDSTECH scale?
RQ4: How is TDSTECH related to TDST, TDSS, TDSC, and Satisfaction?
H4.1: TDSTECH is a significant predictor for Satisfaction.
H4.2: TDSTECH is a mediator for the relationship of TDST, TDSS, and TDSC with Satisfaction.

Data Collection

Data was collected from 141 students at a large distance university in Hagen, Germany. Students were undergraduates in either Psychology or Educational Science and participated in the class "Instructional Technology and Media," in which a total of 550 students were enrolled. Students self-selected into the survey. They were able to participate either through a URL link in the online learning environment or through a pen-and-paper questionnaire in a face-to-face class. Table 1 shows the demographics of this sample.

Table 1

Demographics of the Sample for This Study

Gender: Female (87.3%), Male (11.3%), n/a (1.4%)
Survey mode (online vs. print): Online (78.9%), Print (21.1%)
Program of study: Educational Science (61.3%), Psychology (38.7%)
Age: 26 and younger (28.9%), 26-35 (31.7%), 36-45 (31.7%), 46-55 (22.5%), 56-65 (4.2%), 66 and older (0.7%)

The items for TDST, TDSS, and TDSC were taken from Paul et al. (2015). The satisfaction scale from Weidlich and Bastiaens (2017) was used for this study. Table 2 shows an overview of scales in this study.

Table 2

Scales Used in This Study

Variable name Validation source # items Cronbach's alpha
Satisfaction Weidlich & Bastiaens (2017) 6 0.82
TD Student-Teacher (TDST) Paul et al. (2015) 4 0.85
TD Student-Student (TDSS) Paul et al. (2015) 5 0.92
TD Student-Content (TDSC) Paul et al. (2015) 3 0.72
TD Student-Technology (TDSTECH) - 12 (11) 0.88 (0.87)

Note. Values in parentheses refer to the TDSTECH scale after exclusion of one item (see RQ3).

Analysis

RQ1

Upon analyzing response patterns for the transactional distance student-interface (TDSI) scale (Zhang, 2003) on this dataset, it became clear that, similar to the findings of Paul et al. (2015), some items are problematic. Items 6 and 8 show kurtosis values larger than 2 (2.96 and 2.41, respectively). For both items, respondents chose the highest values (4 and 5) in most cases (88% and 89.4%, respectively). Also, for item 5, no values smaller than 3 were chosen at all. Upon inspection of the items ("I don't like using the internet," "the technology used in this course is difficult to learn," "I feel comfortable using the computer"), it becomes clear that these items may be outdated and no longer apply to learners today. This replicates the findings of Paul et al. (2015). H1 is supported.
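A check of this kind is straightforward to script. The following is a minimal sketch, assuming the item responses sit in a CSV file with one column per item (the file and column names are hypothetical):

```python
# Hypothetical sketch of the ceiling-effect check on the old TDSI items,
# assuming 5-point Likert responses in a pandas DataFrame.
import pandas as pd
from scipy.stats import kurtosis

df = pd.read_csv("tdsi_items.csv")  # placeholder file name

for item in ["tdsi_5", "tdsi_6", "tdsi_8"]:  # hypothetical column names
    values = df[item].dropna()
    k = kurtosis(values, fisher=True, bias=False)  # excess kurtosis, as in SPSS
    top_share = values.isin([4, 5]).mean()
    print(f"{item}: kurtosis = {k:.2f}, share of 4/5 responses = {top_share:.1%}")
# Items with kurtosis > 2 and responses piled in the top categories are
# flagged as uninformative, mirroring the criterion described above.
```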

RQ2

Linear multiple regression shows that TDSC (β = .273, p = .002) and TDST (β = .188, p = .025) are significant predictors of satisfaction. TDSS does not significantly predict satisfaction (β = .155, p = .07). Together, the predictors account for an adjusted R2 = .213, F(3, 136) = 13.54 (p < .001). This is a notable difference from Paul et al. (2015), in which all TD sub-constructs were significant predictors. In addition, in this study TDSC is the strongest predictor, whereas in Paul et al. (2015) TDST was the strongest. Also, the predictive capability of the sub-constructs amounts to a much lower R2 in the present study than in Paul et al. (2015), where R2 = .586. H2 is partly supported, with only TDSC and TDST being significant predictors.
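For transparency, the regression reported above can be expressed as a short script. This is a sketch only; the authors do not state their software for this step, and the column names are hypothetical:

```python
# Hypothetical sketch of the RQ2 regression of satisfaction on the
# three transactional distance sub-constructs.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")[["satisfaction", "tdsc", "tdss", "tdst"]]

model = smf.ols("satisfaction ~ tdsc + tdss + tdst", data=df).fit()
print(model.summary())  # R2, F, and per-predictor coefficients, t, p

# Standardized coefficients (beta), as reported in Table 4, can be
# obtained by z-scoring all variables before fitting:
z = (df - df.mean()) / df.std()
beta_model = smf.ols("satisfaction ~ tdsc + tdss + tdst", data=z).fit()
print(beta_model.params)
```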

Table 3

Model Summary

Model R R2 Adjusted R2 RMSE
1 0.480 0.230 0.213 0.643

Table 4

Regression Coefficients

Model B SE β t p
1 intercept 1.340 0.357 3.751 < .001
TDSC 0.299 0.097 0.273 3.082 0.002
TDSS 0.123 0.068 0.155 1.826 0.070
TDST 0.177 0.078 0.188 2.268 0.025

RQ3

To answer RQ3, different analyses were conducted to assess the validity and reliability of the new TDSTECH scale. Principal Component Analysis (PCA) with the eigenvalue > 1 criterion and oblique rotation shows a four-component solution (Appendix A). The items were developed with four components in mind: learner readiness for using technology, effectiveness, efficiency, and satisfaction, as explained above. Apparently, respondents were able to differentiate between the four components. However, as Appendix A shows, one item (Usab9) loads on satisfaction instead of efficiency, suggesting this item was not interpreted as intended. It will therefore be excluded from the scale. All in all, it seems that the component structure supports the validity of the items in representing the TDSTECH construct and its sub-dimensions.
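As an illustration of this step, the following sketch runs a principal-component extraction with oblique (oblimin) rotation using the factor_analyzer package. The tooling is an assumption (the analysis was likely run in SPSS), and the file name is a placeholder:

```python
# Hypothetical sketch: component analysis of the 12 TDSTECH items with
# eigenvalue > 1 retention and oblique rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("tdstech_items.csv")  # 12 items, placeholder file

# First pass without rotation to count components with eigenvalue > 1
fa = FactorAnalyzer(n_factors=items.shape[1], rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_comp = int((eigenvalues > 1).sum())  # expected here: 4

# Second pass with oblimin rotation on the retained components
fa = FactorAnalyzer(n_factors=n_comp, rotation="oblimin", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns).round(2)
print(loadings)  # inspect which items load on which component (cf. Appendix A)
```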

Reliability, as assessed via Cronbach's alpha, was found to be good, α = 0.873. In terms of convergent validity, TDSTECH is expected to correlate highly with the old student-interface TD scale (Zhang, 2003) and moderately with all other TDs. This was supported (Appendix C). In terms of discriminant validity, the four TDSTECH components should load on different components than TDSS, TDST, and TDSC. This was supported, as a PCA with the eigenvalue > 1 criterion and oblique rotation shows a seven-component solution (Appendix B). This suggests that all scales do indeed measure different, albeit correlated, constructs.
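Cronbach's alpha itself is simple to compute directly from the item scores. A minimal sketch, assuming the 11 retained items are columns of a DataFrame (file and column names are hypothetical):

```python
# Hypothetical sketch: Cronbach's alpha for the TDSTECH scale after
# excluding item Usab9.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))"""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("tdstech_items.csv").drop(columns=["SI_Usab9"])
print(f"alpha = {cronbach_alpha(items):.3f}")  # paper reports 0.873
```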

RQ4

In a first attempt to assess the relationship of TDSTECH with the other sub-constructs, as well as with satisfaction with the learning experience, TDSTECH is introduced into the regression model. Zero-order correlations between all variables are highly significant (p < .001) and range from r = .29 to r = .54 (Table 5).
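The correlation matrix in Table 5 can be reproduced in one line; a sketch with hypothetical file and column names:

```python
# Hypothetical sketch: zero-order Pearson correlations (cf. Table 5).
import pandas as pd

cols = ["satisfaction", "tdst", "tdsc", "tdss", "tdstech"]
df = pd.read_csv("survey_scores.csv")[cols]
print(df.corr(method="pearson").round(2))
```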

Table 5

Pearson Correlations

Satisfaction TDST TDSC TDSS TDSTECH
Satisfaction - 0.34*** 0.41*** 0.30*** 0.54***
TDST - 0.40*** 0.29*** 0.51***
TDSC - 0.45*** 0.38***
TDSS - 0.32***
TDSTECH -

*** p <.001

Results of the regression analysis show that TDSTECH is a much stronger predictor than the other sub-constructs. In fact, it leaves only TDSC as a significant predictor: TDSTECH (β = .411, p < .001), TDSC (β = .208, p < .05), TDSS (β = .097, p > .05), and TDST (β = .019, p > .05). Because all sub-constructs are correlated and TDSTECH has such a strong impact on the model, multicollinearity might be considered an issue. However, Variance Inflation Factors (VIF) do not suggest so, as they are well below common rules of thumb such as 4 or 10 (Table 7; O'Brien, 2007).
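The VIF check can be reproduced as follows; a sketch under the assumption that the four predictor scores are columns of a DataFrame (names hypothetical):

```python
# Hypothetical sketch: variance inflation factors for the four predictors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = pd.read_csv("survey_scores.csv")[["tdsc", "tdss", "tdst", "tdstech"]]
X = sm.add_constant(X)  # VIFs should be computed with the intercept included

for i, name in enumerate(X.columns):
    if name != "const":
        print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.3f}")
# Values well below the conventional cutoffs of 4 or 10 indicate that
# multicollinearity is not a concern (cf. Table 7).
```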

Table 6

Model Summary

Model R R2 Adjusted R2 RMSE
1 0.587 0.345 0.326 0.595

Table 7

Regression Coefficients

Model B SE β t p Collinearity statistics
Tolerance VIF
1 intercept 0.754 0.352 2.144 0.034
TDSC 0.228 0.091 0.208 2.503 0.013 0.701 1.426
TDSS 0.077 0.063 0.097 1.219 0.225 0.766 1.305
TDST 0.018 0.079 0.019 0.230 0.818 0.686 1.457
TDSTECH 0.442 0.091 0.411 4.872 < .001 0.681 1.467

Because the introduction of TDSTECH into the model substantially decreases the beta of TDSS and TDST, and to a smaller degree that of TDSC, an indirect effect is expected here. This was also implicitly suggested in the original conception of TD student-interface by Hillman et al. (1994). Therefore, a model with TDSTECH as mediator will be tested. Classical tests of mediation are the Baron and Kenny (1986) procedure and the Sobel test (Sobel, 1986). Because these methods have been associated with shortcomings (Hayes & Scharkow, 2013; Shrout & Bolger, 2002), a bias-corrected bootstrapping procedure was used instead. The PROCESS macro for IBM SPSS is a tool for path analysis-based moderation and mediation, as well as conditional process models (e.g., moderated mediation and mediated moderation) (Hayes, 2013).

In the present study, there are three independent variables to be assessed regarding mediation. This raises the question of whether they should be analyzed separately or simultaneously. Running the analysis separately for each independent variable without controlling for the others is expected to result in larger effects. However, because the predictor variables are correlated, the results may be confounded, as the effects would not be unique to the predictor variable but instead shared with its correlates. The effect sizes would then be misleading. Therefore, all independent variables were simultaneously included in the mediation model. Because PROCESS has no preset models for this case, the model was run three times, once with each TD as predictor, TDSTECH as mediator, and the remaining two TDs as covariates. This approach yields the same results as if the model had been estimated simultaneously (Hayes, 2013).

Table 8 shows the estimated indirect effects and effect sizes in κ2. The analysis yields a non-significant indirect effect of TDSS on satisfaction through TDSTECH, b = .046, BCa CI [-.004, .112], κ2 = .058; a significant indirect effect of TDST on satisfaction through TDSTECH, b = .159, BCa CI [.092, .253], κ2 = .171; and a non-significant indirect effect of TDSC on satisfaction through TDSTECH, b = .071, BCa CI [.002, .184], κ2 = .061.
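The logic of this analysis can be illustrated without PROCESS. The sketch below estimates the product-of-paths (a*b) indirect effect for one predictor, with the other two sub-constructs as covariates, and bootstraps a confidence interval. Note that it uses a simple percentile interval, whereas PROCESS as used here computes bias-corrected intervals, and all file and column names are hypothetical:

```python
# Hypothetical sketch: bootstrapped indirect effect of TDST on
# satisfaction through TDSTECH, controlling for TDSC and TDSS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")  # placeholder file name

def indirect_effect(data, x, mediator="tdstech", y="satisfaction"):
    """Product-of-paths (a*b) estimate, with the remaining TDs as covariates."""
    covs = [c for c in ("tdsc", "tdss", "tdst") if c != x]
    rhs = " + ".join([x] + covs)
    a = smf.ols(f"{mediator} ~ {rhs}", data).fit().params[x]
    b = smf.ols(f"{y} ~ {mediator} + {rhs}", data).fit().params[mediator]
    return a * b

rng = np.random.default_rng(1)
n = len(df)
boot = np.array([indirect_effect(df.iloc[rng.integers(0, n, n)], x="tdst")
                 for _ in range(5000)])  # resample cases with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect b = {boot.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero indicates a significant indirect effect (cf. Table 8)
```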


Figure 1. Conceptual mediation model to be tested.

Table 8

Summary of Three Mediation Models With Covariates

Effect Mediator Covariates Indirect effect [95% CI] Bootstr. SE Effect size κ2 [95% CI] Bootstr. SE
TDST on Satisfaction TDSTECH TDSC, TDSS .159 [.092, .253] .041 .171 [.104, .265] .039
TDSC on Satisfaction TDSTECH TDSS, TDST .071 [.002, .184] .044 .061 [.002, .161] .036
TDSS on Satisfaction TDSTECH TDSC, TDST .046 [-.004, .112] .029 .058 [-.004, .136] .035

Table 9

Summary of Path Coefficients

Path Coefficient
TDST→TDSTECH .359 [.224, .494]***
TDST→Satisfaction .018 [-.138, .175]
TDSC→TDSTECH .161 [-.007, .329]
TDSC→Satisfaction .228 [.048, .408]*
TDSS→TDSTECH .105 [-.012, .222]
TDSS→Satisfaction .077 [-.048, .202]
TDSTECH→Satisfaction .442 [.263, .621]***

* p < .05. *** p < .001.


Figure 2. Mediation Model based on path coefficients.

Interpretation

The regression model of TDST, TDSC, TDSS, and satisfaction shows that only TDSC and TDST are significant predictors. This is interesting, because other lines of research have consistently shown the importance of certain social aspects of online learning (Richardson, Maeda, Lv, & Caskurlu, 2017). In the present student population, it seems that being able to meaningfully engage with the learning content and feeling a certain psychological closeness to the instructor are essential for a satisfying experience, while psychological closeness to one's peers is not.

In line with previous research, results show that TDSTECH is indeed an important predictor of satisfaction. This is not surprising when considering that TDSTECH underlies all interaction within an online distance learning context. Effectively, there is no interaction with peers, content, or instructors without the antecedent of interacting with the technology itself. Accordingly, perceptions of transactional distance will be strongly influenced by the views, attitudes, and experiences a student has with the mediating technology. We suggest that this psychological distance may underlie all possible ways of interaction for online and distance learners and is therefore essential to understanding their experience. This notion is supported by the regression model, in which TDSTECH is by far the strongest predictor of satisfaction, so much so that its introduction to the model leaves only TDSC and TDSTECH as significant predictors.

The mediation model, however, suggests that only TDST is significantly mediated by TDSTECH. This can be taken to mean that the influence of the perceived distance between student and instructor on satisfaction with the learning experience is influenced by TDSTECH in such a way that no direct effect of TDST on satisfaction remains. An explanation for this may be that students primarily engage with instructors via technology that is unique to their distance education provider. In this population, students usually contact their instructors via Moodle or the university's own e-mail system. In these forms of communication, students' computer self-efficacy and the technology's usability may indeed be relevant to understanding the perceived distance between students and teachers, as well as its relationship to satisfaction. Past research has shown that the presence of teaching faculty is an important part of the learning experience. Some studies have identified it as the most important aspect, as per student opinion (e.g., Maddrell, Morrison, & Watson, 2017). This study suggests, however, that there may be barriers regarding the psychological distance of students and teachers. The technology of the learning environment, as well as the student's proficiency in handling this technology, may create barriers to engagement with faculty. Because students rely on teachers for support, reducing this transactional distance is critical.

On the other hand, TDSC has a direct effect on satisfaction without a mediating influence of TDSTECH. This is interesting because it suggests that the relationship of student-content interaction with satisfaction does not rely on perceptions of the technology. It seems that the ability to meaningfully engage with the learning content is associated with satisfaction, no matter the technology involved. Again, this may be explained by characteristics of the population, as determined by the university's instructional methods. Much of the most important content is still delivered via print material. Although there is supplemental material in Moodle, most of the relevant content is still accessed offline. Because engaging with print material is not determined by perceptions of one's own computer proficiency or the technology's usability, TDSTECH plays no significant role here. To the extent that other online distance learning providers deliver their main content via technology, this direct effect may disappear and a mediating effect of TDSTECH may emerge.

Since TDSS was not a significant predictor of satisfaction in the regression model, there was also no direct or indirect effect in the mediation model. Here, the psychological distance to one's peers does not seem to impact satisfaction with the learning experience. This is surprising, as it does not conform to research on social aspects of online learning.

Conclusion and Limitations

This study has demonstrated the importance of TDSTECH for understanding how satisfying online and distance learning experiences come to be. Although TDSTECH is partly determined by individual student characteristics that cannot be readily manipulated, choosing delivery and communication technology according to its usability is possible. Results of this study suggest that this decision may be more critical for improving satisfaction than previously thought. Statistical analyses of student responses show the importance of effective, efficient, and satisfying learning technology in fostering closeness between student and technology, which in turn mediates TDST and is highly relevant for student satisfaction. Online and distance education providers may be advised to pilot-test learning technology and use the TDSTECH scale to assess the adequacy of the technology in relation to students' computer proficiency.

Although we proposed an updated TDSTECH scale with good predictive capabilities in terms of satisfaction, we have no data regarding the relevance of TDSTECH for actual student learning. It has been notoriously difficult to find consistent evidence for relationships between affective aspects like psychological distance and actual cognitive learning gains in terms of achievement measures. For example, even though Hostetter and Busch (2012) and Joksimović, Gašević, Kovanović, Riecke, and Hatala (2015) found some evidence of relationships between social presence indicators and test scores via content analysis, the evidence base is still shaky. This is not only because these studies are correlational, with relationships susceptible to confounding variables, but more importantly because there is still no substantive theory linking these affective variables to cognitive learning. Because of these shortcomings in the literature, cognitive learning gains cannot be reliably predicted via affective aspects like psychological distance. However, because such aspects do have empirically supported predictive capabilities with regard to satisfaction, which in turn is associated with retention and persistence, understanding these variables may be the most promising course of action for the time being.

One major limitation is a methodological one. The TDSTECH scale was not validated on a separate data set. Instead, validation and inference were conducted on one sample. Therefore, caution in interpreting the results is advised. TDSTECH, as well as its relationship to other relevant variables should be tested in different settings and possibly with a larger N. It may be interesting to find out if TDSTECH is equally relevant in different settings and how this may influence mediation patterns.

References

Adamopoulos, P. (2013). What makes a great mooc? An interdisciplinary analysis of student retention in online courses. 34th International Conference on Information Systems: ICIS 2013. Retrieved from http://pages.stern.nyu.edu/~padamopo/What%20makes%20a%20great%20MOOC.pdf

Allen, I. E., & Seaman, J. (2014). Grade change. Tracking online education in the United States. Babson Survey Research Group and Quahog Research Group, LLC. Retrieved from http://onlinelearningsurvey.com/reports/onlinereportcard.pdf

Allen, I. E., & Seaman, J. (2016). Online report card: Tracking online education in the United States. Babson Survey Research Group and Quahog Research Group, LLC. Retrieved from http://sloanconsortium.org/publications/survey/grade-change-2013

Anderson, T. (2003). Getting the mix right again: An updated and theoretical rationale for interaction. The International Review of Research in Open and Distributed Learning, 4(2). doi: 10.19173/irrodl.v4i2.149

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.

Benson, R., & Samarawickrema, G. (2009). Addressing the context of e-learning: using transactional distance theory to inform design. Distance Education, 30(1), 5-21.

Bevan, N., Carter, J., & Harker, S. (2015). ISO 9241-11 revised: What have we learnt about usability since 1998? In International Conference on Human-Computer Interaction (pp. 143-151). Cham: Springer International Publishing.

Chen, Y. J. (2001). Dimensions of transactional distance in the world wide web learning environment: A factor analysis. British Journal of Educational Technology, 32(4), 459-470.

Croxton, R. A. (2014). The role of interactivity in student satisfaction and persistence in online learning. Journal of Online Learning and Teaching, 10(2), 314.

Ekwunife-Orakwue, K. C., & Teng, T. L. (2014). The impact of transactional distance dialogic interactions on student learning outcomes in online and blended environments. Computers & Education, 78, 414-427.

Goel, L., Zhang, P., & Templeton, M. (2012). Transactional distance revisited: Bridging face and empirical validity. Computers in Human Behavior, 28(4), 1122-1129.

Gokool-Ramdoo, S. (2008). Beyond the theoretical impasse: Extending the applications of transactional distance education theory. The International Review of Research in Open and Distributed Learning, 9(3). doi: 10.19173/irrodl.v9i3.541

Gorsky, P., & Caspi, A. (2005). A critical analysis of transactional distance theory. The Quarterly Review of Distance Education, 6(1), 1-11.

Green, D., & Pearson, J. M. (2006). Development of a web site usability instrument based on ISO 9241-11. Journal of Computer Information Systems, 47(1), 66-72.

Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. London: Guilford Press.

Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis does method really matter? Psychological Science, 24(10), 1918-1927.

Hillman, D. C. A., Willis, D. J., & Gunawardena, C. N. (1994). Learner-interface interaction in distance education: An extension of contemporary models and strategies for practitioners. American Journal of Distance Education, 8(2), 30-42.

Hostetter, C., & Busch, M. (2012). Measuring up online: The relationship between social presence and student learning satisfaction. Journal of the Scholarship of Teaching and Learning, 6(2), 1-12.

Howard, S. K., Ma, J., & Yang, J. (2016). Student rules: Exploring patterns of students' computer-efficacy and engagement with digital technologies in learning. Computers & Education, 101, 29-42.

Hung, M. L., Chou, C., Chen, C. H., & Own, Z. Y. (2010). Learner readiness for online learning: Scale development and student perceptions. Computers & Education, 55(3), 1080-1090.

International Organization for Standardization. (1998). Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability (ISO Standard No. 9241-11:1998). Retrieved from https://www.iso.org/standard/16883.html

Joksimović, S., Gašević, D., Kovanović, V., Riecke, B. E., & Hatala, M. (2015). Social presence in online discussions as a process predictor of academic performance. Journal of Computer Assisted Learning, 31(6), 638-654.

Jordan, K. (2014). Initial trends in enrolment and completion of massive open online courses. The International Review of Research in Open and Distributed Learning, 15(1). doi: 10.19173/irrodl.v15i1.1651

Joo, Y. J., Lim, K. Y., & Kim, E. K. (2011). Online university students' satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictors in a structural model. Computers & Education, 57(2), 1654-1664.

Joo, Y. J., Lim, K. Y., & Kim, J. (2013). Locus of control, self-efficacy, and task value as predictors of learning outcome in an online university context. Computers & Education, 62, 149-158.

Kauffman, H. (2015). A review of predictive factors of student success in and satisfaction with online learning. Research in Learning Technology, 23.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics 33(1), 159-174.

Lee, Y., & Choi, J. (2013). A structural equation model of predictors of online learning retention. The Internet and Higher Education, 16, 36-42.

Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48(2), 185-204.

Maddrell, J. A., Morrison, G. R., & Watson, G. S. (2017). Presence and learning in a community of inquiry. Distance Education, 38(2), 245-258.

Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1-6.

Moore, M. G. (1993). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22-38). New York: Routledge.

O'Brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality & Quantity, 41(5), 673-690.

Park, J. H., & Choi, H. J. (2009). Factors influencing adult learners' decision to drop out or persist in online learning. Educational Technology & Society, 12(4), 207-217.

Paul, R. C., Swart, W., Zhang, A. M., & MacLeod, K. R. (2015). Revisiting Zhang's scale of transactional distance: Refinement and validation using structural equation modeling. Distance Education, 36(3), 364-382.

Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students' satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior, 71, 402-417.

Schreiner, L. A. (2009). Linking student satisfaction and retention. Coralville, IA: Noel-Levitz. Retrieved from https://cmsro.uwstout.edu/admin/provost/upload/LinkingStudentSatis0809.pdf

Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49(2), 396-413.

Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: New procedures and recommendations. Psychological Methods, 7(4), 422-445.

Sobel, M. E. (1986). Some new results on indirect effects and their standard errors in covariance structure models. Sociological Methodology, 16, 159-186.

Sun, J. (2016). Multi-dimensional alignment between online instruction and course technology: A learner-centered perspective. Computers & Education, 101, 102-114.

Swart, W., MacLeod, K., Paul, R., Zhang, A., & Gagulic, M. (2014). Relative proximity theory: Measuring the gap between actual and ideal online course delivery. American Journal of Distance Education, 28(4), 222-240.

Thoms, B., & Eryilmaz, E. (2014). How media choice affects learner interactions in distance learning classes. Computers & Education, 75, 112-126.

Weidlich, J., & Bastiaens, T. J. (2017). Explaining social presence and the quality of online learning with the SIPS model. Computers in Human Behavior, 72, 479-487.

Willging, P. A., & Johnson, S. D. (2009). Factors that influence students' decision to dropout of online courses. Journal of Asynchronous Learning Networks, 13(3), 115-127.

Zhang, A. M. (2003). Transactional distance in web-based college learning environments: Toward measurement and theory construction (Unpublished doctoral dissertation). Virginia Commonwealth University, Richmond, VA.

Appendix A

Component Loadings of TDSTECH Items

1 2 3 4 Uniqueness
SI_PC_readin1 . . 0.911 . 0.266
SI_PC_readin2 . . 0.896 . 0.227
SI_PC_readin3 . . 0.588 . 0.267
SI_Usab4 . 0.860 . . 0.163
SI_Usab5 . 0.933 . . 0.140
SI_Usab6 . 0.908 . . 0.202
SI_Usab7 . . . 0.916 0.159
SI_Usab8 . . . 0.939 0.159
*SI_Usab9 0.757 . . . 0.445
SI_Usab10 0.926 . . . 0.140
SI_Usab11 0.936 . . . 0.130
SI_Usab12 0.936 . . . 0.136

* item excluded from further analyses

Appendix B

Component Loadings of all TD Items

1 2 3 4 5 6 7 Uniqueness
TDSC_1 . . . . . . 0.727 0.321
TDSC_2 . . . . . . 0.954 0.279
TDSC_3 . . . . . . 0.638 0.412
SI_PC_readin1 . . . . 0.908 . . 0.264
SI_PC_readin2 . . . . 0.857 . . 0.226
SI_PC_readin3 . . . . 0.599 . . 0.234
SI_Usab4 . . . 0.830 . . . 0.156
SI_Usab5 . . . 0.939 . . . 0.124
SI_Usab6 . . . 0.810 . . . 0.208
SI_Usab7 . . . . . 0.877 . 0.147
SI_Usab8 . . . . . 0.751 . 0.319
SI_Usab9 . 0.897 . . . . . 0.120
SI_Usab10 . 0.898 . . . . . 0.105
SI_Usab11 . 0.894 . . . . . 0.113
TDSS_1 0.919 . . . . . . 0.179
TDSS_2 0.909 . . . . . . 0.170
TDSS_3 0.900 . . . . . . 0.169
TDSS_4 0.945 . . . . . . 0.101
TDSS_5 0.636 . . . . . . 0.353
TDST_1 . . . . . 0.814 . 0.312
TDST_2 . . 0.783 . . . . 0.301
TDST_3 . . 0.885 . . . . 0.115
TDST_4 . . 0.882 . . . . 0.177

Appendix C

Pearson Correlations

SI_old TDSTECH TDSC TDSS TDST
SI_old - 0.784*** 0.453*** 0.354*** 0.489***
TDSTECH - 0.382*** 0.316*** 0.514***
TDSC - 0.449*** 0.399***
TDSS - 0.290***
TDST -

*** p <.001

Appendix D

Scales Used in This Study

TDSC = Transactional distance between students and content
1 This course emphasized SYNTHESIZING and organizing ideas, information, or experiences.
2 This course emphasized MAKING JUDGEMENTS about the value of information, arguments, or methods.
3 This course emphasized APPLYING theories and concepts to practical problems or in new situations.
TDSS = Transactional distance between students
1 I get along well with my classmates
2 I feel valued by the class members in this online class
3 My classmates in this online class value my ideas and opinions very highly
4 My classmates respect me in this online class
5 The class members are supportive of my ability to make my own decisions
TDST = Transactional distance between students and teacher
1 The instructor pays no attention to me*
2 I receive prompt feedback from the instructor on my academic performance
3 The instructor was helpful to me
4 The instructor can be turned to when I need help in the course
TDSTECH = Transactional distance between student and technology
1 I feel confident in using office-programs like Word and Excel
2 I feel competent in researching information and finding resources on the internet
3 I feel confident in using the online learning environment Moodle
4 Moodle was helpful in supporting my learning activities
5 Moodle was helpful in reaching my learning goals
6 I feel that Moodle supported my learning
7 I experienced frustration using Moodle*
8 I had to consciously think about how to use Moodle*
9 I feel comfortable using Moodle
10 I feel satisfied using Moodle
11 I find it pleasant to use Moodle
Satisfaction with the learning experience
1 I benefited from this course
2 This course met my expectations
3 I experienced and learned new things in this course
4 The content covered in this course was not interesting*
5 I would like to take more courses like this one
6 I wish other courses were more like this one

*reverse coded items

 


Technology Matters - The Impact of Transactional Distance on Satisfaction in Online Distance Learning by Joshua Weidlich and Theo J. Bastiaens is licensed under a Creative Commons Attribution 4.0 International License.