February 2015

Making Sense of Video Analytics: Lessons Learned from Clickstream Interactions, Attitudes, and Learning Outcome in a Video-Assisted Course


Michail N. Giannakos1, Konstantinos Chorianopoulos1, 2, and Nikos Chrisochoides3
1Norwegian University of Science and Technology, Norway, 2Ionian University, Greece, 3Old Dominion University, USA

Abstract

Online video lectures have been considered an instructional medium for various pedagogical approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity to record student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of users with their systems. For this purpose, we designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study that provides valuable insights through the lens of the collected video analytics. In particular, we found a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicate that learning performance improved slightly and stabilized after the third week of the video-assisted course. We also found that attitudes regarding the easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and present the lessons learned for the further development and refinement of video-assisted courses and practices.

Keywords: User interactions; learning analytics; video lecture; open learning system; analytics triangulation; controlled experiment

Introduction

Video has become widely employed for learning in recent years. Video-based learning techniques and practices are applied in a variety of ways, such as the flipped classroom, small private online courses (SPOCs), and xMOOCs. Learners enjoy video streaming from different platforms (e.g., YouTube) on a diverse range of devices (TV, desktop, smartphone, tablet) and create billions of simple interactions. Observations of this learning activity can be converted via analytics into useful information (Giannakos et al., 2013) for the benefit of all video learners. Video learning analytics (LA) can allow researchers and educators to understand and improve the effectiveness of video-based learning tools and practices.

It is essential that students be actively involved in the learning process; however, content is still often taught via the traditional lecture approach, which places students in the role of passive learners and offers limited options for interactivity and engagement with the course material. Prominent technologies that have gained much attention include social media such as wikis, Facebook, and YouTube (Siemens, 2011), used for information exchange, collaboration, instruction, and interaction with course materials. With the myriad of technologies available today, many instructors face the dilemma of how to take full advantage of instructional media to provide more effective student-centred instruction.

Traditional lectures may no longer primarily serve the purpose of disseminating information, which can be retrieved from many online video lecture repositories at any time. Video lectures have given rise to flipped (or inverted) classrooms and even assist SPOCs (Fox, 2013). This specific type of blended-learning classroom utilizes technology, such as video, to move lectures outside the classroom, thereby giving students and teachers time for active learning in the classroom (Roehl, 2012). By using learning materials as a supplement to classroom teaching rather than a replacement for it, these techniques attempt to increase instructor leverage, student throughput, student mastery, and engagement (Fox, 2013). At the same time, recent technical and infrastructural developments (Giannakos, 2013) make the potential of video-assisted learning ripe for exploration. Capturing and sharing learners' diverse interactions with these emerging learning technologies can clearly provide scholars and educators with valuable information.

Video resources have emerged as one of the premier forms of learning material. In this paper we use the term video-assisted learning to refer to the systematic use of video resources for the purpose of achieving defined competences. Hence, video-assisted learning might be defined as the process of acquiring defined knowledge, competences, and skills with the systematic assistance of video resources. With the widespread adoption of online video lecture communities, such as Khan Academy and VideoLectures.net, research into how students learn via video lectures has become critical. Despite the significant body of related research on the impact of video lectures (Giannakos, 2013), the majority of previous efforts have focused on sporadic or single uses of video lectures in an educational context (Evans, 2008) and/or on the investigation of a single factor, such as student performance (Kazlauskas and Robinson, 2012). Therefore, the longitudinal collection of diverse LA and their interpretation through triangulation will allow us to better understand how students learn and interact with videos.

Researchers have recently begun to study student interactions with video-assisted learning in order to provide educators with valuable information about students (e.g., Khan Academy, Coursera). The capture and analysis of this information is still at an embryonic research stage, and the experimental instrumentation and methodology described herein will significantly enhance this critical research effort.

The next section outlines the related work and the focus of this research; the third section presents the system, its validation, and its final adjustments; the fourth section describes the methodology of the study; the fifth section presents the empirical results; and the last section discusses the results and limitations, suggests implications, and makes recommendations for future research.

Related Work

The video lecture has emerged as one of the premier open educational resources (McGreal et al., 2012). Today, advanced video repository systems (e.g., Khan Academy, PBS Teachers, MoMA's Modern Teachers) have seen enormous growth through social software tools and the possibilities these tools offer for enhancing videos. Most social software tools, including wikis, weblogs, Facebook, Twitter, MySpace, and e-portfolios, can potentially provide a vehicle to promote video lectures.

In addition, many instructors in higher education are implementing video lectures in a variety of ways, such as broadcasting lectures in distance education (Maag, 2006), delivering recordings of in-class lectures alongside face-to-face meetings for review purposes (Brotherton and Abowd, 2004), and delivering lecture recordings before class to conserve class time, thereby freeing it for hands-on activities (Day and Foley, 2006). Other uses include showing videos that demonstrate course topics (Jadin et al., 2009) and providing supplementary video learning materials for self-study (Dhonau and McAlpine, 2002). Researchers have delineated the educational advantages and disadvantages of video lectures (Traphagan et al., 2010; Ljubojevic et al., 2014). However, previous efforts have mainly focused on the sporadic use of video lectures and on the investigation of a single feature.

Students using video lectures enjoy control over when and where they learn, what they need to learn, and the pace of their learning (Heilesen, 2010). The study habits of students using video lectures have been shown to improve, including a fostering of independence (Jarvis and Dickie, 2009), an increase in self-reflection (Leijen et al., 2008), more efficient test preparation (McCombs and Liu, 2007), and the practice of reviewing material more regularly (O’Bryan and Hegelheimer, 2007). Learner control in well-designed video lectures can be beneficial in terms of convenience and supplemental practice (Hannafin, 1984). Students use videos with many different patterns (Ullrich et al., 2013) and report a variety of reasons for using video lectures (Traphagan et al., 2010; Donkor, 2011). Van Zanten et al. (2012) indicate that students widely use video lectures for revision and review during exam preparation. Harris and Park (2008) argue that video lectures can also serve several other purposes, including dissemination of material, supplementing class materials, guest lecture presentations, and even marketing aimed at attracting prospective customers and students.

When video lectures are available, students typically use them. For instance, Harley et al. (2003) found that almost all students (95–97%) viewed video lectures at least once. Our motivation for this work is based on the following issues. First, video has become widely used for learning in recent years (Giannakos, 2013): video-based technological tools have been developed, and many educational institutions and digital libraries have incorporated video into their instructional media portfolios. Second, despite the growing number and variety of available video lectures, there is limited understanding of their effectiveness, in terms of how students use and learn from them. Specifically, limited research currently exists regarding guidelines for the use of video lectures and the design of pedagogical systems around them.

Innovative features, such as slide-video separation, social categorization and navigation, and advanced search, have become standard for any state-of-the-art system. Working with video lectures differs in essential ways from working with traditional or even digital textbooks. Video lectures are easy to watch at a normal pace, with extra affordances like fast-forward and rewind serving as useful tools for learners (Giannakos and Vlamos, 2013). While video lectures lack the fonts and typography that allow learners and instructors to emphasize key points and categorize textbook content in many ways, they can provide extra information through video pace, voice tone, expression of emotion, visual cues, and many other forms of social information.

Other innovative social features, like collaborative video annotation and social navigation, have also been used recently in video learning platforms (Risko et al., 2013; Wilk et al., 2013; Torres-Ramírez et al., 2014). Collaborative video annotation allows learners to comment on and discuss specific points of the video lecture, whereas social navigation uses crowdsourcing to collate whole-class watching behavior, showing which fragments were watched most frequently (Risko et al., 2013). As the number of students studying the same topic grows and our ability to capture their learning patterns improves, we can expect even more exciting socially driven technologies to be explored and brought into our everyday lives.

In summary, students use video lectures for a variety of subjective and objective benefits and perceive video technology as a practical learning resource. However, many aspects related to a) students’ video navigation, b) learning performance, and c) attitudes toward video-assisted learning still remain unexplored:

a) Navigation – Are students viewing the entire video lecture? What segments of the video lecture do students select to view, and why? How many times do students view any given video lecture? Which video applications are more attractive or engaging?

b) Learning performance – How do students perform with the assistance of video materials? Is there any relation between students’ viewing and their learning performance?

c) Attitudes – How do students perceive video-assisted learning? Is there any significant shift in their attitudes during a video-assisted course?

To address these critical issues, this study provides a first step towards understanding students’ multi-faceted interactions with video lectures. In particular, this study was designed to assess and make sense of the analytics within video lectures and to investigate the relationship between these analytics and students’ attitudes and learning performance. To do so, we designed and deployed a longitudinal (7-week) study based on our video learning analytics system. Using this system, we collected and analyzed students’: 1) video navigation, 2) learning performance, and 3) attitudes; based on these diverse sources of learning analytics and their interpretation through data triangulation, we provide new information for the further development and refinement of video-assisted courses and practices.

Building and Validating the Video Learning Analytics System

In this section, we present the development and validation of the video learning analytics system (VLAS). The VLAS consists of: (1) the YouTube Application Programming Interface (API), (2) Google App Engine (GAE), and (3) Eclipse (Java), all of which were seamlessly integrated into a flexible architecture (Figure 1). The selected tools (GAE, YouTube, Google accounts) offer multiple benefits. GAE enables the development of web-based applications, as well as the maintenance and administration of traffic and data storage. YouTube allows developers to use its infrastructure (e.g., YouTube videos) and provides a chromeless user interface, a YouTube video player without any controls, thereby facilitating customization within Flash or HTML5. As a result, we used JavaScript to create custom buttons and to implement their functions.

Figure 1
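The VLAS backend itself is Java on GAE; purely as an illustrative sketch of this architecture, a minimal Python/webapp2 version of the server-side logging endpoint might look as follows. The entity fields, button abbreviations, and endpoint path are our assumptions, not the system's actual schema.

```python
# Hypothetical sketch of the logging endpoint (the actual VLAS backend is
# Java on Google App Engine). One datastore entity is stored per button press.
import webapp2
from google.appengine.ext import ndb

class Interaction(ndb.Model):
    user = ndb.StringProperty()        # Gmail address of the signed-in learner
    video_id = ndb.StringProperty()    # YouTube video identifier
    button = ndb.StringProperty()      # assumed abbreviations, e.g. "RP30"/"SK30"
    video_time = ndb.FloatProperty()   # playback position when pressed (seconds)
    logged = ndb.DateTimeProperty(auto_now_add=True)

class LogHandler(webapp2.RequestHandler):
    def post(self):
        """Store one interaction event sent by the custom JavaScript buttons."""
        Interaction(user=self.request.get('user'),
                    video_id=self.request.get('video'),
                    button=self.request.get('button'),
                    video_time=float(self.request.get('t'))).put()

app = webapp2.WSGIApplication([('/log', LogHandler)])
```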

In order to test, validate, and finally improve the system, we conducted an empirical system validation study. The main goal of the user experiment was to collect some first activity data from the learners and to establish a flexible experimental procedure that can be replicated and validated by other researchers. Instead of mining real usage data, we designed a controlled experiment to provide a clean set of data. The experiment took place in a lab with Internet connections, general-purpose computers, and headphones. Twenty-three university students (18-35 years old; 13 female, 10 male) spent approximately ten minutes watching each video. All students had been attending Human-Computer Interaction courses at the Department of Informatics at the postgraduate or undergraduate level and received course credit in the respective courses. Next, a time restriction of five minutes was imposed in order to motivate the users to actively interact with the video analytics system by browsing through the video and answering the respective questions. We enabled the Replay30 and Skip30 buttons, and we informed the users that the purpose of the study was to measure their performance in finding the answers to the questions within time constraints. Although further research could progress to larger-scale studies, the observable connection between the learners’ behavior data and the detection of important segments affirms the system via a clearly replicable process and should provide assurance of the validity of the learning analytics tool.

The questionnaire consisted of very simple questions that could not be answered from the users’ prior knowledge, for instance: “Which are the main research topics?”; “What is the purpose of hackers?”; and “Which is the right order for mixing the ingredients?”. We selected three common types of video, at different levels (research, elementary, technical) of online video learning: 1) university lecture, 2) documentary, and 3) how-to.

In order to present the results of our field study, we used graphs that facilitate visual comparison between 1) the original learner activity segments (LAS), 2) the rich information segments (RIS), and 3) smoothed versions of the LAS (Figure 2, top). Next, we visually compared the smoothed versions of the component and composite time series to the RIS (Figure 2, bottom). We observed that in most cases the Replay30 time series closely matched the RIS. Neither the Skip30 nor the composite time series seemed to match the RIS (Figure 2, bottom).

Figure 2

Therefore, we computed the local maxima of the Replay30 time series for each of the three videos (see Table 1).

Table 1
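As a rough, self-contained illustration of this step, the sketch below smooths a per-second Replay30 activity series with a moving average and scans for its local maxima. The window length and the synthetic input are illustrative assumptions, not the study's parameters or data.

```python
import numpy as np

def smooth_and_find_maxima(counts, window=15):
    """Moving-average smoothing followed by a simple local-maximum scan.

    counts[t] = number of Replay30 presses whose replayed interval covers
    second t of the video; window is an illustrative smoothing width.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(counts, kernel, mode='same')
    maxima = [t for t in range(1, len(smooth) - 1)
              if smooth[t - 1] < smooth[t] >= smooth[t + 1]]
    return smooth, maxima

counts = np.random.poisson(1.0, 600)   # stand-in for ~10 minutes of activity
smooth, maxima = smooth_and_find_maxima(counts)
print(maxima[:5])
```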

Based on the user experiment, we improved the system and transformed it into an online service (http://www.socialskip.org/; see also Chorianopoulos et al., 2014). Users visit the link rather than going through an installation process; if there is an updated version, they simply refresh the page. Additionally, the system’s architecture is modular and allows re-use of its components. Web developers may employ the open-source application logic (https://code.google.com/p/socialskip/) in order to develop new or custom features.

Practically any user with a Google account can be a researcher. To do so, one merely signs in to the service, connects the selected video from YouTube, configures the video player buttons or slider, and pastes in the address of the online survey he/she wants to use (see Figure 3, top). With these few steps, a respective video assessment (or experiment) URL is created (Figure 3, bottom).

Figure 3

The instructor/researcher can then share this URL with students or post it on the course’s blog/wiki page. Hence, this video analytics system makes it simple and feasible for anyone to incorporate video assessments into a syllabus. The URL leads to the learner/user interface of the video analytics system (Figure 4), which employs the utilities (buttons) selected by the researcher.

Figure 4

Learners have the option of signing in with a personal Google account to watch the uploaded videos. In this way, we accomplish user authentication and avoid having to implement a user account system specifically for the application. Users’ interactions are thus recorded and stored according to their Gmail addresses. Each time a user signs into the web-video player application, a new log is created, and whenever a button is pressed, an abbreviation of the button’s name and the time it occurred are stored.

When the researcher terminates a specific experiment, he/she can visit its configuration/management area to 1) download all the collected data, 2) visualize the activity on each video, 3) reconfigure the experiment, and 4) delete it thereafter (Figure 5). These options give the researcher the flexibility to test different activities or functionalities on different groups of students, analyze the results, and develop useful conclusions about how students use and learn from video-based learning systems.

Figure 5

Concerning the visualization capabilities of the system, we opted to use time series to represent learner activity. A time series is a sequence of data points, typically measured at successive points in time and spaced at uniform time intervals. Time series analysis provides methods for extracting meaningful statistics and other characteristics from such data (Hamilton, 1994). Figure 6 illustrates an example of visualizing learner activity via the time series technique; the graph depicts user interactions with the video lecture. In addition to the preselected time series visualizations, the user of the system has the option to download the data set locally in comma-separated values (CSV) format. The researcher can then import the CSV file into a visualization program of preference (e.g., R) for more advanced analysis and graphics.

Figure 6
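For readers who prefer Python to R, a minimal sketch of this export-and-plot workflow follows. The four-column CSV layout and the "RP30" abbreviation are assumptions about the export format, not its documented schema.

```python
import csv
import numpy as np
import matplotlib.pyplot as plt

def replay_series(csv_path, duration):
    """Count, for every second of the video, how many exported Replay30
    events re-covered it (a press at position t re-covers [t-30, t])."""
    series = np.zeros(duration)
    with open(csv_path) as f:
        for user, video_id, button, video_time in csv.reader(f):
            t = int(float(video_time))
            if button == 'RP30':                 # assumed abbreviation
                series[max(0, t - 30):t] += 1
    return series

series = replay_series('socialskip_export.csv', duration=600)
plt.plot(series)
plt.xlabel('video time (s)')
plt.ylabel('replay coverage (presses)')
plt.show()
```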

Methodology

Sampling and Procedures

In our effort to investigate students’ viewing patterns and interests, we conducted a small-scale longitudinal study. The sample consisted of science majors who elected to enroll in an undergraduate reading course. The instructors developed a video lecture syllabus to assist students’ before-class preparation. The video lectures featured native speakers, and the pace and style of all the videos were the same. The video lectures were integrated with appropriate assessments, consisting of questions that could be answered based on the content of the video, and the respective video lectures (with the integrated assessments) were posted to the course wiki for before-class use. Eleven freshmen (18-20 years old; 6 female, 5 male) participated in the study. The course had well-defined learning objectives and content, and the study took place at a public university. The learning objectives of the course were related to the use of information for problem solving, research, and decision making. The course lasted approximately 10 weeks and was enhanced with video lectures to assist students during a 7-week period (the first two weeks and the last one did not involve video lectures). For the distribution and management of the video lectures, we employed our video learning analytics system.

Research Design

The research design of our study is a single-group time series design (Ross and Morrison, 1996), involving repeated measurement of a group with the experimental treatment induced between two of the measures. The single-group time series design can be diagrammed as shown below (1). As depicted, one group (G) is observed (O) and receives the treatment (X) several times.

G O1 X1 O2 X2 O3 (1)

Our study consists of seven repeated treatments and seven measures of the video analytics (navigation) and learning performance, as well as pre-post measurement of students’ attitudes.

Measures

We incorporated a respective assessment into each video lecture, consisting of questions that could be answered based on the content of the video lecture. The video-assessment integration prompted higher-level cognitive engagement and urged students to navigate the video lecture and engage with it more deeply.

In addition to the data collected via the assessments and the video learning analytics system (students’ navigation), we integrated a short questionnaire into the first (pre) and the last (post) video assessment. The questionnaire included measures of the 1) ease of use, 2) control, 3) intention to use, and 4) usefulness of the video-assisted course. Table 2 lists the operational definitions and the number of items (questions) for each construct (measure), as well as the source from which the measures were adopted. We employed a 7-point Likert scale anchored from 1 (“completely disagree”) to 7 (“completely agree”).

Table 2

In summary, the data collection of our study can be divided into three basic categories:

a) Students’ video navigation (collected via the video learning analytics system),

b) Students’ learning performance/score (collected via the assessments integrated with the system), and

c) Students’ attitudes toward the video-assisted course (collected via the questionnaires integrated with the system).

Data Analysis

As mentioned above, the collected data consist of three different types; therefore, an appropriate analysis was used for each set of data. For students’ video navigation, we used the aggregated time series visualization in order to identify the peaks of students’ video views (global maxima). Afterwards, we investigated any potential relation between students’ views and their learning performance on the respective video-based assessments. At the end of our analysis, we watched the segment at the global peak of each video and attempted to explain why those segments are so important for students.
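The "level of importance" used in the findings below appears to be the aggregate view series scaled to [0, 1], with the global maximum as its peak; a minimal sketch under that assumption:

```python
import numpy as np

def importance(series):
    """Scale an aggregate view-count series to [0, 1]."""
    s = np.asarray(series, dtype=float)
    return s / s.max() if s.max() > 0 else s

aggregate_views = np.random.poisson(2.0, 600)   # placeholder for real log data
imp = importance(aggregate_views)
peak_second = int(np.argmax(imp))               # the global maximum of the video
print(peak_second, imp[peak_second])
```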

Regarding students’ learning performance, we captured all students’ assessment scores and mapped them in a week-by-week diagram; in this way we were able to trace students’ progress throughout the video-assisted course. In addition, we divided the video assignments into those with low and those with high scores (based on students’ learning performance) and then examined any differences in students’ video navigation.
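A minimal sketch of this week-by-week aggregation (the scores are placeholders, not the study's data):

```python
import numpy as np

# One row per week of the 7-week period, one column per student (placeholders).
scores = np.array([
    [55, 60, 48, 70, 52, 65, 58, 50, 62, 57, 49],   # week 1
    [60, 63, 55, 72, 58, 68, 61, 57, 66, 60, 56],   # week 2
    [68, 70, 64, 75, 66, 72, 69, 65, 71, 68, 66],   # week 3 (and so on)
])

weekly_mean = scores.mean(axis=1)
weekly_std = scores.std(axis=1, ddof=1)   # sample standard deviation per week
print(weekly_mean, weekly_std)
```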

In order to identify any potential shift in students’ attitudes during the video-assisted course, we used t-tests between the pre- and post-results of the attitudinal questionnaires. Hence, we were able not only to capture students’ attitudes toward the video-assisted course but also to identify any potential shift during the course.
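The t(20) statistics reported in the findings are consistent with an independent-samples comparison of the 11 pre and 11 post responses (df = 11 + 11 - 2 = 20); a sketch with placeholder ratings:

```python
from scipy.stats import ttest_ind

# Placeholder 7-point ratings on one construct, one value per student;
# not the study's data.
pre_ease = [6, 7, 6, 5, 7, 6, 6, 7, 6, 7, 6]
post_ease = [6, 6, 7, 6, 6, 7, 5, 7, 6, 6, 7]

t, p = ttest_ind(pre_ease, post_ease)   # df = 11 + 11 - 2 = 20
print('t(20) = %.2f, p = %.3f' % (t, p))
```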

Research Findings

By visualizing the students’ activity using graphs, we reached the conclusion that there is a positive relation between students’ learning performance scores and their views of the respective video assessment. As we can see from Figure 7, in videos where students exhibited low scores, they had fewer repeated views, as reflected in a low level of importance (below 0.5 most of the time). On the other hand, in videos where students exhibited high scores (e.g., Figure 7, right), they had many repeated views, as reflected in a high level of importance (almost always above 0.5). Hence, our data suggest that video lecture production, which results in different video navigation patterns, affects students’ learning performance, and that “attractive” videos result in better learning outcomes.

In trying to identify the “attractive” (frequently viewed) video segments, we found that activity peaks (global and local maxima) occurred at the video segments containing information related to the answers of the assessment and at the segments where the presenter was giving the solution to the respective problem. Thus, the main quality of the “attractive” video segments was the rich amount of useful information and knowledge they transferred.

Figure 7

However, it was still unclear why students found some video segments extremely important (global maxima). In order to provide an explanation, we went through the seven video activity graphs (seven video lectures were offered) and located the global maxima (see Figure 8). We then reviewed those segments in relation to the rest of the video and the local maxima.

Figure 8

Through this explorative investigation, we reached the conclusion that there is a correspondence between the level of cognition/thinking each question required and the size of the respective peak. Using the revised Bloom’s taxonomy (see Table 3) (Anderson et al., 2001), we observed that all the global maxima occurred at questions that required higher-order thinking/cognitive skills. This can be explained by the fact that similar studying behavior occurs with traditional or even digital textbooks, where students re-read the passages that require higher-order thinking/cognitive skills.

Table 3
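The reported correspondence is qualitative; one way to quantify it would be a rank correlation between each question's revised-Bloom level and the height of its matching activity peak. The values below are illustrative placeholders, not the study's data.

```python
from scipy.stats import spearmanr

# One revised-Bloom level (1 = remember ... 6 = create) and one normalized
# activity-peak height per assessed question (illustrative values only).
bloom_level = [2, 4, 3, 5, 4, 6, 5]
peak_height = [0.40, 0.72, 0.55, 0.90, 0.78, 1.00, 0.85]

rho, p = spearmanr(bloom_level, peak_height)
print('rho = %.2f, p = %.3f' % (rho, p))
```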

At the end of the course, we collected students’ assessment scores (Figure 9) and noticed that students had the lowest score on the first assessment and the highest score on the last one. In addition, the last assessment had the smallest standard deviation. Based on these results, we can report that after the first three video assessments, students exhibited better and more robust (lower standard deviation) scores. Given that all the video assessments had the same difficulty, since the answers were always within the video and presented in a similar way, we can assume that the video-assisted process became more familiar after the third week and that students adjusted their studying accordingly. In addition, the fact that the standard deviations shrank supports the notion that low performers benefited most, which is also in alignment with the instructors’ notes and comments.

Figure 9

Regarding students’ attitudes toward the video-assisted course, we assessed them with an attitudinal questionnaire (Table 2). Students expressed high Control and Usefulness (6.5/7) during the video enhancement. Additionally, they expressed a slightly lower (though still very high) level on the Easy to Use (6.27/7) quality of the video enhancement and on their Intention to Use (6.35/7) it in the future. High levels on these constructs indicate positive views concerning the usability, control, and usefulness of the video-assisted course.

To examine any potential shift in students’ attitudes during the video-assisted course, we used t-tests between the pre- and post-scores of the attitudinal questionnaires. As we can see from Figure 10, students’ responses were at very high levels in both the pre and the post questionnaires. The t-tests showed no significant difference for any of our constructs: Easy to Use, t(20) = 0.59, p > .05; Control, t(20) = -0.36, p > .05; Intention to Use, t(20) = 0.18, p > .05; Usefulness, t(20) = 1.09, p > .05. Consequently, there was no shift in students’ attitudes over the 7-week period; it can therefore be concluded that the video-assisted course was considered stably useful, usable, and well received by the students.

Figure 10

Discussion and Conclusions

Millions of learners enjoy video streaming from different platforms (Coursera, Khan Academy, edX, Udacity, Iversity, FutureLearn), creating billions of simple interactions. These data might be converted into useful information for the benefit of all video learners. As the practice of learning through videos on web-based systems increases, more and more interactions are going to be gathered. Dynamic analysis of this wealth of data will allow us to better understand learners’ experience. In addition, the combination of richer user profiles and content metadata will provide opportunities for adding value to the data obtained from video-assisted learning.

Although many corporations and academic institutions are making lecture videos and seminars available online, there have been few and scattered research efforts (e.g., Traphagan et al., 2010) to understand and leverage actual learner experience. Moreover, Kim et al. (2014) have provided analytics for millions of interactions produced by thousands of MOOC students. Nevertheless, video analytics in controlled conditions provide more than massive amounts of video interaction data, allowing us to make richer interpretations, between-group comparisons, and investigations into hybrid learning settings. To the best of our knowledge, there are currently no efforts triangulating diverse data, such as interactions with the system and students’ performance and attitudes, in order to derive valuable information about how students use and ultimately learn via video systems. In addition, video materials can be used in blended settings, SPOCs, and flipped classrooms to supplement and improve the classroom experience. Many flipped classroom and SPOC learning processes and experiments will be deployed in the near future; some of them will probably succeed, and others will fail. Regardless of each future success or failure, the sensemaking, pluralism, and triangulation of the collected analytics will allow us to form a more precise and realistic view of each learning intervention and experiment.

As a first step in this direction for video learning analytics, we have presented a video learning analytics system and the first results from the captured data. The system is open-source, web-based, and accessible to anyone who wants to design his/her own experiments on student learning. The major practical implication of this study is the open-source video learning analytics system itself. Specifically, our empirical results indicate that the system is easy to use, helpful and applicable to any viewer, and smoothly incorporates any video lecture from YouTube and tests from Google Drive. The fact that the system is available for further improvement and experimentation makes its practical uses, implications, and perspectives even greater.

This study has additional implications for theory and practice. First, our study presents a clear incentive for instructors to adopt video-assisted learning practices in their teaching: students’ positive attitudes and the slight increase in their performance demonstrate the benefits of providing video lectures throughout a course. In addition, the correspondence between the level of cognition/thinking each question required and the size of the respective peak (based on the “wisdom of the crowd”) has implications for designers, developers, and video lecture creators. Taking this correspondence into account, video-assisted learning platforms and video lectures can be developed to provide extra affordances, like annotations, a slower pace, or even extra visual information, in these particular video segments.

In our study, we investigated students’ video navigation and explained how video production affects students’ learning performance in a video-assisted course. We also identified the correspondence between the level of cognition/thinking each video segment requires and the size of the respective student activity peak. Last but not least, we presented students’ progress throughout the video-assisted course and examined students’ attitudes regarding the easiness, usability, usefulness, and acceptance of the course.

We want to emphasize that our findings are clearly preliminary, with inevitable limitations. One important limitation is the small scale of the study (11 students); however, capturing and analyzing the experiences of eleven students over a long period of time allows us to understand how students use the respective materials. Furthermore, collecting repeated interactions mitigates the limitation of the small sample size. Our future research will concentrate on further refinement of the proposed system and analysis by applying and evaluating it in larger classes. This study can serve as a springboard for other scholars and practitioners to further examine the efficacy of video assignments in general and of this specific tool in particular. For those interested in video-assisted learning approaches like flipping the classroom, this is an established, flexible tool that can be used and adapted to meet their needs.

Our future work will focus on collecting and triangulating analytics from more sources. Taking into account learners’ interactions alongside much other data, such as students’ demographic characteristics (gender, ethnicity, English-language skills, prior education and background knowledge), their success rate in each section, their emotional states, the speed at which they submit their answers, and which video lectures seemed to help which students best in which sections, will open new avenues for significant research. Collecting the knowledge and experience of multiple learners will allow us to understand how students learn with the assistance of video lectures and then to feed powerful algorithms that create seemingly personalized feedback. Such diverse data (e.g., success rates, emotional states) will allow the community to address the challenges of developing more personalized and effective video learning systems. These advancements will lead to several interventions (e.g., to prevent drop-out) or to adaptive services and curricula. As the number of students grows and our ability to capture diverse analytics increases, we should expect even more exciting student-centered technologies to be explored and brought into learning.

Acknowledgements

The authors wish to thank the participants of the study who kindly spent their time and effort. We also want to thank Ioannis Leftheriotis for drawing Figure 1.

References

Anderson, L. W., Krathwohl, D. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (complete ed.). New York: Longman.

Brotherton, J., & Abowd, G. (2004). Lessons learned from eClass: Assessing automated capture and access in the classroom. ACM Transactions on Computer-Human Interaction, 11(2), 121–155.

Chen, C. M., & Wu, C. H. (2014). Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. Computers & Education. doi:10.1016/j.compedu.2014.08.015

Chorianopoulos, K., Giannakos, M. N., Chrisochoides, N., & Reed, S. (2014). Open service for video learning analytics. In Proceedings of the 14th IEEE International Conference on Advanced Learning Technologies (ICALT) (pp. 28–30). IEEE Press.

Day, J., Foley, J. (2006). Evaluating a web lecture intervention in a human–computer interaction course. IEEE Transactions on Education, 49(4), 420-431.

Dhonau, S., & McAlpine, D. (2002). “Streaming” best practices: Using digital video-teaching segments in the FL/ESL methods course. Foreign Language Annals, 35(6), 632–636.

Donkor, F. (2011). Assessment of learner acceptance and satisfaction with video-based instructional materials for teaching practical skills at a distance. The International Review of Research in Open and Distance Learning, 12(5), 74–92. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/953/1859

Evans, C. (2008). The effectiveness of m-learning in the form of podcast revision lectures in higher education. Computers & Education, 50(2), 491–498.

Fox, A. (2013). From MOOCs to SPOCs. Communications of the ACM, 56(12), 38–40.

Giannakos, M.N., Chorianopoulos, K., Ronchetti, M., Szegedi, P., Teasley, S.D. (2013). Analytics on video-based learning. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK ‘13), ACM, 283–284.

Giannakos, M. N. (2013). Exploring the video-based learning research: A review of the literature. British Journal of Educational Technology, 44(6), E191–E195.

Giannakos, M. N., & Vlamos, P. (2013). Educational webcasts’ acceptance: Empirical examination and the role of experience. British Journal of Educational Technology, 44, 125–143.

Hamilton, J. (1994). Time series analysis. Princeton: Princeton University Press.

Hannafin, M. J. (1984). Guidelines for using locus of instructional control in the design of computer-assisted instruction. Journal of Instructional Development, 7(3), 6–10.

Harley, D., Henke, J., Lawrence, S., McMartin, F., et al. (2003). Costs, culture, and complexity: An analysis of technology enhancements in a large lecture course at UC Berkeley. Retrieved from http://repositories.cdlib.org/cshe/CSHE3-03

Harris, H. and Park, S. (2008). Educational usages of podcasting. British Journal of Educational Technology, 39, 548–551.

Heilesen, S.B. (2010). What is the academic efficacy of podcasting? Computers & Education, 55(3), 1063–1068.

Jadin, T., Gruber, A., Batinic, B. (2009). Learning with e-lectures: the meaning of learning strategies. Educational Technology & Society, 12(3), 282–288.

Jarvis, C., Dickie, J. (2009). Acknowledging the ‘forgotten’ and the ‘unknown’: The role of video podcasts for supporting field-based learning. Planet, 22, 61–63.

Kazlauskas, A., Robinson, K. (2012). Podcasts are not for everyone. British Journal of Educational Technology, 43, 321–330.

Kim, J., et al. (2014). Understanding in-video dropouts and interaction peaks in online lecture videos. In Proceedings of the First ACM Conference on Learning @ Scale (L@S ’14) (pp. 31–40). ACM.

Lee, B.-C., Yoon, J.-O., & Lee, I. (2009). Learners’ acceptance of e-learning in South Korea: Theories and results. Computers & Education, 53(4), 1320–1329.

Leijen, A., Lam, I., Wildschut, L., Simons, P. R. J., & Admiraal, W. (2008). Streaming video to enhance students’ reflection in dance education. Computers & Education, 52(1), 169–176.

Ljubojevic, M., Vaskovic, V., Stankovic, S., & Vaskovic, J. (2014). Using supplementary video in multimedia instruction as a teaching tool to increase efficiency of learning and quality of experience. The International Review of Research in Open and Distance Learning, 15(3). Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/1825

Maag, M. (2006). Podcasting and MP3 players: Emerging education technologies. CIN: Computers, Informatics, Nursing, 24(1), 9–13.

McCombs, S., Liu, Y. (2007). The efficacy of podcasting technology in instructional delivery. International Journal of Technology in Teaching and Learning, 3(2), 123–134.

McGreal, R., Sampson, D., Sunder Krishnan, M., Chen, N. S., & Kinshuk (2012). The open educational resources (OER) movement: Free learning for all students. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT 2012) (pp. 748–751). IEEE Press.

Ngai, E. W. T., Poon, J. K. L., & Chan, Y. H. (2007). Empirical examination of the adoption of WebCT using TAM. Computers & Education, 48(2), 250–267.

O’Bryan, A., Hegelheimer, V. (2007). Integrating CALL into the classroom: The role of podcasting in an ESL listening strategies course. ReCALL, 19(2), 162–180.

Risko, E. F., Foulsham, T., Dawson, S., & Kingstone, A. (2013). The collaborative lecture annotation system (CLAS): A new tool for distributed learning. IEEE Transactions on Learning Technologies, 6(1), 4–13.

Roehl, A., Reddy, S.L., Shannon, G.J. (2012). The flipped classroom: An opportunity to engage millennial students through active learning strategies. Journal of Family & Consumer Sciences, 105(2), 44-49.

Ross, S. M., & Morrison, G. R. (1996). Experimental research methods. Handbook of research for educational communications and technology: A project of the association for educational communications and technology, 1148-1170.

Shih, H. (2008). Using a cognitive-motivation-control view to assess the adoption intention for Web-based learning. Computers & Education, 50(1), 327–337.

Siemens, G. (2011). Learning analytics: Foundation for informed change in higher education. Retrieved from http://www.slideshare.net/gsiemens/learning-analytics-educause

Torres-Ramírez, M., García-Domingo, B., Aguilera, J., & De La Casa, J. (2014). Video-sharing educational tool applied to the teaching in renewable energy subjects. Computers & Education, 73, 160-177.

Traphagan, T., Kusera, J.V., Kishi, K. (2010). Impact of class lecture webcasting on attendance and learning. Educational Technology Research and Development, 58(1), 19–37.

Ullrich, C., Shen, R., & Xie, W. (2013). Analyzing student viewing patterns in lecture videos. In Proceedings of the 13th IEEE International Conference on Advanced Learning Technologies (ICALT 2013) (pp. 115–117). IEEE.

Van Zanten, R., Somogyi, S., Curro, G. (2012). Purpose and preference in educational podcasting. British Journal of Educational Technology, 43(1), 130–138.

Wilk, S., Kopf, S., & Effelsberg, W. (2013). Social video: A collaborative video annotation environment to support e-learning. In World Conference on Educational Multimedia, Hypermedia and Telecommunications, Vol. 2013, No. 1, 1228-1237.

© Giannakos, Chorianopoulos, Chrisochoides