
Kenny Davila - Lecture Video Analysis and its Applications

January 27, 2022 @ 10:30 am MST

Kenny Davila
Assistant Professor
Universidad Tecnológica Centroamericana




Kenny Davila received his bachelor's degree in Computing Systems Engineering (2009) from Universidad Tecnológica Centroamericana (UNITEC) in Tegucigalpa, Honduras. He received his M.Sc. in Computer Science (2013) and Ph.D. in Computing and Information Sciences (2017) from the Rochester Institute of Technology (RIT) in Rochester, NY, USA. From 2017 to 2020, he worked as a postdoctoral associate at the Center for Unified Biometrics and Sensors (CUBS) at the University at Buffalo. He is currently a full-time research and teaching faculty member at UNITEC.

His research interests include pattern recognition, computer vision, and information retrieval. His main research work relates to lecture video analysis, chart mining, and mathematical information retrieval. He has created and released open-source tools for labeling and evaluating lecture video summarization approaches (the AccessMath and LectureMath datasets). He also co-organized the first and second editions of the CHART-Infographics competition at ICDAR 2019 and ICPR 2020, and developed the tools used for chart annotation. He is a co-author of award-winning papers at ICFHR 2018, ICDAR 2019, and CBDAR 2019.


The recording and sharing of educational and lecture videos have increased in recent years. Among these recordings are many math-oriented lectures and tutorials that attract students of all levels. Many of the topics they cover are best explained with handwritten content on whiteboards or chalkboards, so a large number of lecture videos feature the instructor writing on a surface. In this talk, I will discuss previous and current methods for extracting the handwritten content found in such videos. Our most recent method is based on a deep convolutional network, FCN-LectureNet, which extracts the handwritten content from the video as binary images. These are further analyzed to identify the unique and stable units of content, producing a spatio-temporal index of the handwritten content. This index can support advanced applications such as extractive summarization, content-based search, retrieval, and navigation of lecture videos. Our code and data are publicly available.
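The pipeline described above (binarize frames to isolate handwritten strokes, then keep only content that persists across frames) can be sketched in a few lines. This is a hedged, simplified illustration only: it substitutes plain intensity thresholding for the FCN-LectureNet network described in the talk, and the function names, the `min_persistence` parameter, and the synthetic frames are all illustrative assumptions, not the authors' actual code.

```python
# Simplified sketch of the handwritten-content pipeline: binarize each
# frame, then retain pixels that persist across enough frames to filter
# out transient occlusions (e.g., the instructor's hand or body).
# NOTE: the real method uses a deep network (FCN-LectureNet) for
# binarization; plain thresholding here is a stand-in assumption.
import numpy as np

def binarize(frame, threshold=128):
    """Mark dark pixels (ink on a light board) as foreground (1)."""
    return (frame < threshold).astype(np.uint8)

def temporal_index(frames, threshold=128, min_persistence=3):
    """Keep pixels that are foreground in at least `min_persistence`
    frames, yielding a stable-content mask for indexing."""
    counts = np.sum([binarize(f, threshold) for f in frames], axis=0)
    return (counts >= min_persistence).astype(np.uint8)

# Synthetic demo: five blank 8x8 "whiteboard" frames.
frames = [np.full((8, 8), 255, dtype=np.uint8) for _ in range(5)]
for f in frames:
    f[2, 2] = 0          # a persistent handwritten stroke
frames[1][5, 5] = 0      # a transient dark pixel (occlusion)

index = temporal_index(frames)
# index[2, 2] == 1 (stable content kept), index[5, 5] == 0 (transient dropped)
```

In the full system, stable regions like this mask would be segmented into content units and stamped with the time intervals in which they appear, forming the spatio-temporal index used for summarization and search.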