
TalkMiner: A Lecture Video Search Engine

2014

The design and implementation of a search engine for lecture webcasts is described. A searchable text index is created allowing users to locate material within lecture videos found on a variety of websites such as YouTube and Berkeley webcasts. The searchable index is built from the text of presentation slides appearing in the video along with other associated metadata such as the title and abstract when available. The automatic identification of distinct slides within the video stream presents several challenges. For example, picture-in-picture compositing of a speaker and a presentation slide, switching cameras, and slide builds confuse basic algorithms for extracting keyframe slide images. Enhanced algorithms are described that improve slide identification. A public system was deployed to test the algorithms and the utility of the search engine at www.talkminer.com. To date, over 17,000 lecture videos have been indexed from a variety of public sources.
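The abstract notes that camera switches, slide builds, and picture-in-picture compositing confuse basic keyframe extraction. The paper's actual algorithms are not reproduced here, but the core idea of accepting a slide change only once the new frame proves stable can be sketched as follows. This is a minimal illustration assuming frames arrive as flat lists of grayscale pixel values; a real system would decode video frames first (e.g. with OpenCV), and the threshold and stability window are made-up parameters.

```python
# Hypothetical sketch of stable-slide detection by frame differencing.
# Frames are modeled as flat lists of grayscale pixel values; thresholds
# are illustrative, not taken from the TalkMiner paper.

def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_slide_changes(frames, threshold=20.0, stability=2):
    """Return indices of frames that start a new, stable slide.

    A change is accepted only if the next `stability` frames stay similar
    to the candidate, filtering out transient events such as a camera
    switch or a speaker moving inside a picture-in-picture overlay.
    """
    keyframes = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[i], frames[keyframes[-1]]) > threshold:
            stable = all(
                frame_diff(frames[i], frames[j]) <= threshold
                for j in range(i + 1, min(i + 1 + stability, len(frames)))
            )
            if stable:
                keyframes.append(i)
    return keyframes
```

The stability window is what distinguishes a genuine slide transition (the new content persists) from a brief camera cut back to the speaker (the difference vanishes again).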

Survey on Content-Based Lecture Video Retrieval By Text

E-teaching is becoming popular nowadays, and as a result the amount of lecture video data on the World Wide Web has grown enormously. Lecture videos contain text information in the visual channel (the presentation slides) and in the audio channel (the lecturer's speech). A more effective method for retrieving videos within large lecture video archives is therefore needed. In this paper, we present an approach for automated indexing and search of videos in such archives. First, we apply automatic video segmentation, which fragments the video into a number of frames, and then key-frame detection to offer a visual guideline for navigating the video content. Subsequently, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR stage detects and transcribes slide text, from which keywords are extracted for content-based video browsing and search.
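The keyword-extraction step described above can be sketched with a simple frequency ranking over the OCR'd slide texts. This is an illustrative stand-in, not the survey's method: the stopword list, minimum token length, and scoring are all assumptions, and it presumes the OCR stage has already produced one text string per key-frame.

```python
# Minimal sketch of keyword extraction from OCR'd slide text.
# Stopword list and scoring are illustrative, not from the paper.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def extract_keywords(ocr_texts, top_k=5):
    """Rank non-stopword terms across all key-frame texts by frequency."""
    counts = Counter()
    for text in ocr_texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS and len(token) > 2:
                counts[token] += 1
    return [term for term, _ in counts.most_common(top_k)]
```

A production system would likely weight terms by slide position and font size rather than raw frequency, but the pipeline shape (OCR text in, ranked keywords out) is the same.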

Content Based Lecture Video Retrieval Using OCR and ASR techniques

Conventional video retrieval is based on available data such as speech recognition of a person's voice. In this paper we present an approach for automated video indexing and video search in large lecture video archives. Lecture videos contain text information in both the visual and audio channels: the presentation slides and the lecturer's speech. To extract the visual information, we apply video content analysis to detect slides and Optical Character Recognition (OCR) to obtain their text; Automatic Speech Recognition (ASR) is used to extract spoken text from the recorded audio. By applying OCR to key-frames and ASR to lecture audio tracks, we extract textual metadata. For keyword extraction, by which both video- and segment-level keywords are obtained for content-based video browsing and search, the OCR and ASR transcripts as well as the detected slide text line types are used.

I. INTRODUCTION

So that students can access videos independent of time and location, a number of universities and research institutions are recording their lectures and publishing them online. Due to rapid development in recording technology, improved video compression techniques, and high-speed networks over the last few years, capturing and recording live lecture presentations has become very popular in universities; e-lecturing lets students quickly access and review the presentations they need, independent of location and time. As a result, there has been a huge increase in the amount of multimedia data on the Web, and without a search function it is nearly impossible for a user to find a desired video within a video archive.
Even when the user has found related video data, it is still difficult to judge whether a video is useful by glancing only at the title and other global metadata, which are often brief and high-level. By applying appropriate analysis techniques, we automatically extract metadata from both the visual and audio resources of lecture videos; this metadata can guide both visually oriented and text-oriented users in navigating within a lecture video. For evaluation purposes, we developed several automatic indexing functionalities in a large lecture video portal, and we conducted a user study to verify the research hypothesis and to investigate the usability and effectiveness of the proposed video indexing features. For visual analysis, we propose a new method for slide video segmentation and apply video OCR to gather text metadata. Furthermore, the lecture outline is extracted from the OCR transcripts using stroke width and geometric information, and a search function has been developed based on the structured video text. To fill a gap in the open-source ASR domain, we propose a solution for automatic German phonetic dictionary generation; the dictionary software and compiled speech corpus are provided for further research use.
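The segment-level search the paper describes, where keywords come from both the OCR (slide) and ASR (speech) channels, can be illustrated with a small inverted index. This is a hedged sketch under assumed data shapes (each segment is a dict carrying pre-extracted `ocr` and `asr` keyword lists); the field names and AND-query semantics are hypothetical, not taken from the paper.

```python
# Illustrative segment-level inverted index over OCR and ASR keywords.
# Data layout and query semantics are assumptions for this sketch.
from collections import defaultdict

def build_index(segments):
    """Map each keyword to the (video_id, segment_id) pairs containing it.

    OCR and ASR keywords are merged, so a query matches a segment whether
    the term appeared on a slide or was only spoken.
    """
    index = defaultdict(set)
    for seg in segments:
        for term in set(seg["ocr"]) | set(seg["asr"]):
            index[term.lower()].add((seg["video"], seg["segment"]))
    return index

def search(index, query):
    """Return segments containing every query term (AND semantics)."""
    terms = [t.lower() for t in query.split()]
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for t in terms[1:]:
        results &= index.get(t, set())
    return results
```

Merging the two channels before indexing is the simplest design; keeping them separate would instead allow channel-aware ranking (e.g. boosting slide-text matches over speech matches).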

Review on Content-Based Video Lecture Retrieval

Recent advances in multimedia technologies allow video data to be captured and stored with relatively inexpensive computers. Furthermore, the new possibilities offered by the information highways have made a large amount of video data publicly available. However, without appropriate search techniques all these data are hardly usable. Users are not satisfied with video retrieval systems that provide only analogue VCR functionality; for example, a user analysing a soccer video will ask for specific events such as goals. Content-based search and retrieval of video data has therefore become a challenging and important problem, and there is a significant need for tools that can manipulate video content in the same way that traditional databases manage numeric and textual data. A more efficient method for video retrieval on the WWW, or within large lecture video archives, is urgently needed. This project presents an approach for automated video indexing and video search in large lecture video archives. First of all, we apply automatic video segmentation and key-frame detection to offer a visual guideline for navigating the video content. Subsequently, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to key-frames and Automatic Speech Recognition (ASR) to lecture audio tracks.
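The content-based search that this review calls for ultimately means ranking segments, not just matching them. As a complement to the indexing steps above, here is a minimal, assumed scoring scheme that orders segments by how many query terms their extracted keywords cover; the data shape and tie-breaking rule are illustrative, not from any of the reviewed papers.

```python
# Hedged sketch: rank segments by query-term overlap with their keywords.
# Scoring scheme is illustrative only.

def rank_segments(segment_keywords, query_terms):
    """Score each segment by the number of query terms its keywords contain.

    `segment_keywords` maps a segment id to its keyword list; results are
    sorted by descending score, then by segment id for determinism.
    """
    query = [t.lower() for t in query_terms]
    scored = []
    for seg_id, keywords in segment_keywords.items():
        kw = {k.lower() for k in keywords}
        score = sum(1 for t in query if t in kw)
        if score:
            scored.append((seg_id, score))
    return sorted(scored, key=lambda p: (-p[1], p[0]))
```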