Speech and sliding text aided sign retrieval from hearing impaired sign news videos
The objective of this study is to automatically extract annotated sign data from broadcast news recordings for the hearing impaired. These recordings are an excellent source for automatically generating annotated data: in news for the hearing impaired, the speaker signs with the hands as she talks. In addition, corresponding sliding text is superimposed on the video. The video of the signer can be segmented with the help of either the speech alone or both the speech and the sliding text, producing segmented and annotated sign videos. We call this application Signiary and aim to use it as a sign dictionary, where users enter a word as text and retrieve videos of the corresponding sign. The application can also be used to automatically create annotated sign databases for training recognizers.