Devon Bates - Academia.edu

Devon Bates

Papers by Devon Bates

Smart Autonomous Vehicle in a Scaled Urban Environment

Autonomous vehicles are of increasing interest to researchers. However, analysis of full-scale autonomous vehicle technology is costly. The focus of this project is the design of an autonomous control system such that a 1/14-scale vehicle (RC MAN TGX 26.540 6x4 XLX) can navigate a proportionally scaled roadway. The top-level behavioral objective is for the vehicle to approach an intersection, halt at the stop line, execute a right turn, and stay within the lane lines at all times. A camera module (OV7670) is to be interfaced with two digital signal processors (TMS320C5515) to perform the image processing. The primary controller is implemented using a microcontroller (Stellaris LM4F120), whose output is received by a secondary controller for motor interfacing. The TMS320C5515 communicates with the Stellaris LM4F120 through an inter-integrated circuit (I2C) bus. Lane detection is implemented on the TMS320C5515 using Canny/Hough estimation of vanishing points to generate a steering angle of correction.
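The final step of the pipeline described above (vanishing point to steering angle) can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: it assumes the Canny/Hough stage has already produced one line segment per lane boundary, and it assumes a pinhole camera with a hypothetical 60° horizontal field of view to convert the vanishing point's pixel offset into an angle.

```python
import math

def line_intersection(l1, l2):
    """Intersect two lines, each given by endpoints (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("lines are parallel")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return px, py

def steering_angle(lane_left, lane_right, img_width, fov_deg=60.0):
    """Steering correction (degrees) from the lane lines' vanishing point.

    Positive means the vanishing point lies right of the image center,
    i.e. steer right. fov_deg is an assumed horizontal field of view.
    """
    vx, _vy = line_intersection(lane_left, lane_right)
    offset = vx - img_width / 2.0  # pixels right of image center
    # Pinhole model: focal length in pixels from the assumed field of view.
    focal_px = (img_width / 2.0) / math.tan(math.radians(fov_deg / 2.0))
    return math.degrees(math.atan2(offset, focal_px))
```

With symmetric lane lines the vanishing point sits at the image center and the correction is zero; a vanishing point right of center yields a positive (rightward) correction.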

Multi-Modal Analysis of Movies for Rhythm Extraction

This paper motivates a multi-modal approach to analyzing the aesthetic elements of films through the integration of visual and auditory features. Prior work on characterizing the aesthetic elements of film has focused predominantly on visual features. We present a comparison of analyses from multiple modalities on a rhythm-extraction task: for the detection of important events based on a model of rhythm/tempo, we compare visual features against auditory features. We introduce an audio tempo function that characterizes the pacing of a video segment and compare it with its visual-pace counterpart. Preliminary results indicate that an integrated approach can reveal more semantic and aesthetic information from digital media. By combining information from the two signals, tasks such as the automatic identification of important narrative events can enable deeper analysis of large-scale video corpora.
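The abstract does not specify how the audio tempo function is defined; the sketch below is one plausible stand-in, not the authors' method. It assumes a simple energy-based onset novelty curve whose peak rate (onsets per second) serves as the pacing estimate; the frame and hop sizes are arbitrary illustrative choices.

```python
import numpy as np

def audio_tempo(signal, sr, frame_len=1024, hop=512):
    """Rough pacing estimate for an audio segment: onset peaks per second.

    Illustrative stand-in for the paper's audio tempo function. Computes a
    short-time energy envelope, half-wave-rectifies its first difference to
    get an onset novelty curve, then counts novelty peaks above an adaptive
    threshold (mean + one standard deviation).
    """
    signal = np.asarray(signal, dtype=float)
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy = np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    # Half-wave rectified rise in energy: responds only to onsets, not decays.
    novelty = np.maximum(np.diff(energy), 0.0)
    if novelty.size < 3:
        return 0.0
    thresh = novelty.mean() + novelty.std()
    # A peak is a local maximum of the novelty curve above the threshold.
    peaks = np.sum((novelty[1:-1] > thresh)
                   & (novelty[1:-1] >= novelty[:-2])
                   & (novelty[1:-1] >= novelty[2:]))
    duration_s = len(signal) / sr
    return peaks / duration_s  # onsets per second
```

On a synthetic click track with two clicks per second, this estimate returns roughly 2.0; on real film audio it would track the density of percussive or speech onsets, one reasonable proxy for pacing.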

