Signal Detection Research Papers - Academia.edu

Fingerprinting is one of the most widely used methods for locating devices in indoor wireless environments, and several positioning systems for indoor environments based on this approach have emerged. However, additional effort is required to improve the performance of these systems so that applications that depend heavily on user location can provide better services to their users. In this work we discuss improvements to the positioning accuracy of fingerprint-based systems. Our algorithm ranks location information hierarchically by identifying the building, the floor, the room and the geometric position. The proposed fingerprint method uses a previously stored map of the signal strength at several positions and determines the position using similarity functions and majority rules. In particular, we compare different similarity functions to understand their impact on the accuracy of the positioning system. The experimental results confirm that the building, the floor and the room where a person or object is located can be determined correctly at high rates, with an average error of around 3 meters. Moreover, detailed statistics about the errors are provided, showing that the average error metric, often used by many authors, hides many aspects of system performance.
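
The ranking-plus-majority-vote idea described above can be sketched as follows. The radio map, access-point count, and similarity functions are illustrative assumptions, not the paper's actual data or metric.

```python
import math
from collections import Counter

# Hypothetical radio map: (RSSI vector over 3 APs, (building, floor, room)).
# Values are invented for illustration, not taken from the paper.
RADIO_MAP = [
    ([-40, -70, -80], ("B1", 1, "101")),
    ([-42, -68, -79], ("B1", 1, "101")),
    ([-55, -50, -75], ("B1", 1, "102")),
    ([-80, -45, -60], ("B1", 2, "201")),
    ([-82, -44, -58], ("B1", 2, "201")),
]

def neg_euclidean(a, b):
    return -math.dist(a, b)

def neg_manhattan(a, b):
    return -sum(abs(x - y) for x, y in zip(a, b))

def locate(sample, k=3, similarity=neg_euclidean):
    """Rank fingerprints by similarity and take a majority vote over the
    k best matches for the symbolic location (building, floor, room)."""
    ranked = sorted(RADIO_MAP, key=lambda e: similarity(sample, e[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(locate([-41, -69, -80]))                            # near the "101" fingerprints
print(locate([-81, -45, -59], similarity=neg_manhattan))  # near the "201" fingerprints
```

Swapping the `similarity` argument is the hook for the paper's comparison of different similarity functions.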

During the signal detection process, massive multiple-input multiple-output (MIMO) systems must perform a large number of matrix inversion operations. To avoid exact matrix inversion in massive MIMO systems, several strategies have been presented, which can be loosely classified into similarity measures and evolutionary computation. In the existing Neumann series expansion and Newton methods, the initial value is taken as zero, which slows convergence, and the channel state information is not predicted properly. In this paper, a fish swarm optimization algorithm is proposed in which initial values are chosen optimally to ensure faster and more accurate signal detection with reduced complexity. The optimal values are chosen between 0 and 1, and the initial arbitrary values are chosen based on the number of input signals. In the proposed work, channel state information is predicted under realistic conditions using a machine learning algorithm. Simulation results demonstrate that the suggested receiver's bit error rate performance with Quadrature Amplitude Modulation (QAM) outperforms the existing Neumann series expansion and Newton methods.
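
For context, the Neumann-series baseline that the paper improves on can be sketched as below: for a diagonally dominant Gram matrix A (typical in massive MIMO when antennas greatly outnumber users), A⁻¹ ≈ Σₖ (I − D⁻¹A)ᵏ D⁻¹ with D = diag(A). The 2×2 matrix is an illustrative stand-in, not a MIMO channel from the paper.

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def neumann_inverse(A, terms=8):
    """Truncated Neumann series approximation of A^-1; converges when the
    spectral radius of R = I - D^-1 A is below 1 (diagonal dominance)."""
    n = len(A)
    Dinv = [[(1.0 / A[i][i]) if i == j else 0.0 for j in range(n)] for i in range(n)]
    DA = matmul(Dinv, A)
    R = [[(1.0 if i == j else 0.0) - DA[i][j] for j in range(n)] for i in range(n)]
    acc = [row[:] for row in Dinv]                                   # k = 0 term
    Rk = [[(1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(1, terms):
        Rk = matmul(Rk, R)
        term = matmul(Rk, Dinv)
        acc = [[acc[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return acc

A = [[4.0, 1.0], [1.0, 3.0]]         # diagonally dominant 2x2 example
Ainv = neumann_inverse(A)
check = matmul(A, Ainv)              # should be close to the identity
print(check)
```

The paper's point is that a better starting value (here effectively D⁻¹) shortens this series; its fish-swarm initialization is not reproduced.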

This report summarizes positive reinforcement conditioning procedures for assessing sensory function in transgenic mice. To illustrate these behavioral methods, auditory sensitivity was measured in mice lacking α9 acetylcholine receptor subunits (α9 knock-out mice). These receptors are known to play an important role in the efferent pathways that modify cochlear responses to sound stimuli. The strategies of parameter manipulation that led these subjects through their preliminary training stages to stable threshold performances are described in detail. Techniques for estimating and interpreting sensory thresholds are discussed from the perspective of signal detection analyses. This study found no significant differences between α9 knock-out mice and control subjects when hearing thresholds were measured under quiet conditions, as predicted by previous behavioral and electrophysiological evidence. © 2000 Elsevier Science B.V. All rights reserved.

The problem and the solution. Most executives are aware of the negative consequences associated with an organizational crisis and focus on communications and public relations as a reactive strategy. However, many neglect the other leadership responsibilities associated with organizational crises. This may result from a lack of formal training and of on-the-job experiences that prepare executives to lead through crises. Executives who enable their organizations to recover from a crisis exhibit a complex set of competencies in each of the five phases of a crisis: signal detection, preparation and prevention, damage control and containment, business recovery, and reflection and learning. In this article, through the use of a qualitative research design and the analysis of firms in crisis, we examine leadership competencies during each phase of a crisis. In addition, this article links the important role of human resource development to building organizational capabilities through crisis management activities.

For the purposes of this report, "systemic disease" will be interpreted as conditions that are spread throughout the body rather than localized strictly to the tissues of the oral cavity. Since it would take many volumes to review all such conditions, the intent of the authors is to review a few examples of conditions where initial panoramic radiographic findings suggested widespread disease significant enough to affect the quality of life and longevity of the patient.

Arterial spin labeling (ASL) perfusion fMRI data differ in important respects from the more familiar blood oxygen level-dependent (BOLD) fMRI data and require specific processing strategies. In this paper, we examined several factors that may influence ASL data analysis, including data storage bit resolution, motion correction, preprocessing for cerebral blood flow (CBF) calculations and nuisance covariate modeling. Continuous ASL data were collected at 3 T from 10 subjects while they performed a simple sensorimotor task with an epoch length of 48 s. These data were then analyzed using systematic variations of the factors listed above to identify the approach that yielded optimal signal detection for task activation. Improvements in statistical power were found for use of at least 10 bits for data storage at 3 T. No significant difference was found in motor cortex between simple subtraction and sinc subtraction, but the former presented a minor but significantly (P < .024) larger peak t value in visual cortex. While artifactual head motion patterns were observed in synthetic data and background-suppressed ASL data when label/control images were realigned to a common target, independent realignment of label and control images did not yield significant improvements in activation in the sensorimotor data. It was also found that CBF calculations should be performed prior to spatial normalization and that modeling of global fluctuations yielded a significantly increased peak t value in motor cortex. All of these ASL data processing approaches are easily implemented within an open-source toolbox, ASLtbx, which is advocated for most perfusion fMRI data sets.

This work addresses the problem of estimating the conduction velocity (CV) of single motor unit (MU) action potentials from surface EMG signals detected with linear electrode arrays during voluntary muscle contractions. In ideal conditions, that is, without shape or scale changes of the propagating signals and with additive white Gaussian noise, the maximum likelihood (ML) estimator is the optimum estimator of delay. Nevertheless, other methods with computational advantages can be proposed; among them, a modified version of the beamforming algorithm is presented and compared with the ML estimator. In real cases, the resolution of delay estimation in the time domain is limited because of the sampling process. Transformation to the frequency ...
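
The CV-from-delay idea, including the sampling-grid resolution limit the abstract mentions, can be sketched with a basic cross-correlation delay estimator. The waveform shape, electrode spacing, and true CV below are illustrative assumptions; the paper's ML and beamforming estimators are not reproduced.

```python
import math

FS = 10240.0          # sampling rate, Hz (assumed)
D = 0.01              # inter-electrode distance, m (assumed)
TRUE_CV = 4.0         # m/s -> true delay = D / TRUE_CV = 2.5 ms

def waveform(t):
    """A simple biphasic action-potential-like template (arbitrary units)."""
    return t * math.exp(-(t * 400) ** 2 / 2) if abs(t) < 0.05 else 0.0

delay = D / TRUE_CV
n = range(-256, 256)
ch1 = [waveform(k / FS) for k in n]            # proximal electrode
ch2 = [waveform(k / FS - delay) for k in n]    # distal electrode, delayed copy

def estimate_delay(x, y, max_shift=64):
    """Integer-sample delay maximizing the cross-correlation of x and y."""
    best_s, best_r = 0, -float("inf")
    for s in range(-max_shift, max_shift + 1):
        r = sum(x[i] * y[i + s] for i in range(max_shift, len(x) - max_shift))
        if r > best_r:
            best_r, best_s = r, s
    return best_s / FS

cv_est = D / estimate_delay(ch1, ch2)
print(cv_est)   # close to TRUE_CV; accuracy is limited by the sampling grid
```

Here the true delay (25.6 samples) is not an integer number of samples, so the estimate is slightly biased, which is exactly the time-domain resolution limit that motivates the frequency-domain methods in the paper.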

We develop a canonical, adaptive cascade-structure IIR notch filter to detect and track multiple time-varying frequencies in additive white Gaussian noise. The algorithm uses allpass frequency transformation filters and a truncated gradient. Simulations indicate that our algorithm is computationally simple, converges rapidly, and has good frequency resolution.
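
A single second-order IIR notch section, the building block such cascades are made of, can be sketched as follows. This is a fixed-frequency illustration: the adaptive allpass/truncated-gradient machinery of the paper is not reproduced, and the pole radius `r` is an assumed design choice.

```python
import math

def notch_filter(x, f0, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at f0 cancel the tone,
    poles at radius r just inside set the notch bandwidth. A cascade of these
    sections (one per frequency) handles multiple tones."""
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1.0, -2 * r * math.cos(w0), r * r]
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

fs = 1000.0
tone = [math.sin(2 * math.pi * 50 * n / fs) for n in range(2000)]  # 50 Hz tone
out = notch_filter(tone, f0=50, fs=fs)
# After the start-up transient, the 50 Hz component is strongly attenuated.
rms_in = math.sqrt(sum(v * v for v in tone[500:]) / len(tone[500:]))
rms_out = math.sqrt(sum(v * v for v in out[500:]) / len(out[500:]))
print(rms_in, rms_out)
```

In the adaptive version, the detection/tracking step amounts to adjusting `w0` online so the output power is minimized.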

A novel robust watermarking scheme based on the wavelet transform is presented in this paper. The watermark is embedded into significant triplets of wavelet coefficients in the three lowest detail subband images. The embedding algorithm needs to modify only a small number of the wavelet coefficients, which leads to very small distortion and guarantees the visual quality of the watermarked image. The performance of the proposed algorithm is tested against a series of common image processing attacks, such as JPEG compression, Gaussian noise addition, median filtering, etc. The corresponding experimental results show that the proposed watermarking scheme performs well under these attacks.
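
To make the "embed a bit in a coefficient triplet while touching few coefficients" idea concrete, here is a generic quantization-style embedding on one triplet. This is a stand-in sketch: the paper's exact triplet rule in the wavelet detail subbands is not reproduced, and `STEP` is an assumed robustness/distortion trade-off parameter.

```python
STEP = 8.0  # quantization step; larger = more robust, more distortion (assumed)

def embed_bit(triplet, bit):
    """Quantize the median-magnitude coefficient so that its quantizer-cell
    parity encodes the bit; the other two coefficients are untouched."""
    c = sorted(range(3), key=lambda i: abs(triplet[i]))[1]  # median-magnitude index
    q = round(triplet[c] / STEP)
    if (q % 2) != bit:
        q += 1
    out = list(triplet)
    out[c] = q * STEP
    return out

def extract_bit(triplet):
    c = sorted(range(3), key=lambda i: abs(triplet[i]))[1]
    return round(triplet[c] / STEP) % 2

coeffs = [12.3, -5.1, 33.0]            # illustrative wavelet coefficients
marked = embed_bit(coeffs, 1)
print(extract_bit(marked))             # recovers the embedded bit
noisy = [v + 1.5 for v in marked]      # mild attack, smaller than STEP/2
print(extract_bit(noisy))              # bit survives
```

Only one of the three coefficients changes, which mirrors the low-distortion property the abstract claims for its (different) embedding rule.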

Objective: We examined the effects of knowledge of results (KR) on vigilance accuracy and report the first use of positive and negative predictive power (PPP and NPP) to assess vigilance training effectiveness. Background: Training individuals to detect infrequent signals among a plethora of nonsignals is critical to success in many failure-intolerant monitoring technologies. KR has been widely used for vigilance training, but the effect of the schedule of KR presentation on accuracy has been neglected. Previous research on training for vigilance has used signal detection metrics or hits and false alarms. In this study diagnosticity measures were applied to augment traditional analytic methods. Method: We examined the effects of continuous KR and a partial-KR regimen versus a no-KR control on decision diagnosticity. Results: Signal detection theory (SDT) analysis indicated that KR induced conservatism in responding but did not enhance sensitivity. However, KR in both forms equally e...

The objective of the paper is to present a segmentation method using a maximum a posteriori probability (MAP) estimator, with application in decision making based on change detection and diagnosis. Some experimental results obtained by Monte Carlo simulations for signal segmentation using different signal models, including models with changes in the mean and in FIR, AR and ARX model parameters, which are the object of investigation in other papers, are presented to prove the effectiveness of the approach.
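
For the simplest of the listed models, a change in the mean, the MAP estimate with a flat prior over the change time reduces to maximizing the two-segment Gaussian likelihood, i.e. minimizing the residual sum of squares. The signal below is synthetic and illustrative; the FIR/AR/ARX cases from the paper are not sketched.

```python
import math, random

def detect_change_in_mean(x):
    """Return the change time maximizing the two-segment likelihood
    (equivalently, minimizing the two-segment residual sum of squares)."""
    n = len(x)
    best_k, best_rss = None, math.inf
    for k in range(2, n - 1):                   # candidate change times
        m1 = sum(x[:k]) / k
        m2 = sum(x[k:]) / (n - k)
        rss = (sum((v - m1) ** 2 for v in x[:k]) +
               sum((v - m2) ** 2 for v in x[k:]))
        if rss < best_rss:
            best_rss, best_k = rss, k
    return best_k

random.seed(0)
signal = ([random.gauss(0.0, 0.5) for _ in range(100)] +
          [random.gauss(3.0, 0.5) for _ in range(100)])   # jump at index 100
print(detect_change_in_mean(signal))                      # close to 100
```

Extending this to AR/ARX changes means replacing the per-segment mean fit with a per-segment model fit, with the same argmax-over-change-time structure.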

The potential of a continuous-wave microwave transceiver as a tool for detecting the breathing and heartbeat of people buried in snow has been experimentally evaluated. Breathing has been clearly detected through a 1.8-m-thick snow barrier as well as through the 1.2-m-thick roof of an igloo, dug out to simulate the experimental conditions of a human being trapped under an avalanche.

Anomaly detection in a wireless sensor network (WSN) is an important aspect of data analysis in order to identify data items that significantly differ from normal data. A characteristic of the data generated by a WSN is that the data distribution may alter over the lifetime of the network due to the changing nature of the phenomenon being observed. Anomaly detection techniques must be able to adapt to a nonstationary data distribution in order to perform optimally. In this survey, we provide a comprehensive overview of approaches to anomaly detection in a WSN and their operation in a nonstationary environment.
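
The survey's central requirement, adapting the notion of "normal" as the data distribution drifts, can be illustrated with a sliding-window z-score detector. This sketches the theme rather than any single surveyed technique; window size and threshold are assumed parameters.

```python
from collections import deque

class SlidingZScoreDetector:
    """Normal model (mean/variance) is estimated over a sliding window,
    so it tracks a slowly drifting data distribution."""

    def __init__(self, window=50, threshold=4.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x):
        """Return True if x is anomalous w.r.t. the recent window."""
        anomalous = False
        if len(self.buf) >= 10:                 # need a minimal history
            n = len(self.buf)
            mean = sum(self.buf) / n
            var = sum((v - mean) ** 2 for v in self.buf) / n
            std = max(var ** 0.5, 1e-9)
            anomalous = abs(x - mean) / std > self.threshold
        if not anomalous:                       # only learn from normal data
            self.buf.append(x)
        return anomalous

det = SlidingZScoreDetector()
# Slowly drifting sensor readings with one spurious spike at t = 150.
stream = [20 + 0.01 * t for t in range(300)]
stream[150] = 45.0
flags = [det.update(v) for v in stream]
print(sum(flags), flags[150])
```

Because the window follows the drift, the slow trend is never flagged while the isolated spike is, which is the nonstationary behavior the survey asks of a WSN anomaly detector.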

An approach is presented to the problem of detecting the number of sources impinging on a passive sensor array that is based on J. Rissanen's (1983) minimum description length (MDL) principle. The approach is applicable to any type of sources, including the case of sources which are fully correlated, referred to as the coherent signals case. Two slightly different detection criteria are derived, both requiring the estimation of the locations of the sources. The first is tailored to the detection problem per se, whereas the ...
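
For orientation, the classical eigenvalue-based MDL criterion for the number of sources (the non-coherent baseline this abstract departs from) can be sketched as below. The eigenvalues are invented to show two sources above a flat noise floor; the paper's location-estimating, coherent-capable criteria are not reproduced.

```python
import math

def mdl_num_sources(eigvals, N):
    """eigvals: sample-covariance eigenvalues in descending order; N: number
    of snapshots. Returns the k minimizing the MDL cost: a log-likelihood
    term (sphericity of the noise eigenvalues) plus a complexity penalty."""
    p = len(eigvals)
    best_k, best_cost = 0, math.inf
    for k in range(p):
        tail = eigvals[k:]                      # presumed noise eigenvalues
        m = len(tail)
        arith = sum(tail) / m
        geo = math.exp(sum(math.log(v) for v in tail) / m)
        loglik = -N * m * math.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * math.log(N)
        cost = loglik + penalty
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k

# Illustrative eigenvalues: two strong sources above a flat noise floor.
eigs = [9.0, 5.5, 1.05, 1.02, 0.98, 0.95]
print(mdl_num_sources(eigs, N=200))
```

When sources are coherent, the "signal" eigenvalues collapse and this simple criterion fails, which is precisely what motivates the location-based criteria of the paper.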

De-noising and extraction of weak signatures are crucial to fault prognostics, where fault features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods is compared on signals from mechanical defects. The comparison reveals that the wavelet filter is more suitable and reliable for detecting the weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented, and both support the proposed method.
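
The minimal-Shannon-entropy selection step rests on a simple fact: a well-matched filter concentrates energy into few coefficients, which lowers the entropy of the normalized energy distribution. The two coefficient vectors below are illustrative stand-ins for wavelet outputs at two candidate shape factors, not real filter outputs.

```python
import math

def shannon_entropy(coeffs):
    """Entropy of the normalized coefficient energies: low when energy is
    concentrated (impulse-like response), high when it is smeared out."""
    energies = [c * c for c in coeffs]
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p)

sparse = [0.0, 0.1, 9.0, 0.1, 0.0, 0.05]   # well-matched: energy concentrated
spread = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]    # poorly matched: energy smeared
print(shannon_entropy(sparse), shannon_entropy(spread))
```

In the two-step procedure, one would evaluate this entropy over a grid of Morlet shape factors and keep the minimizer before selecting the scale via the SVD periodicity test.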

High-level context image analysis involves many fields, such as face recognition, smile detection, automatic red eye removal, iris recognition, fingerprint verification, etc. Techniques in these fields need to be supported by more powerful and accurate routines. The aim of the proposed algorithm is to detect elliptical shapes in digital input images. It can be successfully applied in topics such as signal detection or red eye removal, where assessing the degree of elliptical shape can improve performance. The method has been designed to handle low resolution and partial occlusions. The algorithm is based on signature contour analysis and exploits some geometrical properties of elliptical points. The proposed method is structured in two parts: first, the best ellipse approximating the object shape is estimated; then, through the analysis and comparison of the reference ellipse signature and the object signature, the algorithm establishes whether the object is elliptical or not. The first part is based on symmetry properties of the points belonging to the ellipse, while the second part is based on the signature operator, which is a functional representation of a contour. A set of real images has been tested, and the results demonstrate the effectiveness of the algorithm in terms of accuracy and execution time.

The first atmospheric event built only from infrasound arrivals was reported in the Reviewed Event Bulletin (REB) of the International Data Centre (IDC) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO) in 2003. In the last decade, 42 infrasound stations from the International Monitoring System (IMS) have been installed and are transmitting data to the IDC. The growing amount of infrasound data and detections produced by the automatic system challenged the station and network processing at the IDC, which required the Organization to redesign the way infrasound data are processed. Each infrasound array is processed separately for signal detection using a progressive multi-channel correlation method (DFX-PMCC). For each detection, signal features - onset time, amplitude, frequency, duration, azimuth, phase velocity, F-statistics - are measured and used to identify a detection as infrasonic, seismic, or noise (including clutter). Infrasonic signals along with seismic and hydroacoustic signals are subsequently associated with Global Association software (GA) between stations to locate events. During detection and association phases, criteria are applied to eliminate clutter, identify signals of interest, and keep the number of automatic events containing infrasound detections to a manageable level for analyst review. The IDC has developed analysis and visualization tools specifically for infrasound review (e.g. Geotool-PMCC). The IDC has continued to build the Infrasound Reference Event Database (IRED) from observations on the IMS network. This database assists both the routine IDC infrasound analysis and analyst training as it reflects the global detection capability of the network, illustrates the spatial and temporal variability of the observed phenomena, and demonstrates the various origins of infragenic sources. Since 2007, the IDC has introduced new analyst procedures to review and add selected infrasound events to the REB. 
In early 2010, the IDC began routine automatic processing of infrasound data reviewed by interactive analysis; the detected and located events are now systematically included in the REB. Work is under way on the enhancement of the automatic system for the identification of valid signals and the optimization of the detection threshold of the network.

Limited battery power at wireless video sensor nodes, along with the transmission quality requirements for video data, makes quality-of-service (QoS) provisioning in a wireless video sensor network a very challenging task. In this paper, a dynamic power-management framework is proposed for a wireless video sensor node to improve the energy-saving performance while satisfying video transmission quality requirements. This framework is developed based on a Markov decision process that considers the video traffic arrival process in the sensor node, the sleep and wakeup processes in the camera and wireless transceiver electronics, the queue status, and the wireless channel condition. A dynamic programming approach is used to find the optimum policy to achieve the desired performance measures in an energy-limited sensor node.

Detection performance of an energy detector used for cooperative spectrum sensing in a cognitive radio network is investigated over channels with both multipath fading and shadowing. The analysis focuses on two fusion strategies: data fusion and decision fusion. Under data fusion, upper bounds for average detection probabilities are derived for four scenarios: 1) single cognitive relay; 2) multiple cognitive relays; 3) multiple cognitive relays with direct link; and 4) multi-hop cognitive relays. Under decision fusion, the exact detection and false alarm probabilities are derived under the generalized "k-out-of-n" fusion rule at the fusion center, with consideration of errors in the reporting channel due to fading. The results are extended to a multi-hop network as well. Our analysis is validated by numerical and simulation results. Although this research focuses on Rayleigh multipath fading and lognormal shadowing, the analytical framework can be extended to channels with Nakagami-m multipath fading and lognormal shadowing as well.
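
The k-out-of-n decision-fusion rule has a simple closed form for independent, identical local detectors, sketched below. Reporting-channel errors, which the paper models explicitly, are omitted, and the probabilities are illustrative.

```python
from math import comb

def k_out_of_n(p, n, k):
    """P(at least k of n independent detectors fire), each firing w.p. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, pd, pf = 5, 0.7, 0.1   # assumed per-relay detection / false-alarm probs
for k in range(1, n + 1):
    qd, qf = k_out_of_n(pd, n, k), k_out_of_n(pf, n, k)
    print(f"k={k}: P_detect={qd:.4f}  P_false_alarm={qf:.4f}")
# k=1 is the OR rule, k=n the AND rule; intermediate k trades the two off.
```

Modeling fading on the reporting links amounts to replacing `p` with an effective probability that accounts for decision bits being flipped in transit.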

Objective: This study was concerned with the development of quantitative measures of social development in autism. Method: Multiple regression equations predicting social, communicative, and daily living skills on the Vineland Adaptive Behavior Scales were derived from a large, normative sample and applied to groups of autistic and nonautistic, developmentally disordered children. Predictive models included either mental or chronological age and other relevant variables. Results: Social skills in the autistic group were more than two standard deviations below those predicted by their mental age; an index derived from the ratio of actual to predicted social skills correctly classified 94% of the autistic and 92% of the nonautistic, developmentally disordered cases. Conclusions: The findings are consistent with the idea that social disturbance is central in the definition of autism. The approach used in this study has potential advantages for providing more precise measures of social development in autism.

A subthreshold signal is transmitted through a channel and may be detected when some noise, with known structure and proportional to some level, is added to the data. There is an optimal noise level, called stochastic resonance, that corresponds to the highest Fisher information in the problem of estimating the signal. As noise we consider an ergodic diffusion process, and the asymptotics are considered as time goes to infinity.
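
The stochastic-resonance effect can be seen numerically in the simplest threshold channel: only the binary crossing Y = 1{s + noise > T} is observed. Using Gaussian noise of level sigma as a stand-in for the paper's ergodic diffusion, the Fisher information about the subthreshold signal s vanishes as sigma goes to 0 or infinity and peaks at an intermediate "resonant" level.

```python
import math

def fisher_info(s, T, sigma):
    """Fisher information about s carried by the Bernoulli observation
    Y = 1{s + N(0, sigma^2) > T}: I = (dp/ds)^2 / (p (1 - p))."""
    z = (T - s) / sigma
    p = 0.5 * math.erfc(z / math.sqrt(2))                            # P(Y = 1)
    dp = math.exp(-z * z / 2) / (math.sqrt(2 * math.pi) * sigma)     # dp/ds
    return dp * dp / (p * (1 - p))

s, T = 0.0, 1.0                                  # signal strictly below threshold
sigmas = [0.05 * i for i in range(1, 100)]
infos = [fisher_info(s, T, sg) for sg in sigmas]
best = sigmas[max(range(len(infos)), key=infos.__getitem__)]
print(f"most informative noise level: sigma = {best:.2f}")
```

With no noise the channel output is constant (zero information); with too much noise the output is pure coin-flipping; the interior maximum is the resonance.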

To gain insight into the proportion of consumers who respond to extraneous factors in the paired preference testing situation, rather than to the sensory properties of the products being assessed, "placebo" preference tests with putatively identical products are used. One use is as a tool to select consumers who ignore extraneous factors and are thus suitable for use in preference tests, where experimenters wish to have confidence that consumers are ignoring extraneous factors and responding only to the sensory attributes of the products being tested. Yet selecting such consumers often reduces the sample size to approximately 20-35% of its original, which is unacceptably low. The protocol used in this study employed unusual instructions and questions to reverse the hidden demand characteristics of the test, so that the sample size was only reduced to 80-90% of its original.

Various methods are used to locate cloud-to-ground lightning flashes. The accuracy of these methods depends on the number of sensor stations, the station topology, the mathematical model adopted for the analysis, and the type of measuring equipment. For a single station with a short-baseline configuration in a measuring system, the accuracy of the analysis rests mostly on the chosen mathematical equations (and assumptions), which can be solved in either linear or nonlinear mode. This paper concentrates on the simulation of a lightning flash and its detection based on the Time of Arrival (TOA) method utilizing four broadband antennas. The accuracy of the TOA method is also evaluated in this work. A Matlab-based simulation of the measuring system is implemented to construct a randomly located lightning flash with its corresponding electromagnetic radiation. Then, suitable mathematical models are used as measurement systems based on the 3-antenna TOA method to determine the azimuth and elevation angles, which in turn can be used for the calculation of the flash coordinates in terms of (X, Y, Z) with the help of the fourth antenna.
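
The 3-antenna angle-estimation step can be sketched under a plane-wave, short-baseline assumption: with antennas at the origin, (d, 0, 0) and (0, d, 0), the relative delays give the direction cosines directly. The geometry and baseline length are illustrative assumptions; the fourth antenna, which the paper uses to fix the full (X, Y, Z) position, is not modeled.

```python
import math

C = 3e8          # speed of light, m/s
D = 10.0         # baseline length, m (assumed)

def delays_from_direction(azimuth, elevation):
    """Forward model: relative arrival delays at the two baseline antennas
    for a plane wave from direction (azimuth, elevation)."""
    nx = math.cos(elevation) * math.cos(azimuth)
    ny = math.cos(elevation) * math.sin(azimuth)
    return -D * nx / C, -D * ny / C

def direction_from_delays(dtx, dty):
    """Inverse: recover azimuth and elevation from the measured delays."""
    nx, ny = -C * dtx / D, -C * dty / D
    nz = math.sqrt(max(1.0 - nx * nx - ny * ny, 0.0))
    return math.atan2(ny, nx), math.asin(nz)

az_true, el_true = math.radians(40.0), math.radians(25.0)
dtx, dty = delays_from_direction(az_true, el_true)
az_est, el_est = direction_from_delays(dtx, dty)
print(math.degrees(az_est), math.degrees(el_est))   # recovers 40 and 25
```

In practice the delays carry timing noise, so this closed-form inversion is the noise-free core of the accuracy evaluation the paper performs by simulation.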

Wireless sensor networks have been used to gather data and information in many diverse application settings. The capacity of such networks remains a fundamental obstacle toward the adaptation of sensor network systems for advanced applications that require higher data rates and throughput. In this paper, we explore potential benefits of integrating directional antennas into wireless sensor networks. While the usage of directional antennas has been investigated in the past for ad hoc networks, their usage in sensor networks brings both opportunities and challenges. In this paper, Sectored-Antenna Medium Access Control (SAMAC), an integrated cross-layer protocol that provides the communication mechanisms for sensor networks to fully utilize sectored antennas, is introduced. Simulation studies show that SAMAC delivers high energy efficiency and predictable delay performance, with graceful degradation in performance under increased load.

The introduction of multifocal stimulus recording has enhanced our ability to examine the human visual field with electrophysiologic techniques. We have adapted the multifocal pattern visual evoked potential (PVEP) to detect visual field loss. In glaucoma patients we sought to determine the extent to which the PVEP amplitudes correlate with perimetric thresholds. Multifocal pseudorandomly alternated pattern stimuli, which were cortically scaled in size, were presented with use of the VERIS-Scientific system. Bipolar occipital straddle electrode positions were used. The visual field up to 25° of eccentricity was investigated. Forty-three glaucoma patients with reproducible visual field defects were tested. The bipolar PVEP corresponded well with Humphrey visual field defects, showing loss of signal in the scotoma area. For Humphrey quadrant threshold totals and PVEP quadrant amplitudes, the correlation coefficient was strong (r = 0.49, P < 0.0001). The multifocal PVEP demonstrates good correspondence with the topography of the visual field. This technique represents the first practical application of the multifocal PVEP to objective detection of visual field defects in glaucoma. (Surv Ophthalmol 43 [Suppl 1]:S199-S209, 1999.)

The paper addresses the problem of estimating chirp signals embedded in Gaussian noise. The proposed method is based on a model of the signal phase as a polynomial. This approach offers the opportunity to represent these signals by an adequate state-space model and to apply standard Kalman filtering procedures in order to estimate the parameters of the chirp signals. Simulations were performed on linear chirp sinusoids with time-varying amplitude and are consistent with the theoretical approach. The paper presents the most important results.

Note onset detection and localization is useful in a number of analysis and indexing techniques for musical signals. The usual way to detect onsets is to look for "transient" regions in the signal, a notion that leads to many definitions: a sudden burst of energy, a change in the short-time spectrum of the signal or in the statistical properties, etc. The goal of this paper is to review, categorize, and compare some of the most commonly used techniques for onset detection, and to present possible enhancements. We discuss methods based on the use of explicitly predefined signal features: the signal's amplitude envelope, spectral magnitudes and phases, time-frequency representations; and methods based on probabilistic signal models: model-based change point detection, surprise signals, etc. Using a choice of test cases, we provide some guidelines for choosing the appropriate method for a given application.
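
The first family of detectors reviewed above, "a sudden burst of energy", can be sketched with a log-energy envelope and a thresholded first difference. Frame size, threshold, and the synthetic test signal are assumed; the spectral and probabilistic detectors from the paper are not sketched.

```python
import math

def detect_onsets(x, frame=64, threshold=2.0):
    """Flag frame boundaries where the log-energy envelope jumps sharply."""
    env = []
    for i in range(0, len(x) - frame, frame):
        e = sum(v * v for v in x[i:i + frame])
        env.append(math.log(e + 1e-12))
    onsets = []
    for k in range(1, len(env)):
        if env[k] - env[k - 1] > threshold:     # rising energy above threshold
            onsets.append(k * frame)            # sample index of the frame start
    return onsets

# Synthetic signal: near-silence, then a decaying 440 Hz tone from n = 1024.
fs = 8000.0
x = [0.001 * ((n * 2654435761) % 1000 / 1000 - 0.5) for n in range(1024)]
x += [math.exp(-n / 2000.0) * math.sin(2 * math.pi * 440 * n / fs)
      for n in range(3072)]
print(detect_onsets(x))    # one onset near sample 1024
```

The decay inside the note never produces a positive jump, so only the attack is flagged; soft or pitched-but-quiet onsets are where this energy-only detector fails and the spectral/phase methods earn their keep.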

This paper discusses the design of a common recognition system for wireless communication services of three different standards: Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), also known as Wideband Code Division Multiple Access (WCDMA), and Long Term Evolution (LTE), widely known as 2G, 3G, and 3.9G respectively. To recognize them with a common system, we propose performing detection within the synchronization channel, since the cell search procedures are similar in that they always detect synchronization signals. To minimize resources, we use autocorrelation to detect the GSM and LTE synchronization signals and a matched filter for the UMTS synchronization signal. However, this alone is not enough to distinguish the standards from one another. Therefore, we also propose recognizing the peak period of the autocorrelation output as the main mechanism for distinguishing the standards. We design this system using the Synplify DSP tool and simulate it in Simulink. The simulations are performed over an AWGN channel and show good performance even at 10 dB SNR. Moreover, in an FPGA implementation targeting the Virtex-4 XC4VLX200 device, the proposed design fits within 24,171 look-up tables (LUTs).
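The peak-period idea can be sketched generically: autocorrelate the received samples, ignore the region around zero lag, and read off the lag of the strongest peak, which reveals how often the synchronization burst repeats. The signals below are toy bursts with made-up periods, not real GSM/UMTS/LTE synchronization sequences:

```python
import numpy as np

def peak_period(x, min_lag=1):
    """Estimate the repetition period of a signal from the lag of the
    strongest non-zero-lag autocorrelation peak."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    ac[:min_lag] = 0.0                                  # suppress the zero-lag peak
    return int(np.argmax(ac))

rng = np.random.default_rng(1)
burst = rng.normal(size=32)          # one shared "sync burst" (illustrative)

def make_signal(period, n_frames=8):
    """Repeat the burst every `period` samples and add light noise."""
    sig = np.zeros(period * n_frames)
    for f in range(n_frames):
        sig[f * period: f * period + 32] = burst
    return sig + rng.normal(scale=0.1, size=period * n_frames)

sig_a = make_signal(100)   # hypothetical standard A: sync every 100 samples
sig_b = make_signal(160)   # hypothetical standard B: sync every 160 samples
```

Two standards using the same detector front end are then told apart purely by the recovered period, which is the essence of the proposed discrimination step.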

The recent interest in the surveillance of public, military, and commercial scenarios is increasing the need to develop and deploy intelligent and/or automated distributed visual surveillance systems. Many applications based on distributed resources use so-called software agent technology. In this paper, a multi-agent framework is applied to coordinate video-camera-based surveillance. The ability to coordinate agents improves the efficiency of both global image capture and task distribution. In our proposal, a software agent is embedded in each camera and controls the capture parameters. Coordination is then based on the exchange of high-level messages among agents. Each agent uses an internal symbolic model to interpret the current situation from the messages received from all other agents, thereby improving global coordination.
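A message-exchange scheme of this kind might look like the toy sketch below, where each agent broadcasts its observations and keeps a symbolic model of its peers' reports. The class, message format, and coordination rule are all invented for illustration and are not the paper's actual protocol:

```python
class CameraAgent:
    """Toy software agent embedded in one camera: it broadcasts its own
    observation and keeps a symbolic model of what every peer reported."""

    def __init__(self, name):
        self.name = name
        self.model = {}            # peer name -> last reported observation
        self.peers = []

    def broadcast(self, observation):
        """Send a high-level message (here, just a zone label) to all peers."""
        self.model[self.name] = observation
        for peer in self.peers:
            peer.receive(self.name, observation)

    def receive(self, sender, observation):
        self.model[sender] = observation

    def should_track(self, target_zone, own_zone):
        """Hypothetical coordination rule: track the target zone only if no
        peer has already reported covering it (or it is our own zone)."""
        covering = [a for a, obs in self.model.items()
                    if obs == target_zone and a != self.name]
        return not covering or own_zone == target_zone

# Wire up three agents and exchange messages.
agents = [CameraAgent(f"cam{i}") for i in range(3)]
for a in agents:
    a.peers = [b for b in agents if b is not a]
agents[0].broadcast("zone-A")
agents[1].broadcast("zone-B")
```

After the broadcasts, every agent's symbolic model agrees on who covers what, so redundant tracking can be avoided without any central controller.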

The automatic recognition of the modulation format of a detected signal, the intermediate step between signal detection and demodulation, is a major task of an intelligent receiver, with various civilian and military applications.

We suggest a new interpretation of Woodward's ambiguity function as the expected value of an operator. The operator represents the physics of the interaction of the waveform with the object. This interpretation provides a new way of understanding the return signal at the receiver and can reveal details of the underlying interactions within the return signal that are not usually brought out by standard signal processing techniques.
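For reference, the classical Woodward ambiguity surface itself can be computed directly; the sketch below evaluates |chi(tau, nu)| for a rectangular pulse on a discrete delay-Doppler grid. It illustrates the standard definition, not the operator-expectation formulation proposed in the paper, and the grid and pulse parameters are assumptions:

```python
import numpy as np

def ambiguity(s, fs, delays, dopplers):
    """Discrete Woodward ambiguity surface |chi(tau, nu)| of waveform s:
    chi(tau, nu) = sum_t s(t) * conj(s(t - tau)) * exp(j*2*pi*nu*t) / fs."""
    n = len(s)
    t = np.arange(n) / fs
    out = np.zeros((len(delays), len(dopplers)))
    for i, tau in enumerate(delays):
        shift = int(round(tau * fs))
        s_shift = np.roll(s, shift)
        if shift > 0:                 # zero the wrapped-around samples
            s_shift[:shift] = 0
        elif shift < 0:
            s_shift[shift:] = 0
        prod = s * np.conj(s_shift)
        for j, nu in enumerate(dopplers):
            out[i, j] = abs(np.sum(prod * np.exp(2j * np.pi * nu * t)) / fs)
    return out

# Rectangular pulse of duration T: |chi| peaks at zero delay, zero Doppler,
# where it equals the pulse energy T.
fs, T = 1000.0, 0.1
s = np.ones(int(fs * T), dtype=complex)
delays = np.linspace(-0.05, 0.05, 11)
dopplers = np.linspace(-50.0, 50.0, 11)
amb = ambiguity(s, fs, delays, dopplers)
```

Along zero Doppler the surface is the familiar triangle in delay; along zero delay it is a sinc in Doppler, with its first null at nu = 1/T.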

A receiver operating characteristic curve (ROC) is a statistical tool to assess the accuracy of a test. In this article, we propose to review the methods and applications of the ROC curve for various situations in clinical trials and drug development. The ROC curve is widely accepted as a method for selecting an optimal cutoff point and for comparing the accuracy of diagnostic tests. It is a plot of the true positive rate against the false positive rate for different cutoff points. The sensitivity and specificity of a test, however, depend on the level that has been chosen as a cutoff point for normal or abnormal. In bioavailability or bioequivalence (BA/BE) studies, the area under the curve (AUC) is often used to measure the extent of absorption or total amount of drug absorbed into the body. AUC is usually calculated using a trapezoidal rule based on blood or plasma concentrations obtained at various sampling time points. In pharmacovigilance, the ROC curve is used in signal detection.
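Both uses of the trapezoidal rule mentioned above are the same elementary computation. A minimal sketch of an ROC curve and its trapezoidal AUC, using invented diagnostic scores and labels, might look like:

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs obtained by sweeping the cutoff over all observed
    scores, from the most to the least stringent. labels: 1 = diseased."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for th in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc_trapezoid(points):
    """Area under the curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical test scores (higher = more likely diseased) and true labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
auc = auc_trapezoid(roc_points(scores, labels))
```

With no tied scores, the trapezoidal AUC equals the probability that a randomly chosen diseased case scores higher than a randomly chosen healthy one.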

The interferometer employs a 5.5-m and a 3-m-diameter antenna on an east-west baseline of 60 or 120 m, yielding fringe spacings at transit of 28" or 14", respectively. The broad intermediate-frequency bandpass of 100-350 MHz and the system noise temperature of 500 K provide high sensitivity for the measurement of continuum sources. The interferometer has been used for high-resolution studies of the planets and the Sun, and it is currently being adapted to study solar flare emissions at high spatial and time resolution.
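The quoted fringe spacings follow from the small-angle relation theta ≈ lambda/B. Working backwards from the numbers in the abstract (an inference, not a stated specification), the 28 arcsec spacing on the 60 m baseline implies an observing wavelength of roughly 8 mm:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265 arcsec per radian

def fringe_spacing_arcsec(wavelength_m, baseline_m):
    """Fringe spacing theta ~ lambda / B, converted to arcseconds."""
    return wavelength_m / baseline_m * ARCSEC_PER_RAD

# Invert the relation using the quoted 28" spacing on the 60 m baseline:
wavelength = 28.0 / ARCSEC_PER_RAD * 60.0    # implied lambda, ~0.008 m
```

Doubling the baseline to 120 m halves the spacing to 14", consistent with the abstract.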

In mobile radio systems the effects of fading can be counteracted by the use of diversity techniques, either at the base station or at the mobile. By reducing fading, diversity schemes contribute to lowering the bit error rate that degrades the performance of any modulation scheme. This paper attempts to determine the performance of a typical TETRA system, without

In this paper, a quasi-peak detector mode for a time-domain electromagnetic interference (TDEMI) measurement system is described. Measurements were performed in the 30-1000-MHz range. The digital signal processing of EMI measurements can emulate in real time the modes of conventional analog equipment, e.g., the peak, average, rms, and quasi-peak detectors. With the presented time-domain measurement system, the measurement time can be reduced by a factor of ten. A novel signal recording routine for TDEMI measurements and quasi-peak detection is described. Measurement results from the investigation of a drill machine, a monitor, and a laptop with the TDEMI measurement system are discussed. A comparison of the results obtained with the described TDEMI measurement system and measurements performed with a conventional EMI receiver shows an average deviation of less than 3 dB over the whole frequency range.
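The quasi-peak principle (asymmetric charge and discharge time constants, so that the reading depends on pulse repetition rate, not just peak amplitude) can be sketched as a one-pole detector on the rectified envelope. The time constants below are illustrative only, not the CISPR 16-1-1 values a compliant system would implement:

```python
import math

def quasi_peak(samples, fs, tau_charge=1e-3, tau_discharge=160e-3):
    """One-pole charge/discharge detector on the rectified envelope,
    mimicking the asymmetric time constants of an analog quasi-peak
    detector. Time constants are illustrative, not CISPR 16-1-1 values."""
    a_up = math.exp(-1.0 / (fs * tau_charge))       # fast charge
    a_down = math.exp(-1.0 / (fs * tau_discharge))  # slow discharge
    y = 0.0
    out = []
    for x in samples:
        x = abs(x)
        a = a_up if x > y else a_down
        y = a * y + (1.0 - a) * x
        out.append(y)
    return out

fs = 10000
continuous = [1.0] * fs                               # CW interferer
sparse = [1.0 if i % 1000 == 0 else 0.0 for i in range(fs)]  # 10 Hz pulses
qp_continuous = quasi_peak(continuous, fs)[-1]
qp_sparse = max(quasi_peak(sparse, fs))
```

A continuous signal charges the detector to nearly its peak value, while a sparse pulse train of the same amplitude yields a much lower quasi-peak reading, which is exactly the repetition-rate weighting the mode exists to provide.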

Novelty detection is the identification of new or unknown data or signals that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches to novelty detection. Statistical approaches are covered in the companion Part I paper.
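Reconstruction-error novelty scoring, the idea behind autoencoder-based detectors, can be sketched with a linear autoencoder fit by PCA standing in for the neural network. Everything below (the data, the bottleneck width of 2, the scoring rule) is an invented illustration of the principle, not a method from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data lives near a 2-D plane inside 5-D space.
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 5))

# A linear autoencoder fit by PCA: encode = project onto top 2 components.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]                      # bottleneck of width 2

def novelty_score(x):
    """Reconstruction error: small for data like the training set,
    large for inputs the model has never seen."""
    z = (x - mean) @ components.T        # encode
    recon = z @ components + mean        # decode
    return float(np.linalg.norm(x - recon))

normal_pt = rng.normal(size=2) @ basis            # on the training manifold
novel_pt = normal_pt + 5.0 * rng.normal(size=5)   # off-manifold perturbation
```

A nonlinear autoencoder would replace the projection with learned encoder/decoder networks, but the detection logic, thresholding the reconstruction error, is the same.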

Wireless sensor networks (WSNs) offer the potential to significantly improve the efficiency of existing transportation systems. Currently, collecting traffic data for traffic planning and management is achieved mostly through wired sensors. The equipment and maintenance cost and time-consuming installations of existing sensing systems prevent large-scale deployment of real-time traffic monitoring and control. Small wireless sensors with integrated sensing, computing, and wireless communication capabilities offer tremendous advantages in low cost and easy installation. In this paper, we first survey existing WSN technologies for intelligent transportation systems (ITSs), including sensor technologies, energy-efficient networking protocols, and applications of sensor networks for parking lot monitoring, traffic monitoring, and traffic control. Then, we present new methods on applying WSNs in traffic modeling and estimation and traffic control, and show their improved performance over existing solutions. Copyright © 2008 John Wiley & Sons, Ltd.

The chord progression of a song is an important high-level feature which enables indexing as well as deeper analysis of musical recordings. Different approaches to chord recognition have been suggested in the past. Although their performance has improved, significant error rates still seem to be unavoidable. One way to improve accuracy is to try to correct possible misclassifications. In this paper, we propose a post-processing method based on considerations of musical harmony, assuming that the pool of chords used in a song is limited and that strong oscillations between chords are uncommon. We show that exploiting (uncertain) knowledge about the chord distribution in a chord's neighbourhood can significantly improve chord detection accuracy, by evaluating our proposed post-processing method with three baseline classifiers on two early Beatles albums.
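The neighbourhood assumption can be illustrated with the simplest possible post-processor: a sliding-window majority vote over frame-level chord labels. This is a deliberately naive stand-in for the harmony-based method the paper proposes, with an invented label format and window size:

```python
from collections import Counter

def smooth_chords(labels, window=2):
    """Relabel each frame with the majority chord in its neighbourhood,
    encoding the assumptions that a song uses few chords and that rapid
    chord oscillation is unlikely."""
    out = []
    for i in range(len(labels)):
        lo = max(0, i - window)
        hi = min(len(labels), i + window + 1)
        neighbourhood = labels[lo:hi]
        out.append(Counter(neighbourhood).most_common(1)[0][0])
    return out

# A plausible misclassification: a stray 'A:min' frame inside a 'C:maj' run.
frames = ['C:maj', 'C:maj', 'A:min', 'C:maj', 'C:maj',
          'G:maj', 'G:maj', 'G:maj']
smoothed = smooth_chords(frames)
```

The isolated 'A:min' frame is voted away while the genuine change to 'G:maj' survives, which is the trade-off any such smoothing scheme has to balance.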