T. Lookabaugh - Academia.edu
Papers by T. Lookabaugh
IEEE Communications Magazine, 2016
IEEE MILCOM 2004. Military Communications Conference, 2004., 2004
IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989
Abstract: An iterative descent algorithm based on a Lagrangian formulation is introduced for designing vector quantizers having minimum distortion subject to an entropy constraint. These entropy-constrained vector quantizers (ECVQs) can be used in tandem with variable-rate noiseless coding systems to provide locally optimal variable-rate block source coding with respect to a fidelity criterion. Experiments on sampled speech and on synthetic sources with memory indicate that for waveform coding at low rates (about 1 bit/sample) under the squared-error distortion measure, about 1.6 dB improvement in signal-to-noise ratio can be expected over the best scalar and lattice quantizers when block entropy-coded with blocklength 4. Even greater gains are made over other forms of entropy-coded vector quantizers. For pattern recognition, it is shown that the ECVQ algorithm is a generalization of the k-means and related algorithms for estimating cluster means, in that the ECVQ algorithm estimates the prior cluster probabilities as well. Experiments on multivariate Gaussian distributions show that for clustering problems involving classes with widely different priors, ECVQ outperforms the k-means algorithm in both likelihood and probability of error.
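As a concrete illustration of the Lagrangian formulation described above, the sketch below runs a toy entropy-constrained Lloyd-style iteration, minimizing J = D + λR with the rate cost taken as the ideal codeword length -log2 p(i). The function name, parameters, and update schedule are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def ecvq(data, k=8, lam=0.5, iters=50, seed=0):
    """Toy entropy-constrained VQ: iterative descent on J = D + lam * R.

    data: (n, d) array. Returns (codebook, probabilities, assignments).
    A sketch of the Lagrangian idea, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)]
    probs = np.full(k, 1.0 / k)
    for _ in range(iters):
        # Codeword "lengths" from current priors (variable-rate cost term).
        lengths = -np.log2(np.maximum(probs, 1e-12))
        # Encoder step: pick the codeword minimizing distortion + lam * rate.
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = np.argmin(dists + lam * lengths[None, :], axis=1)
        # Decoder step: centroids; probability step: empirical priors.
        for i in range(k):
            members = data[assign == i]
            if len(members):
                codebook[i] = members.mean(axis=0)
        counts = np.bincount(assign, minlength=k)
        probs = counts / counts.sum()
    return codebook, probs, assign
```

Each of the three update steps can only decrease J, so the iteration converges to a local optimum, mirroring the descent argument in the abstract.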
Proceedings Frontiers in Education 35th Annual Conference, 2005
The Interdisciplinary Telecommunications Program at the University of Colorado has developed an internet-based remote laboratory environment for master's-level graduate students. Our suite of telecommunications experiments substantially extends prior work focused on networking equipment by (1) providing a systems focus, (2) enabling multiple reinforcing methods of accessing the educational material, (3) providing a configuration matrix to support real-time network reconfigurations (of real network elements), and (4) undertaking a careful assessment of the learning environment. The goal was to create an environment that reproduced (not just emulated) the lab experience. We recently completed the final phase of this project, focusing on assessment of this learning tool; such assessment is still rare in the literature on remote laboratories. We describe the project from three perspectives: students' exam results, students' lab reports, and students' satisfaction with the distance experience (based on interviews). We conclude that our remote laboratories provide similar learning outcomes to their in-class analogues, but that there are important differences in student perceptions of the experience, including perceived difficulty and pace.
Proceedings. 1991 IEEE International Symposium on Information Theory, 1991
... Finally, given the above encoder. ... Each of the above steps reduces the value of the Lagrangian and hence, when iterated until convergence, yields a locally optimal variable-to-variable length VQ. References: P. A. Chou, T. Lookabaugh, and R. M. Gray, "Entropy-constrained vector quantization." ...
Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004., 2004
Selective encryption encrypts only a portion of a compressed bitstream, relying on the characteristics of the compression format to render the remaining in-the-clear content unusable. While the technique has been proposed in a number of practical applications, it is also motivated by concepts from the origins of Shannon theory on the links between source coding and encryption, and it benefits from an evaluation of its performance on compression primitives, such as quantization and Huffman coding, that are used in constructing many compression algorithms.
Proceedings of ICASSP '94. IEEE International Conference on Acoustics, Speech and Signal Processing, 1994
We introduce a method for locally optimal variable-to-variable length source coding with distortion, and apply it to coding the linear predictive coefficients of speech. The method is similar to entropy-constrained vector quantization, but it uses a dynamic programming algorithm to encode. The method automatically discovers variable-length source structure, in this case the acoustic-phonetic structure of speech. Using this structure, it is possible to compress the linear predictive coefficients of speech to one-third the rate of entropy-constrained vector quantization, with no increase in spectral distortion. Auditory tests reveal that, using this method, the spectral component of speech can be coded naturally and intelligibly at rates as low as 50 bits per second.
International Conference on Acoustics, Speech, and Signal Processing, 1989
... This is true because, ignoring phase factors, the 2-D DCT basis functions are the product of two cosine functions, one with a vertical orientation and one with a horizontal orientation. But cos[2πfx·x]·cos[2πfv·y] = (1/2)(cos[2π(fx·x + fv·y)] + cos[2π(fx·x - fv·y)]) (12) ...
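The identity in (12) is the standard product-to-sum formula for cosines. A quick numerical check (a standalone sketch with arbitrarily chosen frequencies, not code from the paper):

```python
import numpy as np

# Check the product-to-sum identity used in (12):
# cos(2*pi*fx*x) * cos(2*pi*fv*y)
#   = 0.5 * (cos(2*pi*(fx*x + fv*y)) + cos(2*pi*(fx*x - fv*y)))
fx, fv = 3.0, 5.0  # illustrative horizontal/vertical frequencies
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
lhs = np.cos(2 * np.pi * fx * x) * np.cos(2 * np.pi * fv * y)
rhs = 0.5 * (np.cos(2 * np.pi * (fx * x + fv * y))
             + np.cos(2 * np.pi * (fx * x - fv * y)))
assert np.allclose(lhs, rhs)  # identity holds pointwise
```

This is why, as the text argues, a product of a vertically oriented and a horizontally oriented cosine decomposes into two diagonally oriented components.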
34th Annual Frontiers in Education, 2004. FIE 2004., 2004
Information technology, and in particular distance education technology, is becoming more prevalent across society and throughout higher education. But as information technology mediated education moves from trials toward educating non-trivial numbers of students, we can expect established universities to resist wholesale adoption, particularly when it threatens core perceptions of what students want and need and the culture and financial model of the institution. The resulting tension creates the potential for sudden and dramatic shifts rather than gradual adoption. Applications and practices that can signal the maturation of information technology mediated education include course importation and remote laboratory experiences. For institutions, successful development of information technology mediated education may require autonomous units. For individuals, the decision revolves around whether to participate and, if so, in what manner, particularly given academic culture and the potential for institutional resistance.
SMPTE Motion Imaging Journal, 1995
IEEE International Engineering Management Conference, 2002
Innovation by acquisition is an increasingly important strategy for innovation in established high-technology firms. After describing the background for innovation in small and large firms, a model for innovation by acquisition and the roles of established firms, entrepreneurial firms, and investors is developed, and evidence is presented suggesting increasing application of the model.
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges.
Though there have been no fundamental data compression breakthroughs in the past five years, outside or inside the United States, there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
IEEE Transactions on Medical Imaging, 1990
Three techniques for variable-rate vector quantizer design are applied to medical images. The first two are extensions of an algorithm for optimal pruning in tree-structured classification and regression due to Breiman et al. The code design algorithms find subtrees of a given tree-structured vector quantizer (TSVQ), each one optimal in that it has the lowest average distortion of all subtrees of the TSVQ with the same or lesser average rate. Since the resulting subtrees have variable depth, natural variable-rate coders result. The third technique is a joint optimization of a vector quantizer and a noiseless variable-rate code. This technique is relatively complex, but it has the potential to yield the highest performance of all three techniques.
IEEE Transactions on Information Theory, 1995
The performance of optimum vector quantizers subject to a conditional entropy constraint is studied in this paper. This new class of vector quantizers was originally suggested by Chou and Lookabaugh. A locally optimal design of this kind of vector quantizer can be accomplished through a generalization of the well-known entropy-constrained vector quantizer (ECVQ) algorithm. This generalization of the ECVQ algorithm to a conditional entropy constraint is called CECVQ, i.e., conditional ECVQ. Furthermore, we have extended high-rate quantization theory to this new class of quantizers to obtain a new high-rate performance bound, which is a generalization of the works of Gersho and of Yamada, Tazaki, and Gray. The new performance bound is compared with and shown to be consistent with bounds derived through conditional rate-distortion theory. Recently, a new algorithm for designing entropy-constrained vector quantizers was introduced by Garrido, Pearlman, and Finamore, named the entropy-constrained pairwise nearest neighbor (ECPNN) algorithm. The algorithm is basically an entropy-constrained version of the pairwise nearest neighbor (PNN) algorithm. This material is based upon work supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil) under Grant 202.178-90, and by the National Science Foundation under Grants INT-8802240 and NCR-9004758. The government has certain rights in this material.
IEEE Transactions on Information Theory, 1989
An algorithm recently introduced by Breiman, Friedman, Olshen, and Stone in the context of classification and regression trees is reinterpreted and extended to cover a variety of applications in source coding and modeling in which trees are involved. These include variable-rate and minimum-entropy tree-structured vector quantization, minimum expected cost decision trees, variable-order Markov modeling, optimum bit allocation, and computer graphics and image processing using quadtrees. A concentration on the first of these and a detailed analysis of variable-rate tree-structured vector quantization are provided. We find that variable-rate tree-structured vector quantization outperforms not only the fixed-rate variety but also full-search vector quantization. Furthermore, the "successive approximation" character of variable-rate tree-structured vector quantization permits it to degrade gracefully if the rate is reduced at the encoder. This has applications to the problem of buffer overflow.
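The pruning idea behind this family of algorithms can be sketched as repeatedly collapsing the internal node whose removal trades distortion for rate at the smallest slope λ = ΔD/ΔR. The tree representation and node fields below are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Distortion and rate if this node itself is used as a leaf codeword.
    d_leaf: float
    r_leaf: float
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def subtree_cost(node):
    """Total (distortion, rate) of the fully expanded subtree below node."""
    if node.left is None:
        return node.d_leaf, node.r_leaf
    dl, rl = subtree_cost(node.left)
    dr, rr = subtree_cost(node.right)
    return dl + dr, rl + rr

def best_prune(node, best=None):
    """Find the internal node whose collapse has the smallest slope
    lambda = (increase in distortion) / (decrease in rate)."""
    if node.left is None:  # leaf: nothing to prune here
        return best
    d_sub, r_sub = subtree_cost(node)
    if r_sub > node.r_leaf:
        lam = (node.d_leaf - d_sub) / (r_sub - node.r_leaf)
        if best is None or lam < best[0]:
            best = (lam, node)
    best = best_prune(node.left, best)
    return best_prune(node.right, best)

def prune_once(root):
    """One pruning step: collapse the minimum-slope subtree to a leaf."""
    found = best_prune(root)
    if found is not None:
        _, node = found
        node.left = node.right = None
    return root
```

Iterating prune_once traces out a sequence of nested subtrees, each one on the lower convex hull of achievable (rate, distortion) pairs, which is what makes the variable-rate coders and the graceful-degradation property described above possible.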
IEEE Transactions on Communications, 1993
The performance of a vector quantizer can be improved by using a variable-rate code. We apply three variable-rate vector quantization systems to speech, image, and video sources and compare them to standard vector quantization and noiseless variable-rate coding approaches. The systems range from a simple and flexible tree-based vector quantizer to a high-performance, but complex, jointly optimized vector quantizer and noiseless code. The systems provide significant performance improvements for subband speech coding, predictive image coding, and motion-compensated video, but provide only marginal improvements for vector quantization of linear predictive coefficients in speech and direct vector quantization of images. We suggest criteria for determining when variable-rate vector quantization may provide significant performance improvement over standard approaches.
IEEE Communications Magazine, 2000
Selective encryption is a technique to save computational complexity or enable interesting new system functionality by encrypting only a portion of a compressed bitstream while still achieving adequate security. Although suggested in a number of specific cases, selective encryption could be much more widely used in consumer electronics applications, ranging from mobile multimedia terminals to digital cameras, were it subjected to a more thorough security analysis. We describe selective encryption and develop a simple scalar quantizer example to demonstrate the power of the concept, list a number of potential consumer electronics applications, and then describe an appropriate method for developing and analyzing selective encryption for particular compression algorithms. We summarize results from application of this method to the MPEG-2 video compression algorithm.
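A minimal sketch of the selective-encryption idea, under the simplifying assumption that the "important" portion is just the leading fraction of the bitstream (real schemes choose format-specific fields, e.g. particular MPEG-2 syntax elements); the toy XOR keystream below stands in for a real cipher and is not secure:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from iterated SHA-256 -- for illustration only,
    NOT a secure cipher."""
    out = bytearray()
    block = key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def selective_encrypt(bitstream: bytes, key: bytes, fraction: float = 0.1) -> bytes:
    """Encrypt only the leading `fraction` of a compressed bitstream, on the
    premise that the remaining in-the-clear data is unusable without it.
    The fixed split point is an illustrative assumption."""
    cut = int(len(bitstream) * fraction)
    ks = keystream(key, cut)
    enc = bytes(b ^ k for b, k in zip(bitstream[:cut], ks))
    return enc + bitstream[cut:]
```

Because encryption is a plain XOR here, applying selective_encrypt a second time with the same key recovers the original stream; the computational saving comes from ciphering only `fraction` of the bytes.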
IEEE Communications Magazine, 2000
Low-cost networks of wireless sensors can be distributed to provide information about an environment. Even a network of sensors providing scalar measurements (for instance, of temperature) presents both formidable challenges in integrating and interpreting measurements over space and time, and important opportunities in extended observations. Cameras are particularly powerful multidimensional sensors for dispersing in unknown environments for surveillance and tracking of activity. Understanding the spatial patterns of such activity requires the camera network to self-organize in terms of understanding the relative positions of nodes. Cameras also pose problems for resource-limited motes because of the high volumes of image data for local processing or transmission. We describe a self-righting, or weeble, node architecture for camera networks based on integrating a low-cost camera into the Mica2 sensor node platform. The node uses a wide-field-of-view lens (commonly called a fisheye lens), which allows us to capture a very broad region around the node, providing greater view overlap between nodes and generally a larger frame for identifying and tracking activity.