A survey of video processing techniques for traffic applications

Abstract

Video sensors are becoming particularly important in traffic applications, mainly due to their fast response, easy installation, operation, and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has produced a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications are traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, among their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and relate these tools to complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of a static and a mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. The purpose of the paper is thus threefold: first, to classify the image-processing methods used in traffic applications; second, to summarize the advantages and disadvantages of these algorithms; third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research.
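The three categorization axes named in the abstract (input-data organization, processing domain, and camera setup) can be made concrete with a small sketch. The following Python fragment is purely illustrative; the enum and class names are ours, not notation from the survey.

```python
# Illustrative only: one way to encode the survey's categorization axes
# (input-data organization, processing domain, camera setup) as plain enums.
# The class and attribute names are our own, not from the paper.
from dataclasses import dataclass
from enum import Enum, auto


class InputOrganization(Enum):
    FEATURE_DRIVEN = auto()   # edges, corners, lane markings
    AREA_DRIVEN = auto()      # regions, blobs, background models
    MODEL_BASED = auto()      # explicit road or vehicle models


class ProcessingDomain(Enum):
    SPATIAL = auto()          # single-frame analysis
    TEMPORAL = auto()         # video / frame-sequence analysis


class CameraSetup(Enum):
    STATIC = auto()           # roadside traffic monitoring
    MOBILE = auto()           # in-vehicle guidance


@dataclass
class MethodCategory:
    """Position of a processing method within the survey's taxonomy."""
    organization: InputOrganization
    domain: ProcessingDomain
    camera: CameraSetup


# Example: a background-differencing detector used with a roadside camera
# would fall under area-driven, temporal processing with a static camera.
example = MethodCategory(InputOrganization.AREA_DRIVEN,
                         ProcessingDomain.TEMPORAL,
                         CameraSetup.STATIC)
```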

Figures (5)

Progressive use of information in different levels of system complexity and functionality

Representative systems and their functionality

Video sensors have demonstrated the ability to obtain traffic measurements more efficiently than other conventional sensors. In cases emulating conventional sensors, video sensors have been shown to offer the following advantages: competitive cost, non-intrusive sensing, lower maintenance and operation costs, lower installation cost, and installation/operation during construction. However, because video sensors have the potential of wide-area viewing, they are capable of more than merely emulating conventional sensors. Some additional measurements needed for adaptive traffic management are: approach queue length, approach flow profile, ramp queue length, vehicle deceleration, and automatic measurement of turning movements. Vision also provides powerful means for collecting information about the environment and its actual state during autonomous locomotion. A vision-based guidance system applied to outdoor navigation usually involves two main tasks of perception, namely finding the road geometry and detecting road obstacles. First, knowledge of the road geometry allows a vehicle to follow its route. Subsequently, the detection of road obstacles is necessary to avoid other vehicles present on the road. The complexity of the navigation problem is quite high, since the actual task is to reconstruct an inherent 3D representation of the spatial environment from the observed 2D images, which can lead to different processing techniques on various levels of information abstraction.
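To make the two perception tasks concrete, the sketch below arranges them as a simple per-frame pipeline. It is a minimal, hypothetical illustration: the function names, the edge-based lane fit, and the background-differencing stand-in for obstacle detection are our own simplifications (background differencing in particular presumes a static camera), not the method of any specific system discussed in the survey.

```python
# Hypothetical sketch of the two perception tasks described above
# (road-geometry finding and obstacle detection) arranged as a per-frame
# pipeline. Placeholder logic only; assumptions are noted in the docstrings.
from typing import List, Tuple

import numpy as np

Frame = np.ndarray                       # grayscale image, shape (H, W)
LaneModel = np.ndarray                   # e.g. polynomial coefficients of a lane boundary
BoundingBox = Tuple[int, int, int, int]  # (x, y, width, height)


def find_lane_geometry(frame: Frame) -> List[LaneModel]:
    """Placeholder for automatic lane finding: crude horizontal-gradient
    edges followed by fitting a low-order polynomial to candidate points."""
    edges = np.abs(np.diff(frame.astype(np.float32), axis=1)) > 40.0
    ys, xs = np.nonzero(edges)
    if len(xs) < 2:
        return []
    # Fit a single straight boundary as a stand-in for a real lane model.
    return [np.polyfit(ys, xs, deg=1)]


def detect_obstacles(frame: Frame, background: Frame) -> List[BoundingBox]:
    """Placeholder for obstacle detection via background differencing
    (a static-camera technique; moving-camera systems would rely on
    optical flow, stereo, or model-based matching instead)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > 30
    ys, xs = np.nonzero(diff)
    if len(xs) == 0:
        return []
    return [(int(xs.min()), int(ys.min()),
             int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))]


def process_frame(frame: Frame, background: Frame) -> dict:
    """Run both perception tasks on one frame and return their results."""
    return {
        "lanes": find_lane_geometry(frame),
        "obstacles": detect_obstacles(frame, background),
    }
```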

