IMAGE SENSORS and SIGNAL PROCESSING for DIGITAL STILL CAMERAS

Comparison of CCD, CMOS and intensified cameras

Experiments in Fluids, 2007

The properties of digital cameras are often important for obtaining accurate results with image-based measurement techniques. Unfortunately, a detailed comparison of sensor specifications and performance is not readily available, as this information is normally not provided by the manufacturers and no generally accepted comparison standard exists. Therefore, a detailed quantitative comparison was performed to evaluate and assess the characteristics of state-of-the-art CCD, CMOS and intensified CMOS sensors. These results may be of assistance when selecting the appropriate sensor for a desired application.

Digital Photography - Image Sensors

A digital file is data, no different from any other computer file. It can be saved to any computer storage medium, and it can be copied and recopied without any loss of quality. Copies can be kept in more than one picture library, or in other locations, all providing high-quality images. Image files can be opened on a computer using imaging software such as Adobe Photoshop, which allows dust spots and other minor blemishes to be removed quickly and easily. It is also possible to make more significant changes to an image, but this may not be acceptable in areas such as news, sport and wildlife. Almost all newspapers, magazines, books, brochures and other printed materials are now created on computers and use digital image files for their photographs. To meet this demand, most picture libraries now accept only digital images. Although it is possible to scan film transparencies to create digital files, it is more convenient to shoot digitally in the first place.

The term analog refers to simple or SLR cameras that use film as the recording medium for photographs; prints are then produced by a chemical process. Film cameras have the advantage of being relatively inexpensive compared with digital cameras of the same quality, but buying and developing rolls of film can become expensive. Analog cameras use 35 mm film, so light covers a larger area than in most digital cameras. The main disadvantage of an analog camera for beginners is that they need to note down the camera settings before taking pictures, and the different effects introduced during developing also need to be tracked. The most important advantage of an analog camera, and the reason many photographers choose it, is picture quality: the image quality achievable with a film camera is very high and the images turn out extremely sharp. The reason is the chemical reaction that takes place when light passing through the shutter falls on the film, producing an exact, crisp, inverted (i.e. negative) representation of the object. This is not possible with a digital camera.

Digital photography is otherwise much like film photography; the technique and style used are the same, except for one aspect. The distinction between digital and analog photography is that traditional film is replaced by a charge-coupled device (CCD), a grid containing millions of photosensitive elements. When a picture is taken, light falls on the photosensitive elements, each of which registers a specific intensity of light as an electrical charge. The electrical charges are then passed to an analog-to-digital converter that transforms them into digital data. To determine the actual color value of any one pixel, the camera's software makes a calculated estimate based on the values registered by neighboring photosensitive elements. This interpolation accounts for some reduction in the image's level of detail and ultimately affects image quality. An image sensor, or imaging sensor, is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, the small bursts of current that convey the information.
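
The color interpolation step described above is commonly called demosaicing. Below is a minimal sketch of the idea in Python, assuming a standard RGGB Bayer layout and simple neighborhood averaging; real camera firmware uses considerably more sophisticated algorithms, so this is illustrative only.

```python
import numpy as np

def bilinear_demosaic(raw, pattern="RGGB"):
    """Estimate full RGB values from a single-channel Bayer mosaic.

    `raw` is a 2-D array of sensor readings (one value per photosite).
    Each missing color at a pixel is filled with the mean of the
    neighboring photosites that did measure that color.
    """
    assert pattern == "RGGB", "sketch only handles the RGGB layout"
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Masks marking which photosites measured R, G and B.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        # Sum the known samples and their counts over a 3x3 neighborhood
        # (with wrap-around at the borders), then divide to get the local
        # average of that color channel.
        kernel_sum = np.zeros_like(known)
        kernel_cnt = np.zeros_like(count)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                kernel_sum += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                kernel_cnt += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        rgb[..., ch] = np.where(mask, raw, kernel_sum / np.maximum(kernel_cnt, 1))
    return rgb

# Example: a 4x4 mosaic of raw ADC counts.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_demosaic(raw).shape)  # (4, 4, 3)
```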

Standard CMOS active pixel image sensors for multimedia applications

arvlsi, 1995

The task of image acquisition is completely dominated by CCD-based sensors fabricated on specialized process lines. These devices provide an essentially passive means of detecting photons and moving image data across chip. We argue that line widths in standard ...

Comparison of Global Shutter Pixels for CMOS Image Sensors

In this paper we present preliminary results from 4T-technology-based CMOS image sensors with a global shutter, i.e., all pixels in the active array integrate light simultaneously. The global shutter operation mode is particularly important for high-speed video applications, where the more commonly implemented rolling line shutter creates motion blur. Our chips were fabricated in a 0.18 micron 4T CIS technology with a pinned photodiode and transfer gate. Unlike conventional 3T-type CMOS image sensors with global shutter pixels, in these 4T-based global shutter pixels the charge is transferred, not just sampled, onto the sense node. This translates into very high sensitivity and low readout noise at low power. For an imager with 7 transistors per pixel operated in global shutter, "Integrate While Read" mode, we measure an input-referred noise of 10 electrons. The extinction ratio at full-well signal charge is ~97.7%.
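
For context, the extinction ratio quoted above describes how well the in-pixel storage node is protected from light that continues to arrive after the global exposure ends. A small hedged sketch of the calculation, assuming it is defined as the fraction of the full-well signal not contaminated by parasitic charge (definitions and measurement conditions vary between vendors, and the numbers below are illustrative):

```python
def extinction_ratio(full_well_signal_e, parasitic_signal_e):
    """Fraction of the stored signal that is shutter-protected.

    full_well_signal_e -- charge collected during the intended exposure (e-)
    parasitic_signal_e -- charge leaking into the storage node afterwards (e-)
    """
    return 1.0 - parasitic_signal_e / full_well_signal_e

# Illustrative numbers only: a ~97.7% ratio corresponds to roughly
# 2.3% of the full-well charge leaking past the closed shutter.
print(f"{extinction_ratio(10_000, 230):.1%}")  # 97.7%
```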

The comparison of CCD and CMOS image sensors

2008

The architectures of CCD and CMOS image sensors are introduced briefly, followed by a detailed comparison of their performance. Finally, the future development trends of CCD and CMOS image sensors are surveyed. It is pointed out that CCD and CMOS image sensors will remain both complementary and in competition, and will together sustain a flourishing image sensor market for the foreseeable future.

An Ultra-High Resolution Digital Camera

The Journal of Photographic Science, 1994

We are developing a camera capable of capturing images of fine art paintings of up to one metre square at a resolution of 20 pixels per mm, based on the Kontron ProgRes 3012 camera, which uses piezo micro-adjustment of the CCD array to produce 3000 by 2320 pixels. The new camera will incorporate an X-Y translation stage that moves the array around the image plane to acquire blocks that can be mosaicked together to form the final large digital image.
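
A rough, back-of-the-envelope check of what that scanning requirement implies (block overlap needed for mosaicking is ignored here):

```python
import math

painting_mm = 1000          # one metre square target
target_px_per_mm = 20       # required sampling density
block_px = (3000, 2320)     # pixels delivered per CCD block

needed_px = painting_mm * target_px_per_mm          # 20,000 px per side
tiles_x = math.ceil(needed_px / block_px[0])        # 7 blocks across
tiles_y = math.ceil(needed_px / block_px[1])        # 9 blocks down
print(tiles_x * tiles_y, "blocks without overlap")  # 63
```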

A 640×512 CMOS image sensor with ultrawide dynamic range floating-point pixel-level ADC

IEEE Journal of Solid-state Circuits, 1999

Analysis results demonstrate that multiple sampling can achieve consistently higher signal-to-noise ratio at equal or higher dynamic range than using other image sensor dynamic range enhancement schemes such as well capacity adjusting. Implementing multiple sampling, however, requires much higher readout speeds than can be achieved using a typical CMOS active pixel sensor (APS). This paper demonstrates, using a 640 × 512 CMOS image sensor with an 8-b bit-serial Nyquist-rate analog-to-digital converter (ADC) per 4 pixels, that pixel-level ADC enables a highly flexible and efficient implementation of multiple sampling to enhance dynamic range. Since pixel values are available to the ADCs at all times, the number and timing of the samples as well as the number of bits obtained from each sample can be freely selected and read out at fast SRAM speeds. By sampling at exponentially increasing exposure times, pixel values with binary floating-point resolution can be obtained. The 640 × 512 sensor is implemented in 0.35-µm CMOS technology and achieves a 10.5 × 10.5 µm pixel size at 29% fill factor. Characterization techniques and measured quantum efficiency, sensitivity, ADC transfer curve, and fixed-pattern noise are presented. A scene with measured dynamic range exceeding 10 000 : 1 is sampled nine times to obtain an image with dynamic range of 65 536 : 1. Limits on achievable dynamic range using multiple sampling are presented.
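
A minimal sketch of the multiple-sampling idea, assuming an idealized linear pixel that saturates at a fixed full-well value and can be read non-destructively at exponentially increasing exposure times; the last unsaturated reading and its exposure index together form a mantissa/exponent (binary floating-point) representation of the irradiance. This illustrates the principle only, not the chip's readout circuitry:

```python
import numpy as np

FULL_WELL = 255            # saturation level of the 8-bit per-pixel ADC
EXPOSURES = [1, 2, 4, 8, 16, 32, 64, 128, 256]   # exponentially increasing times

def multiple_sampling(irradiance):
    """Return (mantissa, exponent) per pixel from repeated non-destructive reads."""
    mantissa = np.zeros_like(irradiance)
    exponent = np.zeros_like(irradiance, dtype=int)
    for k, t in enumerate(EXPOSURES):
        sample = np.minimum(irradiance * t, FULL_WELL)   # ADC reading at time t
        unsaturated = sample < FULL_WELL
        mantissa[unsaturated] = sample[unsaturated]       # keep last valid reading
        exponent[unsaturated] = k
    return mantissa, exponent

def reconstruct(mantissa, exponent):
    """Radiance estimate: scale each mantissa back by its exposure time."""
    times = np.asarray(EXPOSURES, dtype=float)[exponent]
    return mantissa / times

scene = np.array([0.05, 1.0, 40.0, 250.0])   # irradiance spanning ~5000:1
m, e = multiple_sampling(scene)
print(reconstruct(m, e))                      # recovers the scene up to quantization
```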

Evaluation and Characterization of a Logarithmic Image Sensor

2013

In this thesis, commissioned by Axis Communications AB, a CMOS-type image sensor with a logarithmic response and wide dynamic range capabilities was evaluated and characterized with respect to signal response and noise characteristics. The evaluation was performed in the context of the need for wide dynamic range imaging in video surveillance. Noise characteristics were thoroughly evaluated with measurements performed using an integrating sphere, and some aspects of the temperature dependence of the device were also investigated. The dynamic range was measured with a laser diffraction setup, and the capability of the sensor to accurately capture motion was also investigated.
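
As background on what a "logarithmic response" means in practice, here is a hedged sketch of the usual first-order model, in which the pixel output grows with the logarithm of illumination so that each decade of light consumes a fixed output swing. The parameter values are illustrative and are not taken from the evaluated sensor:

```python
import numpy as np

def log_pixel_response(illuminance_lux, v_offset=0.4, v_per_decade=0.05):
    """First-order logarithmic pixel model: V = offset + slope * log10(E)."""
    return v_offset + v_per_decade * np.log10(illuminance_lux)

# Seven decades of illumination fit into ~0.35 V of output swing,
# which is how logarithmic pixels reach very wide dynamic range.
lux = np.logspace(-1, 6, 8)
print(np.round(log_pixel_response(lux), 3))
```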

Low-power digital image sensor for still-picture image acquisition

Proceedings of …, 2001

This article presents the design and realization of a CMOS digital image sensor optimized for button-battery-powered applications. First, a pixel with local analog memory was designed, allowing efficient global shutter operation of the sensor. The exposure time becomes independent of the ...

High-resolution DSP-based CCD Camera System

This paper presents a programmable, high-performance full-frame CCD camera system based on the Texas Instruments (TI) TMS320C6416T Digital Signal Processor (DSP). The Kodak KAF-16803 image sensor is used in this system, named CMR6416T. One major advantage of the system is that, after an image is captured from the CCD sensor at high speed, the raw image processing is performed in the TMS320C6416T DSP instead of a personal computer (PC). The camera can be used for astronomy, digital radiography, life sciences and so on. This paper outlines the design of the system; its programmable nature allows for applications in various situations.

Brandaris 128: A digital 25 million frames per second camera with 128 highly sensitive frames

Review of Scientific Instruments, 2003

A high-speed camera that combines a customized rotating mirror camera frame with charge-coupled device (CCD) image detectors and is practically fully operated by computer control was constructed. High-sensitivity CCDs are used so that image intensifiers, which would degrade image quality, are not necessary. Customized electronics and instruments were used to improve flexibility and to precisely control the image acquisition process. A full sequence of 128 consecutive image frames with 500 × 292 pixels each can be acquired at a maximum frame rate of 25 million frames/s. Full sequences can be repeated every 20 ms, and six full sequences can be stored in the in-camera memory buffer. A high-speed communication link to a computer allows each full sequence of about 20 Mbytes to be stored on a hard disk in less than 1 s. The sensitivity of the camera has an equivalent International Standards Organization (ISO) number of 2500. Resolution was measured to be 36 lp/mm on the detector plane of the camera, while under a microscope a bar pattern with 400 nm line-pair spacing could be resolved. Some high-speed events recorded with this camera, dubbed Brandaris 128, are presented.
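
A quick sanity check of the data volume and link rate implied by those figures (a rough estimate assuming roughly one byte stored per pixel, which is an assumption; the camera's actual digitization depth is not stated here):

```python
frames = 128
width, height = 500, 292
bytes_per_pixel = 1                       # assumption for the estimate

sequence_bytes = frames * width * height * bytes_per_pixel
print(f"{sequence_bytes / 1e6:.1f} MB per full sequence")   # ~18.7 MB, i.e. "about 20 Mbytes"

# Transferring that in under 1 s needs a sustained link of ~150 Mbit/s or better.
print(f"{sequence_bytes * 8 / 1e6:.0f} Mbit to move in < 1 s")
```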

A sub pixel resolution method

One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new sub-pixel resolution algorithm to enhance the resolution of images. The method is based on the analysis of multiple images recorded in rapid succession during a fine relative motion between the image and the pixel array of the CCD. It is shown that, applied to a noise-free sample image, the method enhances the resolution with an error on the order of 10⁻¹⁴.
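
A minimal sketch of the shift-and-add idea behind such multi-image sub-pixel methods, assuming the relative shifts between the low-resolution frames are exactly known fractions of a pixel; the paper's algorithm and its error analysis are more involved:

```python
import numpy as np

def shift_and_add(lowres_frames, shifts, factor):
    """Interleave several shifted low-resolution frames onto a finer grid.

    lowres_frames -- list of (h, w) arrays sampled from the same scene
    shifts        -- per-frame (dy, dx) shifts in units of 1/factor pixel
    factor        -- super-resolution factor (e.g. 2 for a 2x2 shift pattern)
    """
    h, w = lowres_frames[0].shape
    hires = np.zeros((h * factor, w * factor))
    counts = np.zeros_like(hires)
    for frame, (dy, dx) in zip(lowres_frames, shifts):
        hires[dy::factor, dx::factor] += frame
        counts[dy::factor, dx::factor] += 1
    return hires / np.maximum(counts, 1)

# Toy example: four frames displaced by half a pixel reconstruct a 2x grid.
scene = np.arange(64, dtype=float).reshape(8, 8)       # "true" high-res scene
frames = [scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
shifts = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
print(np.allclose(shift_and_add(frames, shifts, 2), scene))   # True
```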

Multiresolution image sensor

1997

The recent development of the CMOS active pixel sensor (APS) has, for the first time, permitted large-scale integration of supporting circuitry and smart camera functions on the same chip as a high-performance image sensor. This paper reports on the demonstration of a new 128 × 128 CMOS APS with programmable multiresolution readout capability. By placing signal processing circuitry on the imaging focal plane, the image sensor can output data at varying resolutions, which can decrease the computational load of downstream image processing. For instance, software-intensive image pyramid reconstruction can be eliminated. The circuit uses a passive switched-capacitor network to average arbitrarily large neighborhoods of pixels, which can then be read out at any user-defined resolution by configuring a set of digital shift registers. The full-resolution frame rate is 30 Hz, with higher rates for all other image resolutions. The sensor achieved 80 dB of dynamic range while dissipatin...
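
In software terms, the on-chip averaging network behaves roughly like the block averaging sketched below, which is how the sensor can deliver coarser resolutions directly and make an explicit image-pyramid computation unnecessary. This is a hedged software analogy, not the chip's switched-capacitor implementation:

```python
import numpy as np

def block_average(frame, block):
    """Average non-overlapping block x block neighborhoods of pixels."""
    h, w = frame.shape
    assert h % block == 0 and w % block == 0, "sketch assumes an exact tiling"
    return frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

full = np.random.rand(128, 128)        # full-resolution readout
for block in (1, 2, 4, 8):             # user-selected output resolutions
    print(block, block_average(full, block).shape)
```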

Cited by

Steganalysis of JSteg algorithm using hypothesis testing theory

EURASIP Journal on Information Security, 2015

This paper investigates the statistical detection of JSteg steganography. The approach is based on a statistical model of discrete cosine transform (DCT) coefficients that challenges the usual assumption that all coefficients within a subband are independent and identically distributed (i.i.d.). The hidden-information detection problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the likelihood ratio test (LRT) is presented and its performance is established theoretically. The statistical performance of the LRT serves as an upper bound on the detection power. For practical use, where the distribution parameters are unknown, a detector based on estimation of those parameters is designed by exploiting DCT channel selection. The loss of power of the proposed detector compared with the optimal LRT is small, which shows the relevance of the proposed approach.
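
The decision rule at the heart of such a detector is a likelihood ratio test: sum the per-coefficient log-likelihood ratios and compare against a threshold. The toy sketch below uses placeholder Laplacian densities purely to illustrate the mechanics; the paper's detector is built on its own DCT coefficient model and estimated parameters:

```python
import numpy as np

def log_likelihood_ratio(x, pdf_h0, pdf_h1):
    """Sum of per-sample log-likelihood ratios; compare against a threshold."""
    return np.sum(np.log(pdf_h1(x)) - np.log(pdf_h0(x)))

# Toy stand-in: H0 = cover image (one coefficient distribution),
# H1 = stego image (a slightly different distribution). Real JSteg detection
# uses the paper's DCT coefficient model, not these placeholder densities.
rng = np.random.default_rng(0)
scale0, scale1 = 1.0, 1.1
pdf_h0 = lambda x: np.exp(-np.abs(x) / scale0) / (2 * scale0)
pdf_h1 = lambda x: np.exp(-np.abs(x) / scale1) / (2 * scale1)

x = rng.laplace(0, scale1, size=5000)        # observations actually drawn under H1
llr = log_likelihood_ratio(x, pdf_h0, pdf_h1)
print("decide H1" if llr > 0 else "decide H0", round(llr, 1))
```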

Motion microscopy for visualizing and quantifying small motions

Proceedings of the National Academy of Sciences of the United States of America, 2017

Although the human visual system is remarkable at perceiving and interpreting motions, it has limited sensitivity, and we cannot see motions that are smaller than some threshold. Although difficult to visualize, tiny motions below this threshold are important and can reveal physical mechanisms, or be precursors to large motions in the case of mechanical failure. Here, we present a "motion microscope," a computational tool that quantifies tiny motions in videos and then visualizes them by producing a new video in which the motions are made large enough to see. Three scientific visualizations are shown, spanning macroscopic to nanoscopic length scales. They are the resonant vibrations of a bridge demonstrating simultaneous spatial and temporal modal analysis, micrometer vibrations of a metamaterial demonstrating wave propagation through an elastic matrix with embedded resonating units, and nanometer motions of an extracellular tissue found in the inner ear demonstrating a me...
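
The core idea of such a motion microscope can be sketched simply: temporally band-pass filter each pixel's time series around the motion frequency of interest and add an amplified copy back to the video. The linear, intensity-based sketch below is only the basic principle; the cited work uses a more robust phase-based decomposition and also quantifies the motions with error bars:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(video, fps, f_lo, f_hi, alpha):
    """Linear Eulerian magnification of tiny motions.

    video -- array of shape (frames, height, width), grayscale
    fps   -- frame rate of the input video
    f_lo, f_hi -- temporal band (Hz) containing the motion of interest
    alpha -- amplification factor applied to the band-passed signal
    """
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fps)
    bandpassed = filtfilt(b, a, video, axis=0)       # per-pixel temporal filter
    return video + alpha * bandpassed

# Toy input: a tiny 1 Hz brightness oscillation, amplified 50x.
t = np.arange(120) / 30.0                            # 4 s at 30 fps
video = 0.5 + 0.001 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((120, 8, 8))
out = magnify_motion(video, fps=30, f_lo=0.5, f_hi=2.0, alpha=50)
print(out.shape)
```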

Review of Denoising Framework for Efficient Removal of Noise from 3D Images

Computer Networks, Big Data and IoT, 2021

An image is a distribution of color amplitudes on a plane, and it may be two-dimensional or three-dimensional. Such images are captured using optical sensors such as cameras and are processed using various image processing tools for better visualization. The purpose of image processing is not limited to better visualization; it also extends to removing noise from the captured image. Noise is a random variation of brightness, contrast and color in an image. In the present review of denoising frameworks for efficient removal of noise from 3D images, the different filters used so far for noise removal are discussed. The work is further extended by designing a novel denoising framework for efficient removal of noise from 3D images.
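
Two of the classical filters such a review typically covers can be illustrated in a few lines. The sketch below applies generic Gaussian and median filters to a synthetic noisy 3D volume; the filters and parameters are examples only, not the framework proposed in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

# Toy 3D volume (e.g. a stack of 2D slices) corrupted with additive Gaussian noise.
rng = np.random.default_rng(1)
volume = np.zeros((32, 64, 64)); volume[8:24, 16:48, 16:48] = 1.0
noisy = volume + rng.normal(0, 0.2, volume.shape)

denoised_gauss = gaussian_filter(noisy, sigma=1.5)   # linear smoothing filter
denoised_median = median_filter(noisy, size=3)       # edge-preserving rank filter

for name, d in [("gaussian", denoised_gauss), ("median", denoised_median)]:
    print(name, "RMSE:", round(float(np.sqrt(np.mean((d - volume) ** 2))), 3))
```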

They See Me Rollin’: Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors

Annual Computer Security Applications Conference, 2021

Cameras have become a fundamental component of vision-based intelligent systems. As a balance between production costs and image quality, most modern cameras use Complementary Metal-Oxide-Semiconductor (CMOS) image sensors that implement an electronic rolling shutter mechanism, where image rows are captured consecutively rather than all at once. In this paper, we describe how the electronic rolling shutter can be exploited using a bright, modulated light source (e.g., an inexpensive, off-the-shelf laser) to inject fine-grained image disruptions. These disruptions substantially affect camera-based computer vision systems, where high-frequency data is crucial in extracting informative features from objects. We study the fundamental factors affecting a rolling shutter attack, such as environmental conditions, angle of the incident light, laser-to-camera distance, and aiming precision. We demonstrate how these factors affect the intensity of the injected distortion and how an adversary can take them into account by modeling the properties of the camera. We introduce a general pipeline for a practical attack, which consists of: (i) profiling several properties of the target camera and (ii) partially simulating the attack to find distortions that satisfy the adversary's goal. Then, we instantiate the attack in the scenario of object detection, where the adversary's goal is to maximally disrupt the detection of objects in the image. We show that the adversary can modulate the laser to hide up to 75% of objects perceived by state-of-the-art detectors while controlling the amount of perturbation to keep the attack inconspicuous. Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
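
A hedged toy model of why a modulated source produces row-wise stripes under a rolling shutter: each row integrates light over a slightly later time window, so a source pulsing faster than the frame rate is on for some rows and off for others. The parameters below are made up for illustration, and this models only the geometry of the effect, not the paper's attack pipeline:

```python
import numpy as np

def rolling_shutter_frame(rows, cols, row_time_s, exposure_s, laser_hz, duty=0.5):
    """Simulate one frame where each row's exposure starts row_time_s later."""
    frame = np.zeros((rows, cols))
    # Fine time grid used to integrate the modulated source over each exposure.
    dt = 1.0 / (laser_hz * 200)
    for r in range(rows):
        t0 = r * row_time_s
        t = np.arange(t0, t0 + exposure_s, dt)
        laser_on = (t * laser_hz) % 1.0 < duty       # square-wave modulation
        frame[r, :] = laser_on.mean()                # fraction of exposure lit
    return frame

# Example: 480 rows read out over ~15 ms, 0.2 ms exposure, 1 kHz modulated laser
# -> alternating bright and dark bands across the image.
img = rolling_shutter_frame(rows=480, cols=8, row_time_s=31e-6,
                            exposure_s=0.2e-3, laser_hz=1000)
print(img[:80:8, 0].round(2))
```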

DNN-HMM based Automatic Speech Recognition for HRI Scenarios

Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018

In this paper, we propose to replace the classical black-box integration of automatic speech recognition technology in HRI applications with the incorporation of the HRI environment representation and modeling, and of the robot and user states and contexts. Accordingly, this paper focuses on the environment representation and modeling by training a deep neural network-hidden Markov model (DNN-HMM) based automatic speech recognition engine combining clean utterances with the acoustic-channel responses and noise obtained from an HRI testbed built with a PR2 mobile manipulation robot. This method avoids recording a training database in all the possible acoustic environments of a given HRI scenario. Moreover, different speech recognition testing conditions were produced by recording two types of acoustic sources, i.e. a loudspeaker and human speakers, using a Microsoft Kinect mounted on top of the PR2 robot, while performing head rotations and movements towards and away from the fixed sources. In this generic HRI scenario, the resulting automatic speech recognition engine provided a word error rate that is at least 26% and 38% lower than publicly available speech recognition APIs with the playback (i.e. loudspeaker) and human testing databases, respectively, with a limited amount of training data.

Efficiency and spectral performance of narrowband organic and perovskite photodetectors: a cross-sectional review

Journal of Physics: Materials, 2019

The capability of detecting visible and near infrared light within a narrow wavelength range is in high demand for numerous emerging application areas, including wearable electronics, the Internet of Things, computer vision, artificial vision and biosensing. Organic and perovskite semiconductors possess a set of properties that make them particularly suitable for narrowband photodetection. This has led to rising interest in their use towards such functionality, and has driven remarkable progress in recent years. Through a comparative analysis across an extensive body of literature, this review provides an up-to-date assessment of this rapidly growing research area. The transversal approach adopted here focuses on the identification of: (a) the unifying aspects underlying organic and perovskite narrowband photodetection in the visible and in the near infrared range; and (b) the trends relevant to photoconversion efficiency and spectral width in relation to material, device and proces...

Big data aggregation in the case of heterogeneity: a feasibility study for digital health

International Journal of Machine Learning and Cybernetics, 2019

In big data applications, an important factor that may affect the value of the acquired data is missing data, which arises when data is lost either during acquisition or during storage. The former can be a result of faulty acquisition devices or non-responsive sensors, whereas the latter can occur as a result of hardware failures at the storage units. In this paper, we consider human activity recognition as a case study of a typical machine learning application on big datasets. We conduct a comprehensive feasibility study on the fusion of sensory data acquired from heterogeneous sources. We present insights on the aggregation of heterogeneous datasets with minimal missing data values for future use. Our experiments on the accuracy, F1 score, and PPV of various key machine learning algorithms show that sensory data acquired by wearables are less vulnerable to missing data and smaller training sets, whereas smart portable devices require larger training sets to reduce the impact of possibly missing data.

Self-Adaptive Architecture for Multi-Sensor Embedded Vision System

Lecture Notes in Computer Science, 2016

Architectural optimization for heterogeneous multi-sensor processing is a real technological challenge. Most vision systems involve only a single color sensor and do not address the challenge of heterogeneous sensors. However, more and more applications require other types of sensor in addition, such as infrared or low-light sensors, so that the vision system can cope with various lighting conditions. These heterogeneous sensors can differ in spectral band, resolution or even frame rate. Such sensor variety demands substantial computing performance, but embedded systems have stringent area and power constraints. Reconfigurable architectures make flexible computing possible while respecting these constraints. Many reconfigurable architectures for vision applications have been proposed in the past. Yet few of them offer real dynamic adaptation to manage sensor heterogeneity. In this paper, a self-adaptive architecture is proposed to deal with heterogeneous sensors dynamically. This architecture supports on-the-fly sensor switching. The architecture adapts itself by means of a system monitor and an adaptation controller. A stream header concept is used to convey sensor information to the self-adaptive architecture. The proposed architecture was implemented in an Altera Cyclone V FPGA. In this implementation, adaptation of the architecture consists of Dynamic and Partial Reconfiguration of the FPGA. The self-adaptive ability of the architecture has been demonstrated with low resource overhead and an average global adaptation time of 75 ms.
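
To make the stream header concept concrete, the sketch below shows the kind of per-frame metadata such a header might carry and how an adaptation controller could act on it. The field names and the selection logic are hypothetical; the paper only states that a stream header conveys sensor information to the self-adaptive architecture:

```python
from dataclasses import dataclass

@dataclass
class StreamHeader:
    """Illustrative per-frame header prepended to the pixel stream.

    Field names are hypothetical; they stand in for whatever sensor
    information the real stream header carries.
    """
    sensor_id: int        # which physical sensor produced the frame
    spectral_band: str    # e.g. "visible", "infrared", "low-light"
    width: int
    height: int
    bits_per_pixel: int
    frame_rate_hz: float

def select_pipeline(header: StreamHeader) -> str:
    """Toy adaptation decision: pick a processing pipeline from the header."""
    if header.spectral_band == "infrared":
        return "ir_pipeline"
    if header.bits_per_pixel > 10:
        return "hdr_pipeline"
    return "default_pipeline"

print(select_pipeline(StreamHeader(0, "infrared", 640, 480, 14, 30.0)))
```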

Iterative content adaptable purple fringe detection

Signal, Image and Video Processing, 2017

In some cameras, defects in the sensor grid induce fringing artifacts near high-contrast regions. This false coloration is usually purple and is termed a purple fringing aberration (PFA). Since PFA effects find applications in image forensics, it becomes important to find ways to detect these fringes reliably and then use them for further analysis. Much of the literature relies on static gradient and saturation thresholds, selected through progressive experimentation, for localizing these fringes. Given the spectral diversity associated with these fringes over a wide variety of natural images, it becomes increasingly difficult to find a single choice of parameters for detecting the fringe-affected pixels. Our contributions are twofold. First, we propose a content-adaptive, relative threshold-based PFA detection procedure which is self-contained and does not require any form of external training or tuning. In cases where the fringes are mixed with background texture and this mixture exhibits extreme gradient-magnitude profile variations, the proposed baseline approach demands manual tuning. To overcome this problem and to ensure complete automation, an iterative extension of the same baseline algorithm based on region growing is proposed.
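
A hedged sketch of the kind of cues involved: purple fringes tend to appear as blue/red-dominant pixels next to near-saturated, high-gradient regions, and a relative (percentile-based) threshold avoids a single fixed parameter choice. The rule below is illustrative only and is not the detection procedure proposed in the paper:

```python
import numpy as np

def purple_fringe_mask(rgb, grad_pct=90, sat_thresh=0.8):
    """Flag candidate PFA pixels: purple-ish pixels in strong-gradient areas.

    The percentile-based gradient threshold is a nod to the content-adaptive,
    relative-threshold idea; real detectors use more elaborate criteria.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    gy, gx = np.gradient(luma)
    grad = np.hypot(gx, gy)
    high_grad = grad > np.percentile(grad, grad_pct)   # relative threshold
    near_sat = luma > sat_thresh                        # blown-out highlight source
    purple = (b > g) & (r > g)                          # purple-ish coloration
    # Candidate fringe pixels: purple, high-gradient, but not themselves saturated.
    return purple & high_grad & ~near_sat

rgb = np.random.rand(64, 64, 3)
print(purple_fringe_mask(rgb).sum(), "candidate pixels in a random test image")
```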

Signal Injection Attacks against CCD Image Sensors

Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security

Since cameras have become a crucial part of many safety-critical systems and applications, such as autonomous vehicles and surveillance, a large body of academic and non-academic work has shown attacks against their main component, the image sensor. However, these attacks are limited to coarse-grained and often suspicious injections because light is used as an attack vector. Furthermore, due to the nature of optical attacks, they require line-of-sight between the adversary and the target camera. In this paper, we present a novel post-transducer signal injection attack against CCD image sensors, as they are used in professional, scientific, and even military settings. We show how electromagnetic emanation can be used to manipulate the image information captured by a CCD image sensor with granularity down to the brightness of individual pixels. We study the feasibility of our attack and then demonstrate its effects in the scenario of automatic barcode scanning. Our results indicate that the injected distortion can disrupt automated vision-based intelligent systems.

Technological Evolution of Image Sensing Designed by Nanostructured Materials

ACS Materials Letters, 2023

Image sensing holds a remarkable place in modern electronics and optoelectronics, enabling the complementary metal-oxide-semiconductor integration of high-speed optical communications and photodetection with the merits of high-speed operation, cost-effectiveness, and uncomplicated fabrication. The quality of image sensing is governed by noise, sensitivity, power consumption, operating voltage, and imaging speed, which determine whether a device can compete with state-of-the-art image sensors in industry. Many studies have been conducted to address these issues; however, performance has not yet been optimized and solutions are still in the works. In this review, we briefly provide information on recent advances in image sensing using nanostructured emerging materials through nanofabrication integration, including the technology evolution on traditional and modern technology platforms, general mechanisms, classification, and actual applications as well as existing limitations. Finally, new challenges and perspectives on future trends in image sensing and their possible solutions are also discussed.

Powder bed monitoring via digital image analysis in additive manufacturing

Journal of Intelligent Manufacturing, 2023

Due to the nature of the Selective Laser Melting (SLM) process, built parts are prone to defect formation. Powder quality has a significant impact on the final attributes of SLM-manufactured items. From a processing standpoint, it is critical to ensure proper powder distribution and compaction in each layer of the powder bed, which is affected by the particle size distribution, packing density, flowability, and sphericity of the powder particles. Layer-by-layer study of the process can provide a better understanding of the effect of the powder bed on final part quality. Image-based processing techniques can be used to examine the quality of parts fabricated by Selective Laser Melting through layerwise monitoring and to evaluate results obtained with other techniques. In this paper, an unsupervised methodology based on digital image processing using the machine's built-in camera is proposed. Because of the limitations of the optical system in terms of resolution, positioning, lighting and field of view, considerable effort was devoted to calibration and data processing. Its capability to identify possible defects on SLM parts was evaluated against Computed Tomography verification results.

A Review of Energy Hole Mitigating Techniques in Multi-Hop Many to One Communication and its Significance in IoT Oriented Smart City Infrastructure

IEEE Access

A huge increase in the percentage of the world's urban population poses resource management, and especially energy management, challenges in smart cities. In this paper, the growing challenges of energy management in smart cities are explored and the significance of eliminating energy holes in convergecast communication is discussed. The impact of mitigating energy holes on network lifetime and energy efficiency is thoroughly covered. The particular focus of this work is on energy-efficient practices in two major key enablers of smart cities, namely the Internet of Things (IoT) and Wireless Sensor Networks (WSNs). In addition, this paper presents a robust survey of state-of-the-art energy-efficient routing and clustering methods in WSNs. A niche energy-efficiency issue in WSN routing, energy holes, is identified, and a detailed survey and evaluation of techniques that mitigate the formation of energy holes and achieve balanced energy-efficient routing is covered. Index Terms: balanced load routing, energy holes, energy management, Internet of Things (IoT), many-to-one communication, multi-hop communication, smart cities (SC), wireless sensor networks (WSN).

A 316MP, 120FPS, High Dynamic Range CMOS Image Sensor for Next Generation Immersive Displays

Sensors

We present a 2D-stitched, 316MP, 120FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm and the chip is fabricated in a 65 nm, 4-metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has high and low conversion gain full wells of 6600 e- and 41,000 e-, respectively, with a total high-gain temporal noise of 1.8 e-, achieving a composite dynamic range of 87 dB.
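
As a consistency check on the quoted figures, the composite dynamic range of a dual-gain pixel is commonly taken as the ratio of the low-gain full well to the high-gain read noise; assuming that definition:

```python
import math

low_gain_full_well_e = 41_000   # e-, low conversion gain path
high_gain_noise_e = 1.8         # e- rms, high conversion gain path

dr_db = 20 * math.log10(low_gain_full_well_e / high_gain_noise_e)
print(f"{dr_db:.1f} dB")        # ~87.1 dB, matching the reported 87 dB
```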