
Camera Software Engineer / Student Researcher / Research Intern: Worked with the Pixel team on color correction of Pixel phone cameras. Developed a camera-independent spatially varying AWB correction framework, a chromaticity mapping method for cross-camera transfer without traditional color-chart calibration, and a dual-exposure feature (DEF) for HDR-based illuminant estimation. As a side project, I collaborated with the Gemini team to explore zero-shot image classification using multimodal large language models (WDYS). As a student researcher, I developed a cross-camera AWB method (C5) with the Gcam team.

Machine Learning / Camera Algorithms Engineer: Worked with the Camera ISP Algorithm team on color correction for iPhone cameras. Contributed to spatially varying AWB correction of iPhone cameras and conducted proof-of-concept designs for spectral light sensors.

Computer Vision R&D Engineer / Consultant: Developed an ML algorithm for skin color correction and consulted on hairstyle editing and hair color matching used in the LUXY HAIR virtual demo.

Consultant: Worked on image harmonization.

Research Engineer / Consultant: Developed the color correction module in NUDEMETER and consulted on skin tone analysis.

I am interested in low-level computer vision and computational photography, with a focus on color processing, editing, and photographic quality enhancement. Below are selected examples of my work. For a full list of publications, please click here.

Mahmoud Afifi, Zhongling Wang, Ran Zhang, and Michael S. Brown

arXiv, 2025

AI Center-Toronto, Samsung Electronics

Unlike most neural ISPs that treat the entire imaging pipeline as a single black-box network, we decompose the process into standard, interpretable, modular learning-based components. This design requires no manual tuning and enables easier debugging, seamless scaling, cross-camera generalization, and fine-grained customization. It also gives users complete control over every stage of the pipeline and supports unlimited post-editable re-rendering by storing compact raw data within the final image. In addition, it enables learning diverse picture styles with minimal memory overhead and integrates naturally with external image-processing functions.

arXiv | Code & Executables | GPU-Accelerated Bilateral Solver | Video

Mahmoud Afifi, Ran Zhang, and Michael S. Brown

arXiv, 2025

AI Center-Toronto, Samsung Electronics

A lightweight, learnable pre-processing pipeline that adapts camera raw images before standard JPEG compression using spatial-domain and optionally frequency-domain transforms. The operations are fully invertible, with parameters fitting within the JPEG file’s comment (COM) segment (<64 KB), enabling accurate reconstruction of the original raw image after JPEG decoding. This yields high-fidelity raw-to-JPEG storage with significant size reduction.
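The COM-segment trick above relies on a property of the JPEG container itself: a comment segment's 2-byte length field (which includes itself) caps the payload at 65,533 bytes, and decoders ignore COM segments entirely. The sketch below is an illustrative helper for writing and reading such a segment, not the paper's implementation; the function names are my own.

```python
# Hypothetical sketch: stashing compact reconstruction parameters in a
# JPEG comment (COM, marker 0xFFFE) segment. The 2-byte length field
# includes itself, so the payload is capped at 65533 bytes (< 64 KB).

import struct

def embed_com_segment(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Insert a COM segment right after the SOI marker (0xFFD8)."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG stream"
    assert len(payload) <= 65533, "COM payload must fit in one segment"
    segment = b"\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

def read_com_segments(jpeg_bytes: bytes) -> list:
    """Collect the payloads of all COM segments before the scan data."""
    payloads, i = [], 2
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == b"\xff\xfe":
            payloads.append(jpeg_bytes[i + 4:i + 2 + length])
        if marker == b"\xff\xda":  # start of scan: entropy-coded data follows
            break
        i += 2 + length
    return payloads
```

Because standard decoders skip COM segments, the file remains a valid, viewable JPEG while carrying the inversion parameters.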

arXiv | Code

Mahmoud Afifi*, Luxi Zhao*, Abhijith Punnappurath, Mohamed A. Abdelsalam, Ran Zhang, and Michael S. Brown

ICCV 2025

24% acceptance rate

AI Center-Toronto, Samsung Electronics

Timestamp and geolocation, combined with capture metadata, provide strong cues for estimating scene illuminants in smartphone camera white balancing. Our method leverages this data, along with color information, using a lightweight learnable model (~5K parameters) that runs efficiently on a flagship mobile DSP (0.25 ms) and CPU (0.80 ms), achieving high accuracy. We also introduce a large dataset (~3.2K raw images) from the S24 Ultra, containing ground-truth illuminants (neutral and user-preference-based) and capture metadata.
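To give a sense of how small a ~5K-parameter model is, here is a minimal sketch of a two-hidden-layer MLP at that scale. The input encoding, layer sizes, and weights are my own illustrative placeholders, not the paper's architecture; only the parameter budget and the metadata-plus-color input idea come from the summary above.

```python
# Illustrative sketch (not the paper's model): a ~5K-parameter MLP mapping
# capture metadata plus simple color statistics to an illuminant estimate.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-dim input: encoded timestamp (sin/cos of hour and
# day-of-year), normalized latitude/longitude, exposure settings, and a
# few simple color statistics from the raw image.
IN, H1, H2, OUT = 16, 64, 64, 2
W1, b1 = rng.normal(0, 0.1, (IN, H1)), np.zeros(H1)
W2, b2 = rng.normal(0, 0.1, (H1, H2)), np.zeros(H2)
W3, b3 = rng.normal(0, 0.1, (H2, OUT)), np.zeros(OUT)

def n_params():
    return sum(a.size for a in (W1, b1, W2, b2, W3, b3))

def estimate_illuminant(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layers
    h = np.maximum(h @ W2 + b2, 0.0)
    return h @ W3 + b3                 # (r/g, b/g) illuminant chromaticity
```

At 5,378 parameters, a network like this involves only a few thousand multiply-accumulates per image, which is consistent with sub-millisecond inference on mobile hardware.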

* Equal contribution

Paper | Supplementary Materials | arXiv | Code | Data | Patent Application

SaiKiran Tedla*, Junyong Lee*, Beixuan Yang, Mahmoud Afifi, and Michael S. Brown

ICCV 2025 (Highlight)

24% acceptance rate

AI Center-Toronto, Samsung Electronics in collaboration with York University

A method for multispectral (MS) image demosaicing designed for dual-camera setups that leverages co-captured high-fidelity RGB images to guide the reconstruction of lower-fidelity MS images. We also provide a large dataset of paired RGB and MS mosaiced images with ground-truth demosaiced outputs.

* Equal contribution

Paper | Supplementary Materials | arXiv | Code | Data | Patent Application

Dongyoung Kim, Mahmoud Afifi, Dongyun Kim, Michael S. Brown, and Seon Joo Kim

ICCV 2025

24% acceptance rate

AI Center-Toronto, Samsung Electronics in collaboration with Yonsei University

By leveraging pre-calibrated color correction matrices (CCMs) existing in camera ISPs, we generate a compact camera fingerprint embedding to adapt our method to new cameras. Our method achieves state-of-the-art performance in color constancy across diverse cameras, while remaining lightweight.
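As a rough sketch of the fingerprint idea, a pre-calibrated 3x3 CCM can be flattened, normalized, and projected into a small embedding that conditions the model on the capturing camera. The projection weights and embedding size below are arbitrary placeholders, not the trained model.

```python
# Hedged sketch: a compact camera "fingerprint" embedding derived from a
# pre-calibrated 3x3 color correction matrix (CCM). The projection matrix
# here is a random stand-in for a learned one.

import numpy as np

rng = np.random.default_rng(42)
EMBED_DIM = 8
W = rng.normal(0, 0.1, (9, EMBED_DIM))  # hypothetical learned projection

def camera_fingerprint(ccm: np.ndarray) -> np.ndarray:
    """Map a 3x3 CCM to a fixed-size embedding vector."""
    assert ccm.shape == (3, 3)
    v = ccm.flatten() / np.linalg.norm(ccm)  # scale-normalize the CCM
    return v @ W

ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
emb = camera_fingerprint(ccm)
```

The appeal of the approach is that CCMs already exist in every camera ISP, so adapting to a new camera requires no new data collection.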

Paper | Supplementary Materials | arXiv | Project Page | Code | Patent Application

Abdelrahman Abdelhamed*, Mahmoud Afifi*, and Alec Go

arXiv, 2024

Google

With some prompt engineering, multimodal LLMs (e.g., Gemini) can perform zero-shot image classification, but they do not consistently produce labels that match the target dataset's vocabulary. Our approach leverages multimodal LLMs and cross-modal embedding encoders to produce an initial class-prediction feature and an image-description feature alongside the image feature, improving zero-shot classification accuracy without the need for dataset-specific prompts. Our method outperforms prior methods across various datasets, achieving a 6.8% increase in accuracy on ImageNet.
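The fusion step can be illustrated with toy vectors: the embeddings below are synthetic stand-ins for real cross-modal encoder outputs (e.g., CLIP's), and the simple averaging scheme is my own simplification of the method, shown only to make the scoring concrete.

```python
# Toy sketch of the fusion idea: combine the image embedding with
# embeddings of the LLM's initial prediction and generated description,
# then score the result against the target dataset's label embeddings.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def classify(image_emb, pred_emb, desc_emb, label_embs):
    # Average the three cues into a single query embedding.
    query = normalize(image_emb + pred_emb + desc_emb)
    scores = normalize(label_embs) @ query  # cosine similarities
    return int(np.argmax(scores))

# Synthetic 4-dim embeddings for three classes; class 1 should win.
labels = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])
img  = np.array([0.1, 0.9, 0.0, 0.1])
pred = np.array([0.0, 0.8, 0.2, 0.0])
desc = np.array([0.2, 0.7, 0.1, 0.0])
print(classify(img, pred, desc, labels))  # 1
```

Scoring against the dataset's own label embeddings is what anchors the LLM's free-form output to the target vocabulary.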

* Equal contribution

arXiv | Code & Data | Patent Application | Google Research Post

Mahmoud Afifi, Zhenhua Hu, and Liang Liang

ECCV 2024

27.9% acceptance rate

Google

Utilizing the chromatic distortion present between long- and short-exposure frames of HDR photography, we introduce a compact guiding feature for illuminant estimators. Using just ~300 learnable parameters to process this feature, our method matches or surpasses previous methods that rely on thousands or even millions of parameters.
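The core intuition can be sketched as follows: compare per-pixel chromaticities across the two exposures and summarize the differences in a small histogram. This is a simplified stand-in for the paper's feature, with arbitrary bin counts, written only to illustrate the idea of a compact exposure-difference descriptor.

```python
# Hedged sketch: long- and short-exposure frames of the same scene differ
# in chromaticity under nonlinear rendering; a compact summary of that
# difference can guide an illuminant estimator.

import numpy as np

def chroma(img):
    """Per-pixel (r, b) chromaticity, normalized by the channel sum."""
    s = img.sum(axis=-1, keepdims=True) + 1e-6
    c = img / s
    return c[..., [0, 2]]

def dual_exposure_feature(long_exp, short_exp, bins=8):
    """Histogram of chromaticity differences as a compact guiding feature."""
    diff = chroma(long_exp) - chroma(short_exp)
    hist, _ = np.histogramdd(diff.reshape(-1, 2),
                             bins=bins, range=[(-1, 1), (-1, 1)])
    return (hist / hist.sum()).flatten()  # bins * bins values

rng = np.random.default_rng(0)
long_exp = rng.random((16, 16, 3))
short_exp = np.clip(long_exp * 0.25 + rng.normal(0, 0.01, (16, 16, 3)), 0, 1)
feat = dual_exposure_feature(long_exp, short_exp)
```

A fixed-size descriptor like this, independent of image resolution, is what makes a few-hundred-parameter estimator feasible.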

Paper | Supplementary Materials | arXiv | Poster | Patent Application

Georgy Perevozchikov, Nancy Mehta, Mahmoud Afifi, and Radu Timofte

ECCV 2024

27.9% acceptance rate

In collaboration with the University of Würzburg

Rawformer, an unsupervised Transformer-based encoder-decoder model for raw-to-raw mapping, enables a learnable camera ISP trained on one camera's raw images to render raw images captured by new cameras with different characteristics.

Paper | Supplementary Materials | arXiv | Code

Mahmoud Afifi, Marcus A. Brubaker, and Michael S. Brown

CVPR 2021

23.4% acceptance rate

York University

HistoGAN is the first work to control the colors of GAN-generated images using features derived directly from color histograms. Our method learns to transfer the color information encapsulated in histogram features to the colors of GAN-generated images (HistoGAN) or real input images (ReHistoGAN). Because color histograms provide an abstract representation of image color that is decoupled from spatial information, HistoGAN and ReHistoGAN are less restrictive and applicable across arbitrary domains.
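A minimal example of such a spatially agnostic color descriptor is a 2D histogram over log-chroma coordinates u = log(R/G), v = log(B/G). This is a simplified, non-differentiable sketch in the spirit of HistoGAN's RGB-uv histogram feature, not the paper's exact formulation.

```python
# Minimal sketch of a color descriptor decoupled from spatial layout:
# a 2D histogram over log-chroma coordinates u = log(R/G), v = log(B/G).

import numpy as np

def rgb_uv_histogram(img, bins=16, lim=3.0):
    eps = 1e-6
    r, g, b = img[..., 0] + eps, img[..., 1] + eps, img[..., 2] + eps
    u, v = np.log(r / g).ravel(), np.log(b / g).ravel()
    hist, _, _ = np.histogram2d(u, v, bins=bins,
                                range=[(-lim, lim), (-lim, lim)])
    return hist / hist.sum()  # normalized: spatial arrangement is discarded

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
h = rgb_uv_histogram(img)
```

Because two images with identical color distributions but different layouts yield the same histogram, a loss on such features constrains color without constraining content.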

Paper | Supplementary Materials | arXiv | Code & Dataset | Colab (histogram loss) | Poster | Presentation | Talk

Mahmoud Afifi and Abdullah Abuolaim

BMVC 2021

York University

A semi-supervised method for mapping between two different camera-raw spaces. Training requires an unpaired set of images alongside a very small set of paired images captured by the two camera models.

arXiv | Dataset | Presentation

Mahmoud Afifi and Michael S. Brown

CIC 2020 (Oral Presentation)

York University

A simple method that links the nonlinear white-balance correction functions introduced in our CVPR'19 work to user-selected colors, enabling interactive white-balance manipulation.

arXiv | Code | Presentation

Atima Lui, Nyalia Lui, Mahmoud Afifi, and Ariadne Bazigos

US Patent 2020

My Nudest Inc

A system for analyzing user input, combining a user's images and query responses to provide tailored color outputs. Through color correction and comparison against predetermined color identifiers, it delivers accurate results and product recommendations.

Patent | PDF

Mahmoud Afifi and Michael S. Brown

BMVC 2019 (Oral Presentation)

28% acceptance rate | 5% oral presentation acceptance rate

York University

Learning a new canonical space in an unsupervised manner allows us to train a single deep model on multiple camera sensors and perform accurate illuminant estimation at inference time for images captured by new, unseen camera sensors.

Paper | Project Page | Supplementary Materials | arXiv | Code | Presentation | Talk | Patent

Mahmoud Afifi and Abdelrahman Abdelhamed

JVCI 2019

Impact factor: 2.6 (2023)

Assiut University

Gender classification can be improved by combining different facial features; a user study validates our findings.

arXiv | Dataset

Mahmoud Afifi

MTA 2019

Impact factor: 3.0 (2023)

Assiut University

Hand images can be used for gender recognition and biometric identification; a large dataset of hand images enables us to train our two-stream deep model.

arXiv | Project Page & Dataset | Code

Dissertation Award

Best Paper Award

Outstanding Reviewer

Challenges

Others