Ben Poole
Research
I'm interested in machine learning, computational neuroscience, information theory, computer vision, optimization, and cycling. My current focus is on generative models that enable new creative applications.
ReconFusion: 3D Reconstruction with Diffusion Priors Rundi Wu*, Ben Mildenhall*, Keunhong Park, Philipp Henzler, Ruiqi Gao, Daniel Watson, Dor Verbin, Pratul Srinivasan, Jonathan T. Barron, Ben Poole, Aleksander Holynski* Preprint 2023 project page / arXiv 3D reconstruction of real-world scenes from only a few photos.
Diffusion Self-Guidance for Controllable Image Generation Dave Epstein, Allan Jabri, Ben Poole, Alexei A. Efros, Aleksander Holynski NeurIPS 2023 project page / arXiv Self-guidance is a method for controllable image generation that guides sampling using only the attention and activations of a pretrained diffusion model.
DreamBooth3D: Subject-Driven Text-to-3D Generation Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan T. Barron, Yuanzhen Li, Varun Jampani ICCV 2023 project page / arXiv Combining DreamBooth (personalized text-to-image) and DreamFusion (text-to-3D) yields high-quality, subject-specific 3D assets with text-driven modifications.
Learning a Diffusion Prior for NeRFs Guandao Yang, Abhijit Kundu, Leonidas Guibas, Jonathan Barron, Ben Poole ICLR Neural Fields Workshop 2023 Learn a regularized set of NeRFs in parallel, then learn a 3D diffusion model that can generate new NeRFs.
DreamFusion: Text-to-3D using 2D Diffusion Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall ICLR 2023 (Outstanding Paper Award) project page / arXiv / gallery We optimize a NeRF from scratch using a pretrained text-to-image diffusion model to do text-to-3D generative modeling.
Imagen Video: High Definition Video Generation with Diffusion Models Jonathan Ho*, William Chan*, Chitwan Saharia*, Jay Whang*, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, Tim Salimans* Preprint 2022 arXiv / project page A cascade of video diffusion models that generates high-definition videos from text prompts.
Journey to the BAOAB-limit: finding effective MCMC samplers for score-based models Ajay Jain*, Ben Poole* NeurIPS 2022 Score-Based Models Workshop project page / paper Sometimes bugs are effective MCMC samplers for score-based models.
Zero-Shot Text-Guided Object Generation with Dream Fields Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole CVPR 2022 project page / arXiv / video Supervising the CLIP embeddings of NeRF renderings lets you generate 3D objects from text prompts.
Autoregressive Diffusion Models Emiel Hoogeboom, Alexey Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans ICLR 2022 arXiv A new model class for discrete variables encompassing order-agnostic autoregressive models and absorbing discrete diffusion.
VeLO: Training Versatile Learned Optimizers by Scaling Up Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agarwal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Sohl-Dickstein Preprint 2022 arXiv / code A general purpose learned optimizer. |
Variational Diffusion Models Diederik P. Kingma*, Tim Salimans*, Ben Poole, Jonathan Ho NeurIPS 2021 arXiv / code SOTA likelihood using diffusion models with a learnable noise schedule.
Score-Based Generative Modeling through Stochastic Differential Equations Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole ICLR 2021 (Outstanding Paper Award) arXiv / code A general framework for training and sampling from score-based models enabling likelihood computation and controllable generation.
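As a toy illustration of sampling with a score function, here is a minimal unadjusted Langevin-dynamics sketch in NumPy, with an analytic Gaussian score standing in for a learned score network (this is not the paper's reverse-SDE or predictor-corrector sampler):

```python
import numpy as np

def langevin_sample(score, x0, step=0.01, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: x <- x + step * score(x) + sqrt(2*step) * z.
    For small step sizes, iterates approach samples from the density whose
    score (gradient of the log density) is given."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Analytic score of N(mu, sigma^2), standing in for a learned score network.
mu, sigma = 3.0, 0.5
score = lambda x: -(x - mu) / sigma**2
samples = langevin_sample(score, x0=np.zeros(5000))
# the mean and standard deviation of `samples` approach mu and sigma
```

The paper's samplers replace the analytic score with a time-dependent learned score and integrate a reverse-time SDE.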
Learning Energy-Based Models by Diffusion Recovery Likelihood Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, Diederik P. Kingma ICLR 2021 arXiv Tractably learn and sample from a sequence of EBMs based on a diffusion process. High sample quality and stable long-run MCMC chains.
What Makes for Good Views for Contrastive Learning? Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola NeurIPS 2020 arXiv / project page
Weakly-Supervised Disentanglement Without Compromises Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen ICML 2020 arXiv With a causality-inspired twist, disentangled representations are identifiable in theory and practice.
On Implicit Regularization in β-VAEs Abhishek Kumar, Ben Poole ICML 2020 arXiv Heuristics in VAEs can lead to uniqueness and beneficial regularization.
VIB is Half Bayes Alex Alemi, Warren Morningstar, Ben Poole, Ian Fischer, Josh Dillon AABI Symposium 2020 arXiv The Variational Information Bottleneck can be rederived as half-Bayesian.
On variational bounds of mutual information Ben Poole, Sherjil Ozair, Aäron van den Oord, Alex Alemi, George Tucker ICML 2019 arXiv / colab / video / slides / poster Old, new, and improved estimators of mutual information with neural nets.
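One estimator in this family, the InfoNCE lower bound, fits in a few lines of NumPy (a minimal sketch; the function and variable names are mine, not the paper's):

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information from a [K, K] matrix of critic
    scores, where scores[i, j] = f(x_i, y_j) and the diagonal holds the
    positive (jointly sampled) pairs. The bound saturates at log K."""
    K = scores.shape[0]
    m = scores.max(axis=1, keepdims=True)                  # stable log-sum-exp
    row_lse = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    return float(np.mean(np.diag(scores) - row_lse) + np.log(K))

K = 64
flat = infonce_lower_bound(np.zeros((K, K)))   # uninformative critic: bound is 0
sharp = infonce_lower_bound(10.0 * np.eye(K))  # sharp critic: bound near log K
```

The log K ceiling is exactly the bias the paper analyzes when comparing this estimator to higher-variance, less-biased alternatives.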
Discrete Flows: Invertible Generative Models of Discrete Data Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole ICLR Deep GenStruct Workshop 2019 arXiv Fast sampling generative models for discrete data. |
Preventing posterior collapse with delta-VAEs Ali Razavi, Aäron van den Oord, Ben Poole, Oriol Vinyals ICLR 2019 OpenReview / arXiv / poster Avoid posterior collapse by lower bounding the rate. |
Neuronal Dynamics Regulating Brain and Behavioral State Transitions Aaron Andalman, Vanessa Burns, Matthew Lovett-Barron, Michael Broxton, Ben Poole, Samuel Yang, Logan Grosenick, Talia Lerner, Ritchie Chen, Tyler Benster, Philippe Mourrain, Marc Levoy, Kanaka Rajan, Karl Deisseroth Cell 2019
Fixing a Broken ELBO Alex Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy ICML, 2018 arXiv Understanding tradeoffs in VAE models through the lens of information theory. |
Continuous relaxation training of discrete latent-variable image models Casper Kaae Sønderby*, Ben Poole*, Andriy Mnih NIPS Bayesian Deep Learning Workshop, 2017 PDF Continuous relaxation training of discrete latent-variable models can flexibly capture both continuous and discrete aspects of natural data. |
Identification Of Cellular-Activity Dynamics Across Large Tissue Volumes In The Mammalian Brain Logan Grosenick*, Michael Broxton*, Christina Kim*, Conor Liston*, Ben Poole, Samuel Yang, Aaron Andalman, Edward Scharff, Noy Cohen, Ofer Yizhar, Charu Ramakrishnan, Surya Ganguli, Patrick Suppes, Marc Levoy, Karl Deisseroth *equal contribution bioRxiv Large-scale cellular-level imaging in the mammalian brain using lightfield microscopy. 1×1×0.5 mm³ at 100 Hz.
Continual Learning through Synaptic Intelligence Friedemann Zenke*, Ben Poole*, Surya Ganguli *equal contribution ICML, 2017 arXiv / code Learns to solve tasks sequentially without forgetting by learning which weights are important. |
On the expressive power of deep neural networks Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein ICML, 2017 arXiv Random neural networks show exponential growth in activation patterns and more. |
Time-warped PCA: simultaneous alignment and dimensionality reduction of neural data Ben Poole, Alex Williams, Niru Maheswaranathan, Byron Yu, Gopal Santhanam, Stephen Ryu, Stephen Baccus, Krishna Shenoy, Surya Ganguli Computational Systems Neuroscience (COSYNE), 2017 abstract / poster / code Extends dimensionality reduction techniques to account for trial-to-trial variability in timing. |
Categorical Reparameterization with Gumbel-Softmax Eric Jang, Shane Gu, Ben Poole ICLR, 2017 arXiv / blog post Efficient gradient estimator for categorical variables. |
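The sampling step of this estimator can be sketched in NumPy (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable, approximately one-hot sample from a categorical
    distribution via the Gumbel-Softmax relaxation. Lower tau gives samples
    closer to one-hot, at the cost of higher-variance gradients."""
    if rng is None:
        rng = np.random.default_rng()
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    gumbels = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbels) / tau
    y = np.exp(y - y.max())           # numerically stable softmax
    return y / y.sum()

sample = gumbel_softmax(np.array([1.0, 2.0, 3.0]), tau=0.5)
# `sample` lies on the probability simplex; annealing tau toward 0
# recovers hard categorical samples
```

In a deep model the same relaxation is applied per example so gradients flow through the softmax instead of the discrete draw.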
Unrolled Generative Adversarial Networks Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein ICLR, 2017 arXiv / code Stabilize GANs by defining the generator objective with respect to an unrolled optimization of the discriminator. |
Adversarially learned inference Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville ICLR, 2017 arXiv / project page Jointly learn a generative model and an inference network through an adversarial process. |
Exponential expressivity in deep neural networks through transient chaos Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli Neural Information Processing Systems (NIPS), 2016 arXiv / code / poster Random neural networks have curvature that grows exponentially with depth.
Improved generator objectives for GANs Ben Poole, Alex Alemi, Jascha Sohl-Dickstein, Anelia Angelova NIPS Workshop on Adversarial Training, 2016 arXiv / poster A new take on the Generative Adversarial Network training procedure. |
Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression Jonathan Leong*, Jennifer Esch*, Ben Poole*, Surya Ganguli, Thomas Clandinin * equal contribution The Journal of Neuroscience, 2016 Fruit flies detect motion using a very similar algorithm to humans. |
The Fast Bilateral Solver Jonathan T. Barron, Ben Poole European Conference on Computer Vision (ECCV), 2016 (oral presentation) arXiv / supplement / code / bibtex Fast and accurate edge-aware smoothing. Differentiable for all your deep learning needs.
Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli International Conference on Machine Learning (ICML), 2014 arXiv / code Speed up quasi-Newton methods by maintaining a low-dimensional approximation of the Hessian for each minibatch.
Analyzing noise in autoencoders and deep networks Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli NIPS Workshop on Deep Learning, 2013 arXiv Derives analytic regularizers for different forms of noise injection, and shows how alternative types of additive noise can improve over dropout.
Brain Regions Engaged by Part- and Whole-task Performance in a Video Game: A Model-based Test of the Decomposition Hypothesis John Anderson, Daniel Bothell, Jon Fincham, Abraham Anderson, Ben Poole, Yulin Qin Journal of Cognitive Neuroscience, 2011 Complex tasks, like the Space Fortress video game, can be decomposed into a set of independent reusable components. |
Robust non-rigid alignment of volumetric calcium imaging data Ben Poole, Logan Grosenick, Michael Broxton, Karl Deisseroth, Surya Ganguli Computational Systems Neuroscience (COSYNE), 2015 poster Correct for translations and rotations of noisy volumetric data without a clean reference volume. |
Connecting scene statistics to probabilistic population codes and tuning properties of V1 neurons Ben Poole, Ian Lenz, Grace Lindsay, Jason Samonds, Tai Sing Lee Society for Neuroscience (SFN), 2010 (oral presentation)