David Fouhey
Quick info: CV | Google Scholar
Email: david.fouhey@nyu.edu (Please read about Admissions or RA/TA/GA positions)
Physical Locations: You can find me at either 60 Fifth Avenue, New York, NY 10011 or 370 Jay Street, Brooklyn, NY 11201.
Name FAQ: `foe'-`eee'. It rhymes with snowy or Joey: the key is to forget how it is spelled. It (but not me) is from County Cork, Ireland.
Photos: One picture is hard to identify a person with. Here are some more (but dated).
**Summary:** I am an Assistant Professor at NYU, jointly appointed between Computer Science in the Courant Institute of Mathematical Sciences and Electrical and Computer Engineering in the Tandon School of Engineering. From 2019 to 2023, I was an Assistant Professor in CSE at the University of Michigan. Before that, I was a postdoctoral fellow at UC Berkeley (with Alyosha Efros and Jitendra Malik). Before that, I received a Ph.D. in robotics from CMU (with Abhinav Gupta and Martial Hebert).
I work on learning-based computer vision, with a particular focus on systems that reliably estimate physical properties and dynamics from images. This has led to three interrelated interests:
- Measurements for the Sciences: An immensely promising area for computer vision is providing new sensors and capabilities to other disciplines. Successful work here requires long-term partnerships and collaborations. I have worked in solar physics and evolutionary ecology. Solar missions that I've provided expertise to include SDO and MUSE.
- 3D from Pictures: For the past 15+ years, I've built systems that produce 3D from images, with a focus on how to ideally use multiple images and how to best incorporate our knowledge of the problem.
- Interaction: Images depict a world of possibility in which one can act. I am interested in developing systems that reliably understand how humans interact with objects, including systems for finding hands and reconstructing them.
Student Collaborators
Postdoc:
- Dr. Jacob Berv, Schmidt AI in Science Postdoc (Co-advised with Brian Weeks)
Current PhD student collaborators:
- Dandan Shan, Sep 2020 — present, previously ECE MS Student with me, Rackham International Student Fellowship winner
- Chris Rockwell, Sep 2020 — present, (co-advised with Justin Johnson), previously CSE MS Student with me
- Sarah Jabbour, Sep 2020 — present, (co-advised with Jenna Wiens)
- Linyi Jin, May 2021 — present, previously Robotics MS student with me
- Ruoyu Wang, Sept 2023 — present, previously Undergrad (CS+Physics) with me
- Denis Akola, July 2024 — present
- Yidan Gao, Sept 2024 — present (co-advised with Daniele Panozzo)
- Joseph Tung, Sept 2024 — present
- Samuel Pérez-Díaz, Starting January 2025
PhD Graduates:
- Dr. Shengyi Qian, July 2024. Next position: Research Scientist, Meta Fundamental AI Research (FAIR)
- Dr. Nilesh Kulkarni, August 2024. Co-advised with Justin Johnson. Next position: Research Scientist, Netflix
- Dr. Richard Higgins, defended January 2025.
I am also proud of my past student UG/MS collaborators who are off doing great things elsewhere! For a complete list, please see my CV.
Teaching
NYU:
- Spring 2025: Computer Vision for Science & Engineering
- Fall 2024: Selected Topics in Signal Processing: Advanced Topics in Computer Vision
- Spring 2024: Computer Vision for Science & Engineering
University of Michigan: while I no longer teach at UM, I am keeping an archived version of the final form of each course in case they are useful for others.
- EECS 542 (Advanced Topics in Computer Vision — i.e., let's read papers and discuss where vision is heading):
- Fall 2021 (53 students), Fall 2020 (50 students)
- EECS 442 (Computer Vision — i.e., let's learn about computer vision):
- Winter 2023 (280 students), Winter 2022 (300 students), Winter 2021 (with Justin Johnson) (329 students), Fall 2019 (155 students), Winter 2019 (151 students)
- EECS 598 (Special Topics: The Ecological Approach to Visual Perception — i.e., let's talk about how embodiment and vision relate):
- Winter 2020 (31 students)
- AI4All (a two-week residential program for high schoolers):
- Summers 2022, 2021, 2020, 2019
- Intro to CSE Grad Research (our first-year cohort-building and grad-school skills seminar):
- Fall 2020, 2021, 2022
Selected Publications
For a full publication list, please see my Google Scholar Profile.
2024

Sarah Jabbour, Gregory Kondas, Ela Kazerooni, Michael Sjoding, David Fouhey, Jenna Wiens. _DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks_. ECCV 2024. If you're careful about it, you can use diffusion models to perform classic permutation importance testing (as in random forests), but on image classifiers.

Ruoyu Wang, David Fouhey, Richard Higgins, Spiro K. Antiochos, Graham Barnes, J. Todd Hoeksema, K.D. Leka, Yang Liu, Peter W. Schuck, Tamas I. Gombosi. _SuperSynthIA: Physics-Ready Full-Disk Vector Magnetograms from HMI, Hinode, and Machine Learning_. To appear in The Astrophysical Journal, 2024. [Project Page] [Code] [Paper] We produce solar magnetograms that combine the best features of multiple instruments, yielding data products that can be used immediately in downstream systems (e.g., surface flux transport or solar wind forecasting).

Chris Rockwell, Nilesh Kulkarni, Linyi Jin, JJ Park, Justin Johnson, David Fouhey. _FAR: Flexible, Accurate and Robust 6DoF Relative Camera Pose Estimation_. CVPR 2024. [Project Page] [Code] By combining deep learning with classic geometry in the right way, you can get a method that's as accurate as geometry but as robust as learning-based methods.

Linyi Jin, Nilesh Kulkarni, David Fouhey. _3DFIRES: Few Image 3D REconstruction for Scenes with Hidden Surface_. CVPR 2024. [Project Page] [Arxiv (PDF)] You can train a model that takes one or a few posed images as input and produces scenes, including hidden surfaces.

Georgios Pavlakos, Dandan Shan, Ilija Radosavovic, Angjoo Kanazawa, David Fouhey, Jitendra Malik. _Reconstructing Hands in 3D with Transformers_. CVPR 2024. [Project Page] Scaling models and data up works really well for reconstructing hands in 3D.
2023

Sarah Jabbour, David Fouhey, Stephanie Shepard, Thomas S. Valley, Ella A. Kazerooni, Nikola Banovic, Jenna Wiens, Michael W. Sjoding. _Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study_. JAMA 330(23). Showing explanations doesn't help clinicians recover from being negatively influenced by systematically biased models.

Tianyi Cheng*, Dandan Shan*, Ayda Sultan, Richard Higgins, David Fouhey. _Towards A Richer 2D Understanding of Hands at Scale_. NeurIPS 2023. [Project Page] [Code] [Data] A new dataset, tasks, and model for understanding more complex hand interactions, including bimanual manipulation and tool use.

Vadim Tschernezki*, Ahmad Darkhalil*, Zhifan Zhu*, David Fouhey, Iro Laina, Diane Larlus, Dima Damen, Andrea Vedaldi. _EPIC Fields: Marrying 3D Geometry and Video Understanding_. NeurIPS Datasets & Benchmarks 2023. [Project Page] Accurate camera poses for EPIC Kitchens lead to a number of exciting new challenges.

Shengyi Qian, David Fouhey. _Understanding 3D Object Interaction from a Single Image_. ICCV 2023. [Project Page] We use human judgments to get a first-pass understanding of potential interaction in 3D from a single image.

Richard Higgins, David Fouhey. _MOVES: Moving Objects in Video Enable Segmentation_. CVPR 2023. [Project Page] Disagreement with a really simple background model provides surprisingly effective pseudolabel cues for grouping and hand-object association.

Nilesh Kulkarni, Linyi Jin, Justin Johnson, David Fouhey. _Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data_. CVPR 2023. [Project Page] We can learn to predict implicit-function-based 3D from posed RGBD.

Linyi Jin, Jianming Zhang, Yannick Hold-Geoffroy, Oliver Wang, Kevin Matzen, Matthew Sticha, David Fouhey. _Perspective Fields for Single Image Camera Calibration_. CVPR 2023 (Highlight -- 2.5% accept rate). [Project Page] We develop an effective representation for camera geometry that distributes the parameters throughout the image, resulting in robustness and lots of fun applications.

David F. Fouhey, Richard E.L. Higgins, Spiro K. Antiochos, Graham Barnes, Marc DeRosa, J. Todd Hoeksema, K.D. Leka, Yang Liu, Peter W. Schuck, Tamas I. Gombosi. _Large-Scale Spatial Cross-Calibration of Hinode/SOT-SP and SDO/HMI_. Accepted in The Astrophysical Journal Supplement Series. [Arxiv] We fix a more-than-decade-long issue with pointing and pixel scale in the spectropolarimeter onboard Hinode (which gets cold during eclipse season).
2022

Ahmad Darkhalil*, Dandan Shan*, Bin Zhu*, Jian Ma*, Amlan Kar, Richard E.L. Higgins, Sanja Fidler, David F. Fouhey, Dima Damen. _EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations_. NeurIPS Datasets and Benchmarks 2022. [Paper and Reviews] [Project Webpage] [Download] [Trailer] A new large-scale dataset of segmentations of people engaged in interaction with objects, including three new challenges and loads of data.

Chris Rockwell, Justin Johnson, David F. Fouhey. _The 8-Point Algorithm as an Inductive Bias for Relative Pose Prediction by ViTs_. 3DV 2022. [Project Page] [Paper] [Bibtex] Small tweaks let vision transformers imitate much of the 8-point algorithm, which facilitates learning to estimate full 6D relative camera pose, especially in few-sample settings.

Nilesh Kulkarni, Justin Johnson, David F. Fouhey. _What's Behind the Couch? Directed Ray Distance Functions for 3D Scene Reconstruction_. ECCV 2022. [Arxiv] [PDF] [Project Page] We can produce high-quality 3D reconstructions from a single RGB image via implicit functions by carefully analyzing what we expect networks to produce during training.

Samir Agarwala, Linyi Jin, Chris Rockwell, David F. Fouhey. _PlaneFormers: From Sparse View Planes to 3D Reconstruction_. ECCV 2022. [Arxiv] [PDF] [Project Page] [Bibtex] Transformers are really good at integrating evidence across multiple views and producing a planar reconstruction.

Shengyi Qian, Linyi Jin, Chris Rockwell, Siyi Chen, David F. Fouhey. _Understanding 3D Object Articulation in Internet Videos_. CVPR 2022. [Arxiv] [Arxiv PDF] [Bibtex] By training on both video data and 3D reconstructions in the right way, we can build models of articulations of 3D objects from ordinary video data.

Brian C. Weeks, Zhizhuo Zhou, Bruce K. O'Brien, Rachel Darling, Morgan Dean, Tiffany Dias, Gemmechu Hassena, Mingyu Zhang, and David F. Fouhey. _A deep neural network for high throughput measurement of functional traits on museum skeletal specimens_. Accepted in Methods in Ecology and Evolution. [Paper] Bird sizes correlate with temperature. We reduce the measurement time of museum specimens by ≈10x, leading to datasets at previously unexplored scales.

Richard E.L. Higgins, David F. Fouhey, Spiro K. Antiochos, Graham Barnes, Mark C.M. Cheung, J. Todd Hoeksema, K.D. Leka, Yang Liu, Peter W. Schuck, Tamas I. Gombosi. _SynthIA: A Synthetic Inversion Approximation for the Stokes Vector Fusing SDO and Hinode into a Virtual Observatory_. Accepted in The Astrophysical Journal Supplement Series. [Arxiv] [Open Access] [Video of SynthIA outputs from May 5 to June 24, 2016] Our system produces synthetic solar magnetograms that combine the best aspects of multiple instruments. This system formed the basis of a successful NASA Heliophysics Division Tools and Methods grant to integrate the system into SDO/HMI's Joint Science Operations Center.
2021

Dandan Shan*, Richard E.L. Higgins*, David F. Fouhey. _COHESIV: Contrastive Object and Hand Embedding Segmentation In Video_. NeurIPS 2021. [PDF] [Bibtex] By applying the Gestalt principle of common fate at scale, we can learn to segment hand-held objects with fairly minimal supervision.

Alexander Raistrick, Nilesh Kulkarni, David F. Fouhey. _Collision Replay: What Does Bumping Into Things Tell You About Scene Geometry?_ BMVC 2021 (Oral). [PDF] [Supplement (PDF)] [Supplement Video (MP4)] [Bibtex] Collisions with the world are usually seen as a nuisance. At scale, and with a random-walk-inspired formulation, they can be used to learn a depth sensor.

Linyi Jin, Shengyi Qian, Andrew Owens, David F. Fouhey. _Planar Surface Reconstruction from Sparse Views_. ICCV 2021 (Oral). [Arxiv] [PDF] [Project Page] We can learn to reconstruct scenes from a handful of views with an unknown relationship. Humans seem to do this fine, but it poses serious challenges for computers.

Chris Rockwell, David F. Fouhey, Justin C. Johnson. _PixelSynth: Generating a 3D-Consistent Experience from a Single Image_. ICCV 2021. [Arxiv] [PDF] [Project Page] [Bibtex] [CSE News Piece] PixelSynth fuses the complementary strengths of 3D reasoning and autoregressive modeling to create an immersive experience from a single image.

Zhizhuo Zhou, Gemmechu Hassena, Brian C. Weeks, David F. Fouhey. _Quantifying Bird Skeletons_. CV4Animals Workshop. [PDF] [Bibtex] We can measure bird skeleton specimens extraordinarily accurately and quickly with deep learning. This system can unlock datasets of birds at unprecedented scales.

Richard E.L. Higgins, David F. Fouhey, Dichang Zhang, Spiro K. Antiochos, Graham Barnes, J. Todd Hoeksema, K.D. Leka, Yang Liu, Peter W. Schuck, Tamas I. Gombosi. _Fast and Accurate Emulation of the SDO/HMI Stokes Inversion with Uncertainty Quantification_. The Astrophysical Journal (ApJ), Volume 911, Number 2, 2021. [Arxiv] [Published PDF] [Bibtex] [Project Page] [HMI Nugget] We can emulate the magnetogram production pipeline of SDO/HMI, a key NASA mission. This system lays the groundwork for SynthIA, which produces best-of-both-worlds magnetograms.
2020

S. Qian*, L. Jin*, D. F. Fouhey. _Associative3D: Volumetric Reconstruction from Sparse Views_. ECCV 2020. [Arxiv] [Project Page] [Code] [Bibtex] We can build a voxel-based reconstruction from two views, even without access to the relative camera positions.

C. Rockwell, D. F. Fouhey. _Full-Body Awareness from Partial Observations_. ECCV 2020. [Arxiv] [Project Page] [Bibtex] [Code] Human 3D pose estimation systems work poorly on people as they are usually depicted in video. A self-training method works well at fixing this problem.

S. Jabbour, D. F. Fouhey, E. Kazerooni, M. W. Sjoding, J. Wiens. _Deep Learning Applied to Chest X-Rays: Exploiting and Preventing Shortcuts_. MLHC 2020. [PDF] [Bibtex] [Code] Deep nets can easily exploit shortcuts (e.g., apparent bone density), but a simple transfer learning approach can help mitigate their use.

D. Shan, J. Geng*, M. Shu*, D. F. Fouhey. _Understanding Human Hands in Contact at Internet Scale_. CVPR 2020 (Oral). [PDF] [Bibtex] [Project Page & Code] We built a new dataset and model that enable really accurate recognition of basic hand information. Since hands are key to interaction, this basic information unlocks tons of useful new problems.

M. El Banani, J. Corso, D. F. Fouhey. _Novel Object Viewpoint Estimation through Reconstruction Alignment_. CVPR 2020. [PDF] [Supp.] [Bibtex] [Code and Project Page] We can learn to do relative pose estimation by aligning reconstructions.

N. Kulkarni, A. Gupta, D. F. Fouhey, S. Tulsiani. _Articulation-aware Canonical Surface Mapping_. CVPR 2020. [Arxiv] [PDF] [Bibtex] We can build canonical surface maps for objects that articulate, such as elephants and horses.
2019

A. Szenicer*, D. F. Fouhey*, A. Munoz-Jaramillo, P. J. Wright, R. Thomas, R. Galvez, M. Jin, M. C. M. Cheung. _A Deep Learning Virtual Instrument for Monitoring Extreme UV Solar Spectral Irradiance_. Science Advances, Vol. 5, Number 10, 2019. [Open Access] [Bibtex] [Prediction Video] [Activations Video] [Overview Video] We built a virtual version of the EVE MEGS-A instrument that can serve as a replacement after its electrical short.

Press coverage/releases:

D. Zhukov, J.-B. Alayrac, G. Cinbis, D. F. Fouhey, I. Laptev, J. Sivic. _Cross-task weakly-supervised learning from instructional videos_. CVPR 2019. [PDF] [Project Page] [Arxiv] [Bibtex] By accounting for the compositional nature of language, we can learn better models from instructional videos.

R. Galvez*, D. F. Fouhey*, M. Jin, A. Szenicer, A. Munoz-Jaramillo, M. C. M. Cheung, P. J. Wright, M. G. Bobra, Y. Liu, J. Mason, R. Thomas. _A Machine Learning Dataset Prepared From the NASA Solar Dynamics Observatory Mission_. The Astrophysical Journal Supplement, 242:1, 2019. [PDF] [Arxiv] [Bibtex] [Movie & Explanation] [Small dataset + demo] We produced a machine-learning-ready dataset that merges the three instruments aboard the NASA SDO mission.
Earlier

A. Kumar, S. Gupta, D. F. Fouhey, S. Levine, J. Malik. _Visual Memory for Robust Path Following_. NeurIPS 2018 (Oral). [Project Page] [PDF] [Bibtex]

D. F. Fouhey, W. Kuo, A. A. Efros, J. Malik. _From Lifestyle VLOGs to Everyday Interactions_. CVPR 2018. [Project Page] [Arxiv] [Bibtex]

S. Tulsiani, S. Gupta, D. F. Fouhey, A. A. Efros, J. Malik. _Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene_. CVPR 2018. [Project Page] [Arxiv] [Bibtex]

M. Lescroart, D. F. Fouhey, J. Malik. _Convolutional neural networks represent shape dimensions -- but not as accurately as humans_. Abstract at VSS 2018. [Abstract]

S. Gupta, D. F. Fouhey, S. Levine, J. Malik. _Unifying Map and Landmark Based Representations for Visual Navigation_. Arxiv 2017. [Project Page] [Arxiv] [Bibtex]

D. F. Fouhey, A. Gupta, A. Zisserman. _From Images to 3D Shape Attributes_. TPAMI (pre-print on Arxiv). The TPAMI version has ugly typesetting (full-width tables on the bottom?) that I was unable to change. **Read the Arxiv one.** [Arxiv] [Bibtex]

R. Girdhar, D. F. Fouhey, M. Rodriguez, A. Gupta. _Learning a Predictable and Generative Vector Representation for Objects_. ECCV 2016 (Spotlight). [Publication (PDF)] [Bibtex] [Project Page]

D. F. Fouhey. _Factoring Scenes into 3D Structure and Style_. Doctoral Dissertation. [Dissertation (PDF)] [Bibtex] [Defense Slides (PDF)]

D. F. Fouhey, A. Gupta, A. Zisserman. _3D Shape Attributes_. CVPR 2016 (Oral -- watch the presentation on YouTube). [Publication (PDF)] [Bibtex] [Project Page] [Poster (PDF)] [Talk (PPTX)] [Talk (PDF)]

R. Girdhar, D. F. Fouhey, K. M. Kitani, A. Gupta, M. Hebert. _Cutting through the Clutter: Task-Relevant Features for Image Matching_. WACV 2016. [Publication (PDF)] [Bibtex]

D. F. Fouhey, W. Hussain, A. Gupta, M. Hebert. _Single Image 3D Without a Single 3D Image_. ICCV 2015. [Publication (PDF)] [Bibtex] [Poster (PDF)] [Supplemental (PDF)] [Bonus Details (PDF)]

X. Wang, D. F. Fouhey, A. Gupta. _Designing Deep Networks for Surface Normal Estimation_. CVPR 2015. [Publication (PDF)] [Bibtex]

D. F. Fouhey, A. Gupta, and M. Hebert. _Unfolding an Indoor Origami World_. ECCV 2014 (Oral -- watch the presentation on VideoLectures.net). [Publication (PDF)] [Bibtex] [Project Page] [Extended Results (PDF)]

D. F. Fouhey and C. L. Zitnick. _Predicting Object Dynamics in Scenes_. CVPR 2014. [Publication (PDF)] [Bibtex] [Poster (PDF)] [Supplemental (PDF)]

D. F. Fouhey, V. Delaitre, A. Gupta, A. Efros, I. Laptev, and J. Sivic. _People Watching: Human Actions as a Cue for Single View Geometry_. IJCV (extended version of the ECCV 2012 paper). [Preprint (PDF)] [Final version (via Springer)]

D. F. Fouhey, A. Gupta, and M. Hebert. _Data-Driven 3D Primitives for Single Image Understanding_. ICCV 2013. [Publication (PDF)] [Bibtex] [Project Page] [Poster (PDF)]

D. F. Fouhey, V. Delaitre, A. Gupta, A. Efros, I. Laptev, and J. Sivic. _People Watching: Human Actions as a Cue for Single View Geometry_. ECCV 2012 (Oral -- watch the presentation on VideoLectures.net). [Publication (PDF)] [Bibtex] [Project Page]

V. Delaitre, D. F. Fouhey, I. Laptev, J. Sivic, A. Gupta, and A. Efros. _Scene Semantics from Long-term Observation of People_. ECCV 2012. [Publication (PDF)] [Bibtex] [Project Page]

D. F. Fouhey, A. Collet, M. Hebert, and S. Srinivasa. _Object Recognition Robust to Imperfect Depth Data_. CDC4CV 2012 Workshop at ECCV 2012. [Publication (PDF)] [Bibtex] [Supplemental (PDF)] [Supp. Video 1] [Supp. Video 2]

M. Costanza-Robinson, B. Estabrook, and D. F. Fouhey. _Representative elementary volume estimation for porosity, moisture saturation, and air-water interfacial areas in unsaturated porous media: Data quality implications_. Water Resources Research, Volume 47, 2011. (Sorry for not posting a pre-print!) [Official Version] [Bibtex]

D. F. Fouhey, D. Scharstein, and A. Briggs. _Multiple plane detection in image pairs using J-linkage_. ICPR 2010. [Publication (PDF)] [Bibtex] Implementation (Python and C): [Code (Zip)] [Poster (PDF)]
Miscellaneous
**About Joining My Group:** Due to the volume of email I receive and my limited time, I unfortunately cannot respond to inquiries about joining my group, and as a rule I do not. I hope the information below addresses questions you may have:
- If you are looking to become an NYU PhD student: I am not taking students in the Fall 2025 PhD application season (i.e., submitting an application in December 2025 and starting in September 2026). However, there are many other researchers in AI, ML, and computer vision at NYU, in both the CS and ECE departments.
Regardless of the year, there is no need to email me or reach out. I read all applications after the application deadline, considering the full pool, and will reach out directly as needed. Emailing me in advance genuinely does not improve your chances of admission and has never played a factor in whom I work with as a PhD student.
- If you are an NYU student looking for research: I am not taking on direct advisees at this point, including for directed study / advanced research projects. If positions open up, I will update this website or advertise. Please feel free to email any of my PhD students if you think there is a good research fit. However, they have their own work and may not have capacity.
- If you are a UM student: I am not taking new students at UM.
- Otherwise: at the moment, I am not generally looking for postdocs, visitors, or new collaborators, except through people with whom I already have an existing connection.
Writing (arranged in chronological order):
- My Ph.D. dissertation, Factoring Scenes into 3D Structure and Style (Dissertation and Defense Slides)
- A note on some practical considerations when evaluating surface normals
- My A.B. thesis, Multi-Model Estimation in the Presence of Outliers (Thesis, Poster, and Presentation)
Miscellaneous
- CMU RI Thesis Template (zip), based on a CMU RI tech report template from Daniel Morris.
Joke Papers
Sometimes when I feel a creative itch, the end result is a joke publication (which, despite the name, often has a serious point to make). These are all done with the one and only Daniel Maturana.
- Keras4Kindergartners.com: Check out our secret backup plan for if research doesn't work out.
- Deep Excel: Everybody knows that deep learning brings about synergy and so does Excel, so Daniel Maturana and I released ExcelNet, a break-through technology that merges the power of Deep Learning with Excel. See the spreadsheet, the whitepaper, the pitch slides, and the protips (actually quite helpful).
- Visual Rank Estimation: Visually Identifying Rank, with Daniel Maturana, proves that linear algebra can be replaced with machine learning. It also shows that if you are a CNN, the much-hated jet colormap is actually the best colormap. Winner of the ``Most Frighteningly Like Real Research'' award at SIGBOVIK 2016.
- Cat Basis Purrsuit: I've been told that this is the highlight of my research career and it can only go downhill from here: Cat Basis Pursuit
- Celebrity Learning: You may also know my award-winning work with Daniel Maturana on celebrity-themed learning, making money at home from Hilbert's Nullstellensatz, and more, from OneWeirdKernelTrick.com
- Kardashian Kernel: The original, rarely imitated, never duplicated. Originator of the alphabetically-related-work section: The Kardashian Kernel