Panos Achlioptas

Notable Updates

February 2025: Oumi.ai is the #1 trending repository on GitHub. Let's do this!

January 2025: We’re officially out of stealth! The press is taking notice: check out our coverage in VentureBeat, GeekWire, and The New Stack.

June 2023: I have two first-author papers appearing at CVPR-2023 in Vancouver: Affection and ShapeTalk. A big shout-out to all my co-authors who made these works a reality at the top tier of our research community.

April 2023: Our third workshop at the intersection of 3D scenes and natural language for common objects (L3DS) will be part of the ICCV-2023 workshop series.

December 2022: My first last-author paper is now a reality (ScanEnts3D). In this work, we exploit dense correspondences between 3D objects in scenes and referential language to improve the state of the art in two essential multi-modal tasks in 3D scenes. A big shout-out to my amazing intern Ahmed Abdelreheem for all his hard work.

November 2022: LADIS, a rigorous framework for improving shape-editing models of 3D objects via language, will appear in the Findings of EMNLP-2022.

October 2022: A big milestone in my "quest" to develop more emotionally aware, human-centric AI: Affection is now on arXiv.

April 2022: Dance2Music-GAN, a practical adversarial multi-modal framework that generates complex musical samples conditioned on dance videos, will appear at ECCV-2022.

March 2022: Our second workshop at the intersection of 3D scenes and natural language for common objects (L3DS), will be part of the ECCV-2022 workshop series. We are looking forward to a happy reunion and passionate, productive discussions.

March 2022: NeROIC, a novel object acquisition framework which exploits and extends radiance fields to capture high-quality 3D objects from online image collections, will appear at SIGGRAPH-2022.

March 2022: PartGlot, which opens the door to the automatic recognition and localization of shape parts via referential language alone, will appear with an oral presentation at CVPR-2022.

November 2021: Gave a talk on recent trends in Affective Deep Learning at Stanford's STATS 281: Statistical Analysis of Fine Art.

October 2021: I am excited to start my new role as a Research Scientist on the Creative Vision team of SNAP Research.

May 2021: The program of our CVPR-21 workshop on language and 3D scenes (L3DS) has been finalized! Among other things, we will host a benchmark challenge for ReferIt3D (here).

March 2021: ArtEmis keeps growing. It is now featured in Forbes Science.

March 2021: I successfully defended my Ph.D. Thesis titled "Learning to Generate and Differentiate 3D Objects Using Geometry & Language".

March 2021: I will give a lightning talk on "Art and AI" during HAI’s Intelligence Augmentation: AI Empowering People to Solve Global Challenges.

March 2021: Our work ArtEmis: Affective Language for Visual Art has been provisionally accepted as an oral presentation at CVPR-2021.

February 2021: Our recent arXiv report (ArtEmis) attracted some media attention: New Scientist, HAI, MarkTechPost, and KCBS Radio. Want to hear me talk about it? Check out the short interview below.

February 2021: I will co-organize the 1st Workshop on Language for 3D Scenes in CVPR 2021. We hope to spark new interest in this emerging area!

February 2021: I am initiating this "News" section. My intention is to give visitors the gist of my (primarily) professional updates.