Chih-Ting Liu's Homepage

Biography

My name is Chih-Ting Liu (劉致廷 in Chinese). I am a Computer Vision Applied Scientist at Amazon Lab126 in Taipei, Taiwan. I work on enabling cutting-edge computer vision features on Amazon devices and Ring devices. In July 2022, I received my Ph.D. degree from National Taiwan University (NTU), where I worked in the Media IC and System Lab, advised by Prof. Shao-Yi Chien.

My research interests mainly lie in recognition systems for computer vision, including human recognition (re-identification), face recognition, and object tracking. To deploy AI on edge devices, I also focus on techniques for on-device training, such as federated learning, and on-device inference, such as network pruning. Here are my Google Scholar and CV.

Recent News

[Ph.D. Dissertation] My Ph.D. dissertation “Learning Efficient and Effective Person Re-identification in Multi-Camera Tracking System and Beyond” can be downloaded here.

[Aug. 2022] I started my new career as a computer vision applied scientist at Amazon Lab126 in Taipei!

[Jul. 2022] I passed my oral defense and received my Ph.D. degree from National Taiwan University (NTU)!

Industry Experience

Applied Scientist

Amazon

Aug 2022 – Present
Amazon Lab126, Taipei, Taiwan

Develop CV-triggered features for Amazon devices, including Alexa and Ring camera devices.

Research Intern

Microsoft

Mar 2021 – Sep 2021
AI R&D Center, Taipei, Taiwan

Develop an end-to-end federated learning face recognition framework that can jointly improve generic face representation and personalized user experience.

Research Intern

MediaTek

Mar 2020 – Jul 2020
Intelligent Vision Processing (IVP), Hsinchu, Taiwan

Develop a one-stage, real-time multi-object tracking system that integrates detection and tracking into one network.

Research Intern

MediaTek

Jun 2017 – Sep 2017
Video Coding Processing (VCP), Hsinchu, Taiwan

Focus on developing a deep-learning-based coding unit (CU) decision policy for the next-generation codec.

Publications