I am a second-year CS Ph.D. student at Princeton University working with Professors Adam Finkelstein and Felix Heide. My research spans graphics, vision, and HCI, with a focus on AI for content creation and computational photography. I am interested in methods that combine mathematical models of image processing with models of user experience to enable new applications.

Previously, I completed my undergraduate studies at Cornell University, majoring in Computer Science and minoring in Psychology. I was fortunate to be advised by Professor Abe Davis and spent two wonderful years with the Cornell Vision & Graphics Group, where I became good friends with the Lab Cat.

Publications

Chromaticity Gradient Mapping for Interactive Control of Color Contrast in Images and Video

Ruyu Yan, Jiatian Sun, Abe Davis

UIST, 2024

project website / paper

We present a novel, perceptually motivated interactive tool that uses color contrast to enhance details represented in the lightness channel of images and video.

Neural Spline Fields for Burst Image Fusion and Layer Separation

Ilya Chugunov, David Shustin, Ruyu Yan, Chenyang Lei, Felix Heide

CVPR, 2024

project website / paper / code

We propose neural spline fields (NSFs) as a compact flow model that maps input coordinates to spline control points, producing temporally consistent flow estimates that align with conventional optical flow references.

Ray Conditioning: Trading Photo-realism for Photo-Consistency in Multi-view Image Generation

Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, Abe Davis

ICCV, 2023

project website / paper / code

We propose ray conditioning, a lightweight and geometry-free technique for multi-view image generation. It enables photo-realistic multi-view image editing on natural photos via GAN inversion.

ReCapture: AR-Guided Time-lapse Photography

Ruyu Yan, Jiatian Sun, Longxiulin Deng, Abe Davis

UIST, 2022

project website / paper

We present ReCapture, a system that leverages AR-based guidance to help users capture time-lapse data with hand-held mobile devices. ReCapture works by repeatedly guiding users back to the precise location of previously captured images so they can record time-lapse videos one frame at a time without leaving their camera in the scene.