New England Computer Vision Workshop

Dartmouth College, Hanover, NH

Friday, December 1, 2023



The New England Computer Vision Workshop (NECV) brings together researchers in computer vision and related areas for an informal exchange of ideas through a full day of presentations and posters. NECV attracts researchers from universities and industry research labs in New England. As in previous years, the workshop will focus on graduate student presentations.

Welcome to Dartmouth College!

- SouYoung Jin, Adithya Pediredla, and Yu-Wing Tai

Post-Workshop Message: Thank you for attending NECV 2023! We appreciate everyone's great efforts during the workshop.

Registration

Academic Researchers: Participation is free for all researchers at academic institutions. To secure your spot (lunch is guaranteed for early registrants), please register by the early registration deadline, Monday, November 20, 2023.

Register Here for Academic Researchers

Industry Participants: For our industry friends, a limited number of registrations are available for a fee. Please contact Samson Timoner (samson@ai.mit.edu) for registration details.


Submission

Please submit a one-page PDF abstract using the CVPR 2024 rebuttal template by email to necv2023dartmouth@gmail.com. Please include the title of your work and the list of authors in the abstract. Abstracts are due by Monday, November 20 (extended from Sunday, November 19). Oral decisions will be released by November 22.

You may present work that has already been published, or work that is in progress. All relevant submissions will be granted a poster presentation, and selected submissions from each institution will be granted 12-minute oral presentations. Post-docs and faculty may submit for poster presentations, but oral presentations are reserved for graduate students.

There will be no publications resulting from the workshop, so presentations will not be considered "prior peer-reviewed work" according to any definition we are aware of. Thus, work presented at NECV can be subsequently submitted to other venues without citation.

The workshop is after the CVPR submission deadline, so come and show off your new work in a friendly environment. It's also just before the NeurIPS conference, so feel free to come and practice your presentation.


For Presenters

Oral Presentation. Each presentation is allocated a 12-minute slot, with an additional 3 minutes for questions and speaker changeover. We kindly request that all oral presenters bring their own laptops. The presentation equipment supports both HDMI and USB-C for screen sharing. Please arrive at least 5 minutes before the scheduled oral session to test your machine and ensure compatibility with the provided equipment. As at regular conferences, we have also allocated poster boards for oral presenters; please find your poster ID on this website.

Poster Presentation. The poster session will be held in the ECSC building, adjacent to the oral session venue. Your assigned poster ID can be found on this website; please locate the corresponding poster board to display your poster. Easels and foam core boards will be provided for mounting posters, accommodating sizes up to 36×48 inches. The foam core boards are not attached, so you may choose either landscape or portrait orientation. You are welcome to use any format within that size limit.

Poster Competition

We are pleased to announce a generous donation from NVIDIA: the best poster at the workshop will be awarded a GPU. Further details regarding the award will be confirmed and communicated to the recipient at a later date. We appreciate NVIDIA's contribution to our workshop and look forward to an exciting competition for the best poster. This award is open to graduate student presenters only.

Poster Competition (Best Poster Presentation) Results


Logistics

Schedule

Time Topic Location
08:30-09:30 Registration, Poster Setup & Breakfast Cummings 1F
09:30-09:45 Welcome & Opening Cummings 100
09:45-10:45 Oral Session I Cummings 100
  • [16] ISRF: Interactive Segmentation of Radiance Fields.
    Rahul Goel*, Dhawal Sirikonda* (Dartmouth College), Saurabh Saini, P J Narayanan
  • [03] Improving the perception of fiducial markers in the field using Adaptive Active Exposure Control.
    Ziang Ren (Dartmouth College), Samuel Lensgraf, Alberto Quattrini Li
  • [12] Curvature Fields from Shading Fields.
    Xinran Han (Harvard University), Todd Zickler
  • [17] NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation.
    Vikas Thamizharasan (UMass Amherst), Difan Liu, Matt Fisher, Cherry (N.X.) Zhao, Evangelos Kalogerakis, Mike Lukáč
10:45-10:55 Break
10:55-11:55 Oral Session II Cummings 100
  • [34] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
    Fengyu Yang* (Yale University), Chao Feng*, Ziyang Chen*, Hyoungseob Park, Daniel Wang, Yiming Dou, Ziyao Zeng, Xien Chen, Rit Gangopadhyay, Andrew Owens, Alex Wong
  • [33] DISCount: Counting in Large Image Collections with Detector-Based Importance Sampling
    Gustavo Perez (UMass Amherst), Subhransu Maji, Daniel Sheldon
  • [15] GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
    Yiqing Liang (Brown University), Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
  • [11] AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes
    Rao Fu (Brown University), Zehao Wen, Zichen Liu, Srinath Sridhar
12:00-13:00 Lunch ECSC 116
13:00-14:30 Poster Session ECSC 1F
  • [01] Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation
    William Shen*, Ge Yang* (MIT), Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola
  • [02] Language-Driven Appearance and Physics Editing via Feature Splatting
    Rizhao Qiu*, Ge Yang* (MIT), Weijia Zeng, Xiaolong Wang
  • [05] Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction
    Sarah Friday (Dartmouth College), Yunzi Shi, Yaswanth Kumar Cherivirala, Vishwanath Saragadam, Adithya Pediredla
  • [06] The GAN is dead; long live the GAN!
    Nick Huang (Brown University), Aaron Gokaslan, James Tompkin
  • [07] Underwater Camera Calibration: N-Sphere Camera Model and Extensions
    Monika Roznere (Dartmouth College), Adithya K. Pediredla, Samuel E. Lensgraf, Yogesh Girdhar, and Alberto Quattrini Li
  • [08] On Human-like Biases in Convolutional Neural Networks for the Perception of Slant from Texture
    Yuanhao Wang, Qian Zhang (Brown University), Celine Aubuchon, Jovan Kemp, Fulvio Domini, James Tompkin
  • [09] Toward Physically-based 360° Intrinsic Decomposition from RGBD Images
    Qian Zhang (Brown University), James Tompkin
  • [13] Direct Superpoints Matching for Robust Point Cloud Registration
    Aniket Gupta (Northeastern University), Yiming Xie, Hanumant Singh, Huaizu Jiang
  • [14] FT2TF: First-Person Statement Text-To-Talking Face Generation
    Xingjian Diao (Dartmouth College), Ming Cheng, Wayner Barrios, SouYoung Jin
  • [18] OmniControl: Control Any Joint at Any Time for Human Motion Generation
    Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, Huaizu Jiang (Northeastern University)
  • [19] PlatoNeRF: 3D Reconstruction in Plato’s Cave via Single-View Two-Bounce Lidar
    Tzofi Klinghoffer (MIT), Xiaoyu Xiang*, Siddharth Somasundaram*, Yuchen Fan, Christian Richardt, Ramesh Raskar, Rakesh Ranjan
  • [20] Preserving Tumor Volumes for Unsupervised Medical Image Registration
    Qihua Dong (Northeastern University), Hao Du, Ying Song, Yan Xu, Jing Liao
  • [21] SPASM: Small PArallax Structure from Motion
    Fabien Delattre (UMass Amherst), David Dirnfeld, Zhipeng Tang, Pedro Miraldo, Erik Learned-Miller
  • [22] SurfsUp: learning fluid simulation for novel surfaces
    Arjun Mani*, Ishaan Chandratreya* (MIT), Elliot Creager, Carl Vondrick, Richard Zemel
  • [23] Toward Perceptually-guided Environment-adaptive AR Visualization
    Hojung (Ashley) Kwon (Brown University), Yuanbo Li, Xiaohan (Chloe) Ye, Praccho Muna-McQuay, Liuren Yin, James Tompkin
  • [25] ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
    Hongyu Li (Brown University), Snehal Dikhale, Soshi Iba, Nawid Jamali
  • [26] Vision Beyond Borders: Transforming Single-View inputs into Multi-View vision
    Mingyuan Zhang (Northeastern University), Chang Liu, Yue Bai, Yun Fu
  • [27] TASK2BOX: Box Embeddings for Modeling Asymmetric Task Relationships
    Rangel Daroya (UMass Amherst), Aaron Sun, Subhransu Maji
  • [28] Self-supervised Learning using Hypercube Embeddings
    Deep Chakraborty (UMass Amherst), Erik Learned-Miller
  • [29] Rewrite the Stars
    Xu Ma (Northeastern University), Xiyang Dai, Yue Bai, Yizhou Wang, Yun Fu
  • [30] Latent Graph Inference with Limited Supervision
    Jianglin Lu (Northeastern University), Yi Xu, Huan Wang, Yue Bai, Yun Fu
  • [32] FLARE: Film Language and Audiovisual Representation Engine for Movie Audio Description
    Wayner Barrios (Dartmouth College), Henry Scheible, SouYoung Jin
  • [35] A Unified Framework for Domain Adaptive Object Detection
    Justin Kay (MIT), Timm Haucke, Suzanne Stathatos, Siqi Deng, Erik Young, Pietro Perona, Sara Beery, Grant Van Horn
  • [36] Improved Zero-Shot Classification by Adapting VLMs with Text Descriptions
    Oindrila Saha, Grant Van Horn, Subhransu Maji (UMass Amherst)
  • [37] Is CLIP Fooled by Optical Illusions?
    Jerry Ngo (MIT), Swami Sankaranarayanan, Phillip Isola
  • [38] Frame Flexible Network
    Yitian Zhang (Northeastern University), Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
  • [39] See Beyond Vision: Layout Trajectory Sequence Prediction From Noisy Mobile Modality
    Haichao Zhang (Northeastern University)
14:30-14:50 Coffee Break Cummings 1F
14:50-15:50 Oral Session III Cummings 100
  • [10] 3D Reconstruction of Occluded and Specular Objects using Multi-Bounce Lidar
    Siddharth Somasundaram (MIT), Connor Henley, Akshat Dave, Joseph Hollmann, Ashok Veeraraghavan, Ramesh Raskar
  • [24] Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction
    Yi Xu (Northeastern University), Armin Bazarjani, Hyung-gun Chi, Chiho Choi, Yun Fu
  • [31] Inferring the Future by Imagining the Past
    Kartik Chandra* (MIT), Tony Chen* (MIT), Tzu-Mao Li, Jonathan Ragan-Kelley, Joshua Tenenbaum
  • [04] Multi-Irreducible Spectral Synchronization for Robust Rotation Averaging
    Owen Howell (Northeastern University), Haoen Huang and David Rosen
15:50-16:00 Closing Remarks Cummings 100

Venue

Our workshop will be held at Dartmouth College. For a detailed campus map and directions, please refer to our Campus Map.

Cummings

ECSC

Getting here:

WiFi

Connect to the "Dartmouth Public" network and accept the terms. Alternatively, if your home institution participates in eduroam, you can use eduroam to connect wirelessly while on campus.


Sponsorship

We are grateful to Dartmouth College for providing the venue. We also thank Dartmouth CS, the Neukom Institute, the Dartmouth Arts & Sciences Office of the Science Division, and NVIDIA for their sponsorship of this event. We also appreciate Bill Freeman for his support.


Organizing Committee

Workshop Chairs: SouYoung Jin, Adithya Pediredla, Yu-Wing Tai

Corporate Relations Chair: Samson Timoner

Logistics Chair: Julia D. Ganson

Photographer: Katherine M. Lenhart

Student Volunteers: Dhawal Sirikonda, Wenjun Liu, Wayner Barrios, Xingjian Diao, Sarah Friday, Joseph DiPalma


Acknowledgements

Thank you to Susan Cable and Andrila Hait Chakrabarti for helping us arrange NECV 2023. Thank you also to the steering committee: Phillip Isola, Pulkit Agrawal, James Tompkin, Benjamin Kimia, Todd Zickler, Yun Raymond Fu, Octavia Camps, Kate Saenko, Margrit Betke, Stan Sclaroff, Erik Learned-Miller, and Subhransu Maji.

We thank our competition judges: Sara Beery, James Tompkin, Todd Zickler, Grant Van Horn, Yu-Wing Tai.

Past Years