Eduard Trulls
Computer Vision Lab @ EPFL
Address: EPFL-IC-CVLab, BC 368, Station 14, CH-1015 Lausanne (Switzerland)
E-mail: eduard dot trulls at epfl dot ch / Tel: +41 21 693 76 21

I am currently a post-doc at the Computer Vision Lab (CVLab) at EPFL in Lausanne, Switzerland, working on deep learning under the supervision of Prof. Pascal Fua. I obtained my PhD from the Institute of Robotics in Barcelona, Spain, co-advised by Francesc Moreno and Alberto Sanfeliu. My thesis explored novel strategies to enhance local, low-level features (e.g. SIFT, HOG) with global, mid-level information such as motion and segmentation cues. My work has been published in the top computer vision conferences. Before my PhD I worked in mobile robotics.

Links: GitHub / LinkedIn / Google Scholar / Videos

News


Publications

Note: the top three computer vision conferences (CVPR/ICCV/ECCV) are highly competitive, with acceptance rates of roughly 20-30%. You may also be interested in the Google Scholar Metrics.

Fracking Deep Convolutional Image Descriptors
E. Simo-Serra(*), E. Trulls(*), L. Ferraz, I. Kokkinos and F. Moreno-Noguer ((*) denotes equal contribution)
arXiv:1412.6537v2 (arXiv report, 2015)
bibref

We propose a novel framework for learning local image descriptors in a discriminative manner, using a siamese architecture of deep convolutional neural networks. We show how to mine the exponentially large pool of corresponding/non-corresponding pairs, and demonstrate large improvements over the state of the art.
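
A minimal sketch of the core idea, in plain NumPy: pairs of descriptors are scored with a hinge-style loss on their L2 distance, and only the hardest fraction of a batch is kept for back-propagation. The function names, the margin and the keep ratio are my own illustrative choices, not the paper's.

# Sketch only: pairwise hinge loss plus "hard" pair mining.
import numpy as np

def pair_loss(d1, d2, is_match, margin=4.0):
    """Hinge-style loss on the L2 distance between two descriptors."""
    dist = np.linalg.norm(d1 - d2)
    return dist if is_match else max(0.0, margin - dist)

def mine_hard_pairs(desc_a, desc_b, labels, keep_ratio=0.25):
    """Keep only the highest-loss (hardest) fraction of a batch of pairs."""
    losses = np.array([pair_loss(a, b, m)
                       for a, b, m in zip(desc_a, desc_b, labels)])
    k = max(1, int(len(losses) * keep_ratio))
    hardest = np.argsort(losses)[::-1][:k]   # indices of the hardest pairs
    return hardest, losses[hardest]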

Segmentation-aware Deformable Part Models
E. Trulls, S. Tsogkas, I. Kokkinos, A. Sanfeliu, F. Moreno-Noguer
Conference on Computer Vision and Pattern Recognition (CVPR), 2014
code: soon / poster / spotlight (video) / bibref

We combine bottom-up segmentation (SLIC superpixels) with deformable part models (DPMs). We use the superpixels to build soft segmentation masks at every scale and position, and use these masks to "clean up" the HOG features, splitting them into foreground and background channels.
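
A rough sketch of how such a soft mask could be built and used, assuming SLIC labels and per-superpixel mean colours are already available; the affinity measure and the function names are my own simplifications, not the paper's implementation.

# Sketch only: soft foreground mask from superpixel colour affinity,
# then a foreground/background split of a dense feature map.
import numpy as np

def soft_mask(sp_labels, sp_colors, window, sigma=0.1):
    """sp_labels: (H, W) superpixel id per pixel (e.g. from SLIC);
    sp_colors: (num_superpixels, 3) mean colour per superpixel;
    window: (y0, y1, x0, x1) region of interest."""
    y0, y1, x0, x1 = window
    labels = sp_labels[y0:y1, x0:x1]
    center = labels[labels.shape[0] // 2, labels.shape[1] // 2]
    # Affinity of every superpixel to the one under the window centre.
    dist = np.linalg.norm(sp_colors - sp_colors[center], axis=1)
    affinity = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return affinity[labels]                        # (h, w), values in [0, 1]

def split_features(feat, mask):
    """Split a (h, w, C) feature map into foreground/background channels."""
    return feat * mask[..., None], feat * (1.0 - mask[..., None])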

Dense segmentation-aware descriptors
E. Trulls, I. Kokkinos, A. Sanfeliu, F. Moreno-Noguer
Conference on Computer Vision and Pattern Recognition (CVPR), 2013
code / poster / spotlight (PDF) / bibref / site

We exploit segmentation data to construct appearance descriptors that can deal with occlusions and background motion. We use the segmentation to build soft masks and downplay measurements likely to belong to a different region than the point being described. We integrate this with SIFT, and also with SID, a dense descriptor invariant by design to rotation and scaling.
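
A small illustration of the weighting step, under my own simplifying assumptions (a single SIFT-like cell, gradients already computed, mask values in [0, 1]); it is not the released code linked above.

# Sketch only: downweight per-pixel measurements by a soft mask before pooling.
import numpy as np

def masked_orientation_histogram(grad_mag, grad_ori, mask, n_bins=8):
    """grad_mag, grad_ori, mask: (h, w) arrays for one descriptor cell;
    grad_ori in [0, 2*pi); mask is high where the pixel likely belongs
    to the same region as the point being described."""
    bins = np.floor(grad_ori / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), (grad_mag * mask).ravel())
    return hist / (np.linalg.norm(hist) + 1e-8)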

Spatiotemporal descriptor for wide-baseline stereo reconstruction of non-rigid and ambiguous scenes
E. Trulls, A. Sanfeliu, F. Moreno-Noguer
European Conference on Computer Vision (ECCV), 2012
poster / spotlight (video) / bibref / site

We use temporal consistency to match appearance descriptors, and apply it to stereo reconstruction of very ambiguous video sequences. Previous works define descriptors over spatiotemporal volumes, which is not applicable to wide-baseline scenarios; instead, we extend 2D descriptors with optical flow estimates to capture how the appearance around a feature point changes over time.
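
A minimal sketch of the propagation idea, assuming precomputed optical flow fields and a generic 2D descriptor function passed in by the caller; this illustrates the concept rather than the paper's implementation.

# Sketch only: follow the flow across frames and stack per-frame 2D descriptors.
import numpy as np

def spatiotemporal_descriptor(frames, flows, x, y, describe_2d):
    """frames: list of images; flows[t]: (H, W, 2) flow from frame t to t+1;
    describe_2d(image, x, y): any 2D descriptor (e.g. SIFT) at a point."""
    descs = [describe_2d(frames[0], x, y)]
    for t, flow in enumerate(flows):
        dx, dy = flow[int(round(y)), int(round(x))]
        x, y = x + dx, y + dy                      # propagate the point in time
        descs.append(describe_2d(frames[t + 1], x, y))
    return np.concatenate(descs)                   # one descriptor over time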

Autonomous navigation for mobile service robots in urban pedestrian environments
E. Trulls, A. Corominas Murtra, J. Pérez-Ibarz, G. Ferrer, D. Vasquez, J. M. Mirats Tur, A. Sanfeliu
Journal of Field Robotics, 2011
bibref / site

An extension of our 2010 IROS paper. We switch from 2D to 3D data for localization, and present experiments in a new urban area: a street open to the general public in the city of Barcelona, Spain.

Efficient use of 3D environment models for mobile robot simulation and localization
A. Corominas Murtra, E. Trulls, J. M. Mirats Tur, A. Sanfeliu
International Conference on Simulation, Modelling, and Programming for Autonomous Robots (SIMPAR), 2010. Also in Simulation, Modelling, and Programming for Autonomous Robots, Lecture Notes in Computer Science, 2010.

This paper describes in detail a set of algorithms to efficiently manipulate 3D models in order to compute physical constraints and range observation models, which are used for real-time robot localization.
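
As a rough illustration of what a range observation model does, here is a much-simplified sketch that marches a sensor ray through a voxelised occupancy grid; the paper works with full 3D models and far more efficient data structures, so treat this only as the concept.

# Sketch only: expected range reading for a sensor ray against a 3D occupancy grid.
import numpy as np

def expected_range(occupancy, origin, direction, voxel_size=0.1, max_range=30.0):
    """occupancy: (X, Y, Z) boolean grid; origin, direction: 3-vectors in metres."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    for r in np.arange(0.0, max_range, voxel_size * 0.5):
        p = origin + r * direction
        idx = tuple((p / voxel_size).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, occupancy.shape)):
            break                                  # ray left the model
        if occupancy[idx]:
            return r                               # hit: expected sensor reading
    return max_range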

Autonomous navigation for urban service mobile robots
A. Corominas Murtra, E. Trulls, O. Sandoval, J. Perez, D. Vasquez, J. M. Mirats Tur, M. Ferrer, A. Sanfeliu
International Conference on Intelligent Robots and Systems (IROS), 2010

We present a solution for fully autonomous navigation in urban pedestrian environments, designed for highly mobile robots based on Segway platforms.

3D Mapping for Urban Service Robots
R. Valencia-Carreño, E. Teniente, E. Trulls, J. Andrade-Cetto
International Conference on Intelligent Robots and Systems (IROS), 2009

We present an approach to build 3D maps using 3D range data as the main input, based on the probabilistic alignment of point clouds within a SLAM framework.
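
For readers unfamiliar with point-cloud alignment, here is a compact ICP-style sketch (nearest-neighbour matching plus a closed-form SVD pose update); it is a generic stand-in for illustration, not the probabilistic alignment actually used in the paper.

# Sketch only: rigid alignment of two point clouds in the spirit of ICP.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Align src (N, 3) to dst (M, 3); returns rotation R (3, 3) and translation t (3,)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        cur = src @ R.T + t
        nn = tree.query(cur)[1]                    # closest dst point for each src point
        p, q = cur - cur.mean(0), dst[nn] - dst[nn].mean(0)
        U, _, Vt = np.linalg.svd(p.T @ q)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                    # Kabsch update, reflection-safe
        t_step = dst[nn].mean(0) - cur.mean(0) @ R_step.T
        R, t = R_step @ R, R_step @ t + t_step
    return R, t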

Combination of Distributed Camera Network and Laser-based 3D Mapping for Urban Service Robotics
J. Andrade-Cetto, A. Ortega, E. Teniente, E. Trulls, R. Valencia, A. Sanfeliu
Workshop on Network Robot Systems, International Conference on Intelligent Robots and Systems (IROS), 2009

An overview of the URUS project.

Collaborators

Sites

For information, supplemental material and videos:

Projects

Some of the projects I have been part of: