Master Thesis

Task-Specific Deep Networks for World Understanding


Information

  • If you are interested in the proposal, please contact the supervisors.

Description

Deep convolutional neural networks have drastically changed the computer vision community in the last few years, surpassing traditional computer vision algorithms on many different problems. We are reaching a point where simple tasks such as image classification can be performed with human-like accuracy by these deep networks. Additionally, we have started to see the advent of pre-trained deep networks such as the DeCAF network [1] or the OverFeat network [2], pre-trained on ImageNet [3]. These have proven to provide robust global image features that are now used in many different problems, such as holistic scene understanding [4]. However, there is a lack of readily available pre-trained models for more specific tasks, such as scene classification, that could be integrated into holistic frameworks.
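As a purely illustrative sketch (not part of the proposal code), the following Torch snippet shows how a pre-trained network, once converted to Torch's nn format, could be reused as a fixed global feature extractor. The model file name, layer index, and input size are placeholders.

    -- Minimal sketch: reusing a pre-trained convolutional network as a
    -- fixed feature extractor in Torch (all names are illustrative).
    require 'torch'
    require 'nn'

    -- Assume a pre-trained model has already been converted to Torch's
    -- nn format and saved to disk; the file name is a placeholder.
    local model = torch.load('pretrained_imagenet.t7')
    model:evaluate()  -- disable dropout at test time

    -- Forward a (pre-processed) 3x224x224 image through the network.
    local img = torch.randn(3, 224, 224)   -- stand-in for a real image
    model:forward(img)

    -- Use the activations of a late layer as a global image descriptor
    -- (the layer index depends on the architecture).
    local featureLayer = 20                 -- illustrative index
    local features = model:get(featureLayer).output:clone()
    print(features:size())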

Objectives

The objective of this thesis is to identify different tasks and their associated datasets that could be addressed with deep learning (e.g. scene classification), and then to train deep networks for these tasks. This would be done using Torch [5], the state-of-the-art toolbox from LeCun's group. As training code is not readily available, most of the work would concentrate on how to successfully train such networks using current state-of-the-art techniques such as dropout, Hessian-based learning rate estimation, etc.
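To give an idea of the kind of code involved, the sketch below defines a small task-specific classification network with dropout in Torch and runs one step of plain stochastic gradient descent. It is only an illustration under assumed placeholders (architecture, 64x64 input size, number of classes), not the proposal's actual training setup.

    -- Rough sketch: a small classification network with dropout,
    -- trained with one step of plain SGD (all sizes are placeholders).
    require 'torch'
    require 'nn'

    local nClasses = 10   -- e.g. number of scene categories (illustrative)

    local model = nn.Sequential()
    model:add(nn.SpatialConvolution(3, 16, 5, 5))   -- 3 channels -> 16 feature maps
    model:add(nn.ReLU())
    model:add(nn.SpatialMaxPooling(2, 2, 2, 2))
    model:add(nn.SpatialConvolution(16, 32, 5, 5))
    model:add(nn.ReLU())
    model:add(nn.SpatialMaxPooling(2, 2, 2, 2))
    model:add(nn.Reshape(32 * 13 * 13))             -- flatten (assumes 3x64x64 input)
    model:add(nn.Dropout(0.5))                      -- dropout regularisation
    model:add(nn.Linear(32 * 13 * 13, nClasses))
    model:add(nn.LogSoftMax())

    local criterion = nn.ClassNLLCriterion()
    local learningRate = 0.01

    -- One step of stochastic gradient descent on a single example.
    local function trainStep(input, target)
      model:zeroGradParameters()
      local output = model:forward(input)
      local loss = criterion:forward(output, target)
      local gradOutput = criterion:backward(output, target)
      model:backward(input, gradOutput)
      model:updateParameters(learningRate)
      return loss
    end

    -- Dummy data just to show the call pattern.
    local x = torch.randn(3, 64, 64)
    local y = torch.random(nClasses)
    print('loss:', trainStep(x, y))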


Requisites

As this task involves a large amount of programming to interface with the different datasets and models, it is important for the prospective student to have a strong programming background. Knowledge of the Lua programming language will be valued positively.


References

[1]: http://caffe.berkeleyvision.org/
[2]: http://cilvr.nyu.edu/doku.php?id=code:start
[3]: http://image-net.org/
[4]: http://ttic.uchicago.edu/~fidler/research.html
[5]: http://torch.ch/