Dates and venue

Dates: June 20–21, 2017 (before the midsummer break)

Location: KTH Royal Institute of Technology, Brinellvägen 64, Stockholm (see the Venue tab).

Programme

June 20

10:30 registration open / poster setup

11:30 lunch

13:00 Short introduction

13:05 Keynote 1: Cordelia Schmid, INRIA

13:55 Talk 1: Devdatt Dubhashi, Chalmers

14:20 Talk 2: Atsuto Maki, KTH

14:45 coffee + posters + industry demos

16:15 Keynote 2: Roberto Cipolla, Cambridge

17:05 Talk 3: Cristian Sminchisescu, LTH

17:30 quick explanation of logistics

(transport to dinner)

18:00 arrival at Teaterskeppet

18:30 departure for dinner at sea

June 21

09:00 Keynote 3: Chethan Ningaraju, NVIDIA

09:50 Talk 4: Carolina Wählby, UU

10:15 coffee

10:45 Keynote 4: Heiga Zen, Google

11:35 Talk 5: Fahad Khan, LiU

12:00 panel setup

12:05 panel discussion

12:20 closing words

12:30 lunch

Learning to communicate: solving complex tasks and inventing language

Speaker: Devdatt Dubhashi

Recently there has been a surge of interest in multi-agent systems implemented as neural networks that communicate with each other to solve a common task using reinforcement learning. The novel element is that, in addition to interacting with the environment, the agents can also exchange messages with one another; communication is thus used to enhance learning in difficult sparse-reward environments. A very interesting new line of work this has opened up shows that such goal-directed communication between agents leads to the emergence of a language that is grounded and compositional. We review this work, including some recent results from our own group.
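
As a minimal sketch of this setting (a hypothetical speaker/listener pair, not the exact models discussed in the talk), the agents below exchange a discrete one-hot "word", kept differentiable with a Gumbel-softmax so the whole communication channel can be trained end to end against a shared task reward:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerListener(nn.Module):
    """Illustrative two-agent setup: a speaker encodes its observation into
    a discrete message; a listener chooses an action from its own
    observation plus the received message. All sizes are made up."""
    def __init__(self, obs_dim=8, vocab_size=10, hidden=32, n_actions=4):
        super().__init__()
        self.speaker = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size))
        self.listener = nn.Sequential(
            nn.Linear(obs_dim + vocab_size, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, speaker_obs, listener_obs):
        logits = self.speaker(speaker_obs)
        # Gumbel-softmax: a differentiable sample of a one-hot "word"
        message = F.gumbel_softmax(logits, tau=1.0, hard=True)
        return self.listener(torch.cat([listener_obs, message], dim=-1))
```

Training such a pair with a shared reward is what drives the emergence of the grounded, compositional "vocabulary" the abstract refers to.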

Deep Learning Research at KTH/RPL: An Introduction

Speaker: Atsuto Maki

This talk will review research on deep convolutional networks (ConvNets) performed at the Robotics, Perception and Learning (RPL) lab at KTH, with a common concern for efficient network designs for different computer/robot vision tasks, including tasks involving transfer learning (TL), multi-task learning (MTL) and reinforcement learning (RL).

First we will consider the utility of global image descriptors given by ConvNets with respect to transfer learning; by optimising several factors for their transferability we see significant improvements across 17 standard visual recognition tasks. We then show that mid-level representations remain available while the networks are pruned to increase model efficiency. We also introduce an MTL model that speeds up the simultaneous handling of multiple tasks, with object detection and semantic segmentation as an example. Finally, we will visit a deep predictive policy training framework based on reinforcement learning, an architecture that allows direct mappings from uncalibrated image observations to trajectories of motor activations in skilled robot task learning. This is joint work with current and former members of the RPL lab.
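
As a hedged illustration of the first point (a generic baseline with assumed model and library choices, not the talk's specific networks or transferability factors), global descriptors from a pretrained ConvNet can be transferred to a new task by training a linear classifier on the frozen features:

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained ImageNet backbone used as a fixed global-descriptor extractor.
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()    # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def global_descriptors(images):
    """images: (N, 3, 224, 224) tensor, ImageNet-normalized."""
    return backbone(images).numpy()  # (N, 512) global descriptors

# Standard transfer-learning baseline: a linear model on frozen descriptors,
# e.g. clf = LogisticRegression(max_iter=1000).fit(global_descriptors(x), y)
```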

Matrix Backpropagation for Learning Deep Structured Models

Speaker: Cristian Sminchisescu

I will introduce methodologies for learning large-scale, deep structured models based on recently developed matrix backpropagation. This allows the integration of global computational layers for SVD or eigendecomposition, as well as matrix projector operations, within deep computational architectures. I will also sketch how the calculation of such layer variations, as required to efficiently compute derivatives in reverse accumulation mode, can be conveniently derived using closed-form expressions. The proposed layers have broad applicability; here we illustrate them in computer vision, for the end-to-end training of deep normalized-cuts and second-order pooling models (e.g. using log-tangent space metrics defined over the manifold of symmetric positive definite matrices), to jointly fine-tune the hierarchical feature extraction architecture and the structured layers. This is joint work with Catalin Ionescu and Orestis Vantzos (matrix backpropagation).
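
To make the "closed-form expressions" concrete, here is a small numpy sketch (an illustration in the spirit of the talk, not the paper's code) of the standard gradient through a symmetric eigendecomposition layer, assuming distinct eigenvalues:

```python
import numpy as np

def eig_backward(X, dL_dU, dL_dS):
    """Backprop through S, U = eigh(X) for symmetric X:
    dL/dX = U (K * (U^T dL/dU) + diag(dL/dS)) U^T,
    with K[i, j] = 1 / (S[j] - S[i]) off the diagonal and 0 on it."""
    S, U = np.linalg.eigh(X)
    diff = S[None, :] - S[:, None]      # diff[i, j] = S[j] - S[i]
    K = np.zeros_like(X)
    off = ~np.eye(len(S), dtype=bool)
    K[off] = 1.0 / diff[off]
    dX = U @ (K * (U.T @ dL_dU) + np.diag(dL_dS)) @ U.T
    return 0.5 * (dX + dX.T)            # project onto symmetric matrices

# Sanity check: for the loss L = sum(S**2), dL/dX should equal 2X.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); X = 0.5 * (A + A.T)
S, _ = np.linalg.eigh(X)
assert np.allclose(eig_backward(X, np.zeros_like(X), 2 * S), 2 * X)
```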

Deep learning in life science and microscopy; challenges and possibilities

Speaker: Carolina Wählby

Microscopy is one of the most important tools for understanding the mechanisms of life, and has been so for the past 350 years. Pharma companies search for new drugs and novel active compounds by imaging morphological responses in cells and model organisms; thanks to robotic sample handling and high-speed scanners, ever larger molecular parameter sweeps can be made, and image data is produced in ever-increasing amounts. Hospitals all over the world are replacing traditional light microscopes with digital scanners to make the assessment of pathology samples more flexible and available to remote expertise. Academic labs push the limits of resolution, develop new molecular detection methods, and design microscopes that can observe the processes of life as they take place inside living organisms.

Visual inspection of the resulting images only goes so far, and microscopy data has been a target application area for the development of digital image processing and analysis algorithms since the 1960s. In the current era of deep learning, microscopy data is again in focus, and the hopes are high. Microscopy data differs from natural images, and combinations of imaging and staining techniques make it possible to create large sets of training data in unique ways. I will focus my talk on how I believe the life science and deep learning communities can benefit from one another.

Boosting Visual Object Tracking Using Deep Features

Speaker: Fahad Khan

The talk will focus on how deep features can be used to achieve state-of-the-art results in visual object tracking. Visual object tracking is a challenging task in three respects: a) it needs to be performed in real time, b) the only available information about the object is an image region in the first frame, and c) the internal object model needs to be updated in each frame.

Carefully chosen deep features bring significant improvements in the accuracy and robustness of the object tracker, but straightforward frame-wise updates of the object model become prohibitively slow for real-time performance. Moreover, state-of-the-art results require an appropriate fusion of multi-level deep features. By introducing a compact representation of deep features together with smart fusion and updating mechanisms, real-time performance is achievable without jeopardizing tracking quality.
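
As a minimal, hypothetical illustration of such an updating mechanism (a generic scheme, not necessarily the tracker's exact one), a linear running average keeps the per-frame cost of the model update constant:

```python
import numpy as np

def update_model(model, frame_features, lr=0.015):
    """Exponential moving average of the internal object model: a cheap,
    O(model-size) update instead of re-fitting against all past frames.
    The learning rate trades adaptivity against drift."""
    return (1.0 - lr) * model + lr * frame_features
```

Sparser schedules, refitting the model only every few frames, push the update cost down further, and a compact feature representation keeps each such refit small.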

List of Posters

  1. Automatic Gleason Grading of H&E Stained Microscopic Prostate Images using Deep Convolutional Neural Networks, Lund University, Anna Gummeson, Ida Arvidsson, Mattias Ohlsson, Niels Christian Overgaard, Agnieszka Krzyzanowska, Anders Heyden, Anders Bjartell and Kalle Åström
  2. Attentional Masking in Pre-trained Deep Networks, Linköping University, Marcus Wallenberg and Per-Erik Forssén
  3. Detection of Humans in Thermal Images using Segmenting Deconvolutional Network, Swedish Defence Research Agency, Linköping, Erik Valldor and David Gustafsson
  4. Towards Context-Preserving Human to Robot Motion Mapping, KTH, Taras Kucherenko and Hedvig Kjellström
  5. Imitation Learning for Autonomous Driving, Chalmers, Volvo Cars, Zenuity, Christopher Innocenti, Henrik Lindén, Ghazaleh Panahandeh, Lennart Svensson and Nasser Mohammadiha
  6. Sensor Error Prediction and Anomaly Detection Using Neural Networks, HiQ, Chalmers, Zenuity, Alireza Tashvir, Jonas Sjöberg and Nasser Mohammadiha
  7. A Short Review of Deep Learning Applications for Autonomous Driving, Zenuity, Chalmers, Nasser Mohammadiha
  8. Residual Connections in Light-Weight Convolutional Neural Network Object Detectors, Chalmers, Volvo Cars, Zenuity, Lucia Diego Solano, Donal Scanlan, Ghazaleh Panahandeh and Nasser Mohammadiha
  9. Neural Ctrl-F: Segmentation-free Query-by-String Word Spotting in Handwritten Manuscript Collections, Uppsala University, Tomas Wilkinson, Jonas Lindström and Anders Brun
  10. Semi-automatic Training Data Generation for Cell Segmentation CNNs using an Intermediary Curator Net, Uppsala University, David Ramnerö, Sajith Kecheril Sadanandan and Petter Ranefall
  11. Semantic Labeling using Convolutional Networks coupled with Graph-Cuts for Document binarization, Uppsala University, Kalyan Ram Ayyalasomayajula and Anders Brun
  12. Automatic Segmentation of 3D Knee MRI using Fully Convolutional Network Cascade, KTH, Felicia Aldrin Bernhardt, Örjan Smedby and Chunliang Wang
  13. Predicting the OFDM Frame Error Probability Using Neural Networks, KTH, Ericsson Research, Vidit Saxena, Joakim Jaldén, Hugo Tullberg and Mats Bengtsson
  14. StemNet: A Temporally Trained Fully Convolutional Network for Segmentation of Muscular Stem Cells, KTH, Martin Isaksson and Joakim Jaldén
  15. A Multitask Deep Learning Model for Real-Time Deployment in Embedded Systems, Universitat Politècnica de Catalunya, KTH, Miquel Martí and Atsuto Maki
  16. Deep Predictive Policy Training using Reinforcement Learning, KTH, Ali Ghadirzadeh, Atsuto Maki, Danica Kragic and Mårten Björkman
  17. A Case Study of Filter Pruning on Effectiveness of Transferred Mid-Level CNN Representations, KTH, Yang Zhong and Atsuto Maki
  18. Evaluation of R-FCN Features for Object Segmentation, KTH, Hakan Karaoguz, John Folkesson and Patric Jensfelt
  19. Object Classification using Convolutional Neural Networks with 3D Intensity Voxel Matrices, Semcon, Zenuity, Axel Bender, Elias M. Thorsteinsson, Peter Nordin and Nasser Mohammadiha
  20. Human Pose Estimation using a Deep Convolutional Neural Network, Chalmers, Sven Abelsson Runing
  21. Predicting Radiative Transfers using a Deep Neural Network, KTH, Stockholm University, Adam Alpire, Ying Liu, Jim Dowling, Joy Monteiro and Rodrigo Caballero
  22. Deep Convolutional Neural Networks for Massive MIMO Fingerprint-Based Positioning, Lund University, Joao Vieira, Erik Leitinger, Muris Sarajlic, Xuhong Li and Fredrik Tufvesson