From Images to Graphs: Modeling Invariances with Deep Learning

15:50 - 16:30, 9th of May (Thursday) 2019 / DeepTech
for Conference Passes+ only

In recent years, Deep Learning has become the dominant paradigm for learning representations of images and sequential data. This 'revolution' started with AlexNet's remarkable results in the ImageNet competition and has continued with more modern architectures such as ResNet. Similarly, Recurrent Neural Networks are often used to represent language. These two types of architectures rely on different inductive biases that encode weight symmetries, either on a grid (images) or on a chain (language), and more recently on arbitrary graphs. The key insight, however, is the same: capture suitable invariances in the design of the architecture through what are known as inductive biases.

In this talk, I will go over several domains (images, language, multimodal data, graphs) and present approaches that have proven successful in each of them, owing to the inductive biases mentioned above.
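As a rough illustration of the weight symmetries mentioned in the abstract (this sketch is not part of the talk material and assumes PyTorch), the snippet below expresses the same weight-sharing idea three times: on a grid (convolution), on a chain (recurrence), and on an arbitrary graph (message passing). All layer sizes and the SimpleGraphLayer class are chosen purely for illustration.

# Illustrative sketch: three inductive biases, one shared-weights idea.
import torch
import torch.nn as nn

# Grid (images): a convolution applies one small filter at every spatial
# location, encoding translation symmetry.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
image = torch.randn(1, 3, 32, 32)           # a batch of one 32x32 RGB image
print(conv(image).shape)                    # torch.Size([1, 8, 32, 32])

# Chain (language): an RNN applies the same transition weights at every
# time step of the sequence.
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
tokens = torch.randn(1, 10, 16)             # one sequence of 10 token embeddings
output, hidden = rnn(tokens)
print(output.shape)                         # torch.Size([1, 10, 32])

# Graph: a minimal message-passing layer applies the same weight matrix to
# every node and averages over its neighbours, so the result does not depend
# on how the nodes are ordered.
class SimpleGraphLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_features, adjacency):
        # Normalise so each node averages the messages from its neighbours.
        degree = adjacency.sum(dim=-1, keepdim=True).clamp(min=1)
        messages = (adjacency / degree) @ node_features
        return torch.relu(self.linear(messages))

gnn = SimpleGraphLayer(in_dim=16, out_dim=32)
nodes = torch.randn(5, 16)                  # 5 nodes with 16-dimensional features
adj = (torch.rand(5, 5) > 0.5).float()      # a random adjacency matrix
print(gnn(nodes, adj).shape)                # torch.Size([5, 32])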

TOPICS:
DeepTech ML/DL NeuralNetworks Tech