Zeynep Akata

Professor of Computer Science

I am looking for PhD students and post-docs (fully funded by the university or ERC). 

For PhD candidates: please follow the application link here. The deadline is November 6th!

For post-doc candidates: please e-mail me.

Short Bio

Zeynep Akata is a professor of Computer Science within the Cluster of Excellence Machine Learning at the University of Tübingen. After completing her PhD at INRIA Rhône-Alpes with Prof Cordelia Schmid (2014), she worked as a post-doctoral researcher at the MPI for Informatics with Prof Bernt Schiele (2014-17) and at UC Berkeley with Prof Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof Max Welling (2017-19). She received the Lise-Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014, a young scientist honour from the Werner-von-Siemens-Ring Foundation in 2019, and an ERC-2019 Starting Grant from the European Commission. Her research interests include multimodal learning and explainable AI.

Experience

  • Professor of Computer Science, University of Tübingen, 2019 – Now
  • Senior Researcher at Max Planck Institute for Informatics, 2017 – Now
  • Assistant Professor at University of Amsterdam, 2017 – 2019
  • Visiting Researcher at UC Berkeley, 2016 – 2017
  • Postdoctoral Researcher at Max Planck Institute for Informatics, 2014 – 2017

Education

  • PhD: University of Grenoble, France, 2014
  • MSc: RWTH Aachen, Germany, 2010
  • BSc: Trakya University, Turkey, 2008

External Activities

  • Area Chair for Conferences:
    • WACV 2016, IJCAI 2018, ECCV 2018, BMVC 2019, CVPR 2019, ICCV 2019, CVPR 2020, ICML 2020, ECCV 2020
  • Reviewer for Conferences:
    • ICCV 2015-2017, ECCV 2016, ACCV 2016, CVPR 2015-2018, EMNLP 2018
    • NIPS 2016-2018, AISTATS 2017, AAAI 2018, ICLR 2018, ICML 2018-2019
  • Associate Editor for Journals:
    • Pattern Recognition Journal (2018 – 2020)
  • Reviewer for Journals:
    • TPAMI 2015-2017, IJCV 2015-2017
  • Tutorial on Embeddings and Metric Learning, GCPR 2016, Organization Committee
  • Gender Diversity in STEM fields, Springboard training program, 2015, Organization Committee
  • Tutorial on Zero-Shot Learning, CVPR 2017, Organization Committee
  • CD-Make 2018, Cross Domain Conference on Machine Learning and Knowledge Extraction, Organization Committee
  • WiCV@ECCV 2018, Women in Computer Vision Workshop, Organization Committee

Awards and Funding

  • Lise-Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014
  • Explainable Artificial Intelligence (XAI) grant from DARPA in 2017
  • ERC Starting Grant from the European Research Council in 2019
  • Young Investigator honour from the Werner-von-Siemens-Ring Foundation in 2019


Research

Clearly explaining the rationale behind a classification decision to an end user can be as important as the decision itself. As decision makers, humans can justify their decisions with natural language and point to the evidence in the visual world that led to them. In contrast, artificially intelligent systems are frequently seen as opaque and are unable to explain their decisions. This is particularly concerning because such systems ultimately fail to build trust with human users.

Explanations are valuable because they enable users to anticipate and adapt to situations that are about to arise, while helping them maintain a stable environment and keep it under control. In the medical domain, explanations can help patients identify and monitor the abnormal behaviour of their ailment. In the domain of self-driving vehicles, they can warn the user of a critical state and collaborate with her to prevent a wrong decision. In the domain of satellite imagery, an explanatory monitoring system that justifies the evidence for an approaching hurricane can save millions of lives. Hence, a learning machine that a user can trust and easily operate needs to be equipped with the ability to explain itself.

While deep neural networks have led to impressive successes, e.g. they can now reliably identify 1,000 object classes, describe their interactions in natural language, and answer questions about their attributes through interactive dialogues, integrated interpretability is still in its early stages. In other words, we do not know why these deep learning based visual classification systems work when they are accurate, nor why they fail when they make mistakes. Enabling such transparency requires the interplay of different modalities such as images and text, whereas current deep networks are designed as a combination of separate tools, each optimising a different learning objective, connected by extremely weak and uninterpretable communication channels. At the same time, deep neural networks draw their power from their ability to process large amounts of data end to end, through a feedback loop of forward and backward processing. Although interventions on this feedback loop have been implemented, e.g. by removing neurons or back-propagating gradients, a generalizable, multi-purpose form of interpretability is still far from reach.
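
As a concrete illustration of the kind of intervention mentioned above, the sketch below back-propagates the top class score of an off-the-shelf classifier to the input pixels to obtain a simple saliency heatmap. It is a minimal, hypothetical example in PyTorch; the ResNet-18 backbone and the preprocessing are placeholders, not the models studied in this project.

    import torch
    import torchvision.models as models

    # Placeholder backbone; any differentiable image classifier would do.
    model = models.resnet18(pretrained=True).eval()

    def saliency_map(image):
        """Gradient magnitude of the top class score w.r.t. the input pixels."""
        x = image.unsqueeze(0).clone().requires_grad_(True)   # (1, 3, H, W), preprocessed image
        scores = model(x)                                      # (1, 1000) class scores
        top_class = scores.argmax(dim=1).item()
        scores[0, top_class].backward()                        # back-propagate the decision
        # Collapse the colour channels into a single (H, W) heatmap.
        return x.grad.abs().max(dim=1).values.squeeze(0)

Such a heatmap only points at the evidence; the direction pursued here goes further and also asks the model to justify its decision, e.g. in natural language.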

Apart from the lack of an integrated interpretability module, deep neural networks require a large amount of labeled training data to reach reliable conclusions. In particular, they need to be trained for every possible situation using labeled data. For instance, the system needs to observe the driver's behavior at a red light to learn to stop at red lights in sunny and rainy weather, in daylight and at night, in fog and in snow, and so on. This causes a significant overhead in labelling every possible situation. Hence, our aim is to build an explainable machine learning system that can learn the meaning of "red light" and use this knowledge to identify many other related situations: although a red light may look different in darkness than in daylight, the most important aspect of the situation is recognising that the vehicle needs to stop. In other words, we would like to transfer the explainable behaviour of a decision maker to novel situations.
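
The "red light" example corresponds to the zero-shot learning setting: knowledge acquired on labelled (seen) situations is transferred to unseen ones through a shared semantic description, e.g. attributes. The sketch below shows a schematic compatibility-based classifier of this kind; the matrix W, the feature and attribute dimensions, and the way W would be learned are illustrative assumptions rather than the project's actual model.

    import torch

    img_dim, attr_dim = 2048, 85             # illustrative sizes
    W = torch.randn(img_dim, attr_dim)       # compatibility matrix, learned on seen classes

    def zero_shot_predict(image_feat, class_attributes):
        """Score every class, seen or unseen, through its attribute vector.

        image_feat:        (img_dim,) image representation
        class_attributes:  (num_classes, attr_dim) per-class semantic description
        """
        # Bilinear compatibility F(x, y) = x^T W phi(y), argmax over classes.
        scores = class_attributes @ (W.T @ image_feat)
        return scores.argmax().item()

Because unseen classes only enter through their attribute vectors, the same scoring rule covers situations that were never labelled during training.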

In summary, we propose an end-to-end trainable decision maker that operates in the sparse data regime and includes an integrated interpretability module. Our main research direction towards such a system is twofold: learning representations with weak supervision, and generating multimodal explanations of classification decisions.
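
The second direction, multimodal explanations, can be pictured as a single network that both classifies and talks about its decision: the image feature and the predicted class distribution condition a recurrent decoder that generates a textual justification. The sketch below is schematic; all module names and sizes are illustrative assumptions, not the architecture developed in this project.

    import torch
    import torch.nn as nn

    class ExplainableClassifier(nn.Module):
        def __init__(self, feat_dim=2048, num_classes=200, vocab=10000, hid=512):
            super().__init__()
            self.classifier = nn.Linear(feat_dim, num_classes)      # the decision
            self.cond = nn.Linear(feat_dim + num_classes, hid)      # decision-aware context
            self.embed = nn.Embedding(vocab, hid)
            self.decoder = nn.LSTM(hid, hid, batch_first=True)      # explanation generator
            self.word_out = nn.Linear(hid, vocab)

        def forward(self, feat, explanation_tokens):
            logits = self.classifier(feat)                          # (B, num_classes)
            ctx = torch.cat([feat, logits.softmax(-1)], dim=-1)     # condition on the decision
            h0 = torch.tanh(self.cond(ctx)).unsqueeze(0)            # (1, B, hid) initial state
            emb = self.embed(explanation_tokens)                    # (B, T, hid) word embeddings
            out, _ = self.decoder(emb, (h0, torch.zeros_like(h0)))
            return logits, self.word_out(out)                       # class scores + word scores

Training such a model would jointly minimise a classification loss on the logits and a cross-entropy loss on the generated words, so that the explanation is tied to the particular prediction being made.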

 

Teaching

  • Leren (Introduction to Machine Learning), 2017 (BSc AI, Y2, P2)
  • Leren (Introduction to Machine Learning), 2018 (BSc AI, Y2, P2)
  • Philosophy of AI (Guest Lecture), March 2018
  • Computer Vision 2 (Guest Lecture), May 2018
  • Back2Basic 2018, Annual Computer Science Event, Amsterdam, Invited Speaker
  • DSSV 2018, Conference on Data Science, Statistics and Visualization, Invited speaker
  • BMVC 2018, British Machine Vision Conference, Invited Speaker

Students

  • 2019-Now Lennart van Goten @ KTH Stockholm (with Prof. Kevin Smith) 
  • 2018-Now Marco Federici @ University of Amsterdam (with Dr. Nate Kushman, Dr. Patrick Forre)
  • 2017-Now Stephan Alaniz @ University of Tübingen and Max Planck Institute for Informatics
  • 2016-Now Yongqin Xian @ Max Planck Institute for Informatics (with Prof. Bernt Schiele)
  • 2017-2019 Victor Garcia @ University of Amsterdam (with Prof. Max Welling)
  • 2017-2019 Sadaf Gulshad @ University of Amsterdam (with Prof. Arnold Smeulders)
  • 2018-2019 Maartje ter Hoeve @ University of Amsterdam (with Prof. Maarten de Rijke)
  • 2018-2019 Rodolfo Corona @ UC Berkeley (with Prof. Trevor Darrell)