Keynotes

Oct. 1, 2024, 6:30 a.m.

Synthesia is one of Europe's newest billion-euro startups. Its core technology is script-to-video: realistic AI avatars delivering compelling presentations to the virtual camera. Used by more than 50,000 companies worldwide, including 400 of the Fortune 500, it is computer vision technology that operates in the real world.

Lourdes Agapito and Vittorio Ferrari will talk about the development of this technology from computer vision research papers to real-world product, and about the current and future directions of their research.


Vittorio Ferrari

Vittorio Ferrari is the Director of Science at Synthesia, where he leads R&D groups developing cutting-edge generative AI technology. Previously he built and led multiple research groups on computer vision and machine learning at Google (Principal Scientist), the University of Edinburgh (Full Professor), and ETH Zurich (Assistant Professor). He has co-authored over 160 scientific papers and won the best paper award at the European Conference on Computer Vision in 2012 for his work on large-scale segmentation. He received the prestigious ERC Starting Grant, also in 2012. He led the creation of Open Images, one of the most widely adopted computer vision datasets worldwide. While at Google his groups contributed technology to several major products, with launches on, e.g., the Pixel phone, Google Photos, and Google Lens. He was a Program Chair for ECCV 2018 and a General Chair for ECCV 2020. He is an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, and formerly of the International Journal of Computer Vision. His recent research interests are in 3D deep learning and vision+language models.

Lourdes Agapito

Lourdes Agapito is Professor of 3D Vision in the Department of Computer Science at University College London (UCL) and a co-founder of Synthesia. Her research in computer vision has consistently focused on inferring 3D information from single images or videos acquired with a moving camera. She received her BSc, MSc, and PhD degrees from the Universidad Complutense de Madrid (Spain). In 1997 she joined the Robotics Research Group at the University of Oxford as an EU Marie Curie Postdoctoral Fellow. In 2001 she was appointed Lecturer at Queen Mary University of London, where she held an ERC Starting Grant to focus on theoretical and practical aspects of deformable 3D reconstruction from monocular sequences. In 2013 she joined the Department of Computer Science at UCL, and was promoted to full professor in 2015. Lourdes has served as Program Chair for CVPR 2016 and ICCV 2023, serves regularly as Area Chair for the top computer vision conferences (CVPR, ICCV, ECCV), and was a keynote speaker at ICRA 2017 and ICLR 2021. In 2017 she co-founded Synthesia, a recent generative AI unicorn and the world's largest AI video generation platform, which allows users to create professional videos directly in the browser, removing the physical constraints of conventional production.

Oct. 2, 2024, 6:30 a.m.

AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, making hiring decisions, and determining criminal sentences. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and more explainable.

To tackle this issue the EU recently passed the Artificial Intelligence Act, the world's first comprehensive framework to regulate AI. The new law includes several provisions that require bias testing and monitoring as well as transparency tools. But is Europe ready for this task?

In this session I will examine several EU legal frameworks and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with legal requirements in Europe.

I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy. These solutions have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM, and have fed into public policy recommendations and legal frameworks around the world.
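To make concrete what such a bias test looks like, here is a minimal sketch of a demographic parity check, one of the simplest tests of the kind discussed above. It is a generic illustration rather than any of the speaker's specific methods; the data, function name, and threshold convention are assumptions made for the example.

```python
# A generic demographic parity check: compare the rate of positive
# decisions across demographic groups. Illustrative sketch only, not
# the speaker's method; all data here are made up.
import numpy as np

def demographic_parity_ratio(decisions, groups):
    """Ratio of positive-decision rates between the least- and
    most-favoured groups; 1.0 means perfect parity."""
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Toy loan decisions: 1 = approved, 0 = denied.
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(f"parity ratio: {demographic_parity_ratio(decisions, groups):.2f}")
# Prints 0.25; US practice (the "four-fifths rule") typically flags
# ratios below 0.8 as evidence of adverse impact.
```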


Sandra Wachter

Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford where she researches the legal and ethical implications of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.

At the OII, Professor Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates the legal, ethical, and technical aspects of emerging technologies.

Professor Wachter is also an affiliate or member of numerous institutions, such as the Berkman Klein Center for Internet & Society at Harvard University, the IEEE, the World Bank, and UNESCO, and she serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.

Oct. 3, 2024, 6:30 a.m.

Distribution shift describes the phenomenon where an AI model's performance at deployment differs from its performance during training. On the one hand, some claim that distribution shifts are ubiquitous in real-world deployments. On the other hand, modern approaches (e.g., foundation models) are often claimed to be robust to distribution shifts by design. Similarly, phenomena such as “accuracy on the line” promise that standard training produces distribution-shift-robust models. When are these claims valid, and do modern models fail due to distribution shifts? If so, what can be done about it? This talk will outline modern principles and practices for understanding the role of distribution shifts in AI, discuss how the problem has changed, and present recent methods for engaging with distribution shifts, with comprehensive and practical insights. Highlights include a taxonomy of shifts, the role of foundation models, and finetuning. The talk will also briefly discuss how distribution shifts might interact with AI policy and governance.
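The core phenomenon is easy to reproduce. The following minimal sketch (an illustrative assumption, not material from the talk) trains a logistic regression classifier on synthetic data and shows its accuracy degrading under a simple covariate shift; the data-generating process, model, and shift size are all chosen for brevity.

```python
# Minimal distribution-shift demo: train on one distribution, then
# evaluate both in-distribution and on a shifted deployment distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two 2-D Gaussian classes; `shift` translates both class means,
    simulating a covariate shift at deployment time."""
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))  # class 0
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))  # class 1
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

X_train, y_train = sample(1000)
clf = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = sample(1000)             # same distribution as training
X_ood, y_ood = sample(1000, shift=1.5)  # shifted deployment distribution
print("in-distribution accuracy:", clf.score(X_iid, y_iid))  # ~0.92
print("shifted accuracy:        ", clf.score(X_ood, y_ood))  # ~0.62
```

Training performance says nothing about the second number; quantifying and closing that gap is the subject of the talk.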


Sanmi Koyejo

Sanmi (Oluwasanmi) Koyejo is an Assistant Professor in the Department of Computer Science at Stanford University and an Adjunct Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Koyejo leads Stanford Trustworthy AI Research (STAIR), working to develop the principles and practice of trustworthy machine learning, with a focus on applications to neuroscience and healthcare. Koyejo completed a Ph.D. at the University of Texas at Austin and postdoctoral research at Stanford University. Koyejo has received several awards, including a best paper award from the Conference on Uncertainty in Artificial Intelligence, a Skip Ellis Early Career Award, a Sloan Fellowship, a Terman faculty fellowship, an NSF CAREER award, a Kavli Fellowship, an IJCAI early career spotlight, and a trainee award from the Organization for Human Brain Mapping. Koyejo spends time at Google as part of the Brain team, serves on the Neural Information Processing Systems Foundation Board and the Association for Health Learning and Inference Board, and is president of the Black in AI organization.