27 November 2025 – 16:30 – 18:30 (UTC+1) (Zoom)
Prof. Bernie Hogan (Oxford Internet Institute, University of Oxford): “The Autoencoder’s Dilemma: Semantic Superposition and Learning from Machine Learning”
Abstract:
Generative AI systems use inference to provide responses to users. Whether these technologies operate in the text, audio, or image space, the answers they provide are synthesized from the existing model and offer a ‘best guess’ to the user. Yet public framing of AI has shown considerable dissatisfaction with systems that confuse dates and facts and make untoward assumptions. But what is an appropriate assumption? If I ask an image generator for an American President, how likely am I to get an image of a woman or a person of colour? What would be the right proportion?
References:
Key contextual readings:
Dretske, F. (1981) Knowledge and the Flow of Information. MIT Press. (Chapters 1-3).
Elhage, N., et al. (2022) Toy Models of Superposition. https://transformer-circuits.pub/2022/toy_model/index.html
Goh, G. (2016) Decoding the Thought Vector. https://gabgoh.github.io/ThoughtVectors/
Nagel, T. (1974) What Is It Like to Be a Bat? Philosophical Review, 83(4): 435–450. https://doi.org/10.2307/2183914
Registration link: https://univ-amu-fr.zoom.us/meeting/register/K9gsy-XyRTe14QWyA2qnyg