Lecturers
Each Lecturer will hold two or three lessons on a specific topic. The Lecturers below are confirmed.
Topics
Neuroscience
Biography
Professor Karl J. Friston MB, BS, MA, MAE, MRCPsych, FMedSci, FRSB, FRS
Scientific Director: Wellcome Centre for Human Neuroimaging
Institute of Neurology, UCL
12 Queen Square
London WC1N 3AR, UK
Bio-sketch
Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. Mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference).
Additional information
Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization for Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award, and in 2006 he was elected a Fellow of the Royal Society. In 2008 he received a Medal from the Collège de France, and in 2011 an Honorary Doctorate from the University of York. He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology, and was elected a member of EMBO (excellence in the life sciences) in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and of the Glass Brain Award, a lifetime achievement award in the field of human brain mapping. He holds Honorary Doctorates from the University of Zurich and Radboud University.
https://www.fil.ion.ucl.ac.uk/~karl/
Prof. Friston’s Google Scholar
https://en.wikipedia.org/wiki/Karl_J._Friston
https://www.fil.ion.ucl.ac.uk/team/theoretical-neurobiology-team/
Lectures
Abstract: This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating – and associated neuronal processes – to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. They are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain – and the functional imperatives that may apply to neuronal networks.
Key words: active inference ∙ insight ∙ novelty ∙ curiosity ∙ model reduction ∙ free energy ∙ epistemic value ∙ structure learning
Abstract: This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world, thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha’ moments.
Key words: active inference ∙ insight ∙ novelty ∙ curiosity ∙ model reduction ∙ free energy ∙ epistemic value
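As background for the quantities named in the abstract – a standard formulation from the active inference literature, not specific to this lecture – the expected free energy of a policy \(\pi\) decomposes into an epistemic and a pragmatic term, so that minimising it drives both curiosity and preference-seeking:

```latex
G(\pi) \;=\;
\underbrace{-\,\mathbb{E}_{q(o \mid \pi)}\!\big[\, D_{\mathrm{KL}}\!\left[\, q(s \mid o, \pi) \,\|\, q(s \mid \pi) \,\right] \big]}_{\text{epistemic value (expected information gain)}}
\;\underbrace{-\,\mathbb{E}_{q(o \mid \pi)}\!\left[\, \ln p(o) \,\right]}_{\text{pragmatic value (expected log preference)}}
```

Policies with low \(G(\pi)\) are those expected to be informative about hidden states (resolving ignorance) while also yielding preferred outcomes (resolving uncertainty about the known world), which is the behaviour the simulations above exhibit.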
Topics
Quantitative Neuroscience, Mathematical Neuroscience, Computational Neuroscience, Neural Coding, Systems Neuroscience
Biography
I studied mathematics at Cambridge University, did a PhD in robotics at UCL, then moved to Rutgers University in the United States for postdoctoral work in neuroscience. Before returning to UCL in 2012, I was Associate Professor of Neuroscience at Rutgers and Professor of Neurotechnology at Imperial College London. I am currently Professor of Quantitative Neuroscience at the UCL Institute of Neurology and, together with Matteo Carandini, direct the Cortexlab.
Lectures
Topics
Computational Neuroscience
Biography
Rosalyn Moran is a Professor of Computational Neuroscience at the IoPPN, King’s College London, and the Deputy Director of King’s Institute for Artificial Intelligence. Her work spans engineering and cognitive and computational neuroscience. In her lab she uses the Free Energy Principle to develop new methods in artificial intelligence and in modelling the brain’s normative and pathological function. She has previously held faculty positions at Virginia Tech and the University of Bristol.
Lectures
The era of Generative AI is certainly upon us. In this talk, I will present a theory of cortical function, and beyond, known as the Free Energy Principle, which offers an alternative rationale and implementation for a Generative AI based on the brain. The Free Energy Principle has been proposed as an ‘all-purpose model’ of the brain and human behaviour that, crucially, closes the loop with action informing inference. As a formal and technical ‘first principles’ mathematical account of how brains work, it has garnered increasing attention from computer science to philosophy. The theory is based on the mathematical formulation of surprise minimisation: a brain can minimise its Free Energy (a computable bound on surprise) and thereby drive not only perception and cognition but, crucially, also action. As a framework, the Free Energy Principle and its corollary, ‘Active Inference’, thus represent a fundamental departure from current systems in Artificial Intelligence, as they call for the implementation of a top-down system rather than the bottom-up systems (driven by masses of training data) that are currently the state-of-the-art frameworks in AI research. In this talk I will demonstrate how we utilised the Free Energy Principle and Active Inference as an AI solution to simulated real-world problems.
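The ‘computable bound on surprise’ mentioned in the abstract can be made concrete with a minimal sketch (an illustrative discrete model with made-up numbers, not material from the talk): for any approximate posterior q(s) over hidden states, the variational free energy F = E_q[ln q(s) − ln p(o, s)] is at least the surprise −ln p(o), with equality exactly when q(s) is the true posterior p(s|o).

```python
import math

def free_energy(q, prior, likelihood, o):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)] for a
    discrete model: q is the approximate posterior over hidden states,
    prior is p(s), likelihood[s][o] is p(o|s)."""
    return sum(q[s] * (math.log(q[s]) - math.log(likelihood[s][o] * prior[s]))
               for s in range(len(q)) if q[s] > 0)

# Illustrative model: two hidden states, two outcomes.
prior = [0.5, 0.5]
likelihood = [[0.9, 0.1],   # p(o | s = 0)
              [0.2, 0.8]]   # p(o | s = 1)
o = 0  # observed outcome

# Surprise is -ln p(o), obtained from the marginal likelihood.
p_o = sum(likelihood[s][o] * prior[s] for s in range(len(prior)))
surprise = -math.log(p_o)

# Any approximate posterior gives F >= surprise ...
q = [0.7, 0.3]
F = free_energy(q, prior, likelihood, o)

# ... and the exact posterior attains the bound (F == surprise).
posterior = [likelihood[s][o] * prior[s] / p_o for s in range(len(prior))]
F_exact = free_energy(posterior, prior, likelihood, o)
```

Because F is computable without knowing p(o), minimising it over q (perception) and over actions that change o (active inference) implicitly minimises surprise, which is the core move the abstract describes.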
Topics
Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences
Biography
Panos Pardalos was born in Drosato (Mezilo) Argitheas in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD (Computer and Information Sciences) from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty of Biomedical Engineering and Computer Science & Information & Engineering departments.
Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Panos Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association for European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”
Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date – fundamental discoveries, new theories, insights that have had significant impact on their discipline.
Panos Pardalos is also a Member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited or authored over 200 books. He is one of the most cited authors in his field and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos
Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.
Lectures
Despite considerable progress in recent years, our understanding of the fundamental principles and mechanisms that govern complex brain function and cognition remains insufficient. Network neuroscience presents a novel perspective to tackle these persistent challenges by explicitly embracing an integrative approach to investigating the structure and function of the brain.
Topics
Neuroscience, Computational Neuroscience, Emotion, Memory, Vision
Biography
Edmund T. Rolls MA, DPhil, DSc, Hon DSc is at the Oxford Centre for Computational Neuroscience, Oxford, and at the Department of Computer Science, University of Warwick, UK, where he is a Professor in Computational Neuroscience, and is focussing on full-time research. Edmund Rolls is also a specially-appointed Professor at the Institute of Science and Technology for Brain-Inspired Intelligence at Fudan University, Shanghai.
Before this, Edmund Rolls was Professor of Experimental Psychology at The University of Oxford, and Fellow and Tutor in Psychology at Corpus Christi College, Oxford (1973-2008; Vice President of Corpus Christi College 2003-2006).
Edmund Rolls has published more than 670 papers and 16 books on neuroscience, with research on computational neuroscience, emotion, memory, vision, taste, olfaction and the auditory system, and their disorders. He is building a biologically plausible approach to understanding the computations performed by the primate brain, including the human brain, in order to understand human brain function and its disorders. His 16th book, Brain Computations and Connectivity, is on these topics and will be published Open Access by Oxford University Press in June 2023. It will be available at https://www.oxcns.org, where his papers are also available.
Edmund Rolls is the 18th most cited scientist in the UK, and the 150th most cited scientist in the world out of 6,880,389 in any field of science who have published more than 5 papers across every scientific field (i.e. in the top 0.002%) (composite indicator c, Ioannidis et al 2019 A standardized citation metrics author database annotated for scientific field. PLoS Biol 17(8): e3000384). Edmund Rolls is also the 20th most cited neuroscientist in the world, and 3rd in the UK (composite indicator c for Neurology and Neurosurgery, Ioannidis et al 2019).
https://www.oxcns.org
Lectures
Tutorial Speakers
(TBA)