A Researcher Live Series: 5th - 18th April 2023

AI in Neuroscience


AI is taking the world by storm: from open-source tools like GPT-2 and image generators to medicine and the biosciences. Neuroscience is no exception, as you may have seen from our AI in Neuroscience report, which you can find here.

Now we are taking the discussion live. Join our specially selected collection of interdisciplinary events, where we explore the fascinating ways in which AI intersects with neuroscience to open new opportunities for research and innovation.





Domain-general cognition in brains and artificial neural networks

with Jascha Achterberg, PhD Student at University of Cambridge

Wednesday, 5 April

4 pm BST / 3 pm GMT

Predicting neurological disease severity using machine learning and brain connectomics 

with Dr Ceren Tozlu, Weill Cornell Medicine

Thursday, 6 April

3 pm BST / 2 pm GMT

Self-supervised learning: A new lens on animal visual development

with Dr Shahab Bakhtiari, University of Montreal

Tuesday, 18 April

5 pm BST / 4 pm GMT


Hear from industry experts

Join our speakers for discussions and Q&A about their latest research and discoveries.

Domain-general cognition in brains and artificial neural networks

with Jascha Achterberg, PhD Student at University of Cambridge


Humans’ capacity for flexible, domain-general cognition has long been of interest in neuroscience. We are extremely capable problem-solvers, able to adapt quickly to new environments and challenges. Over recent decades, neuroscience has made progress in understanding domain-general cognition in humans, highlighting a multimodal core brain network used to understand complex problems and generalise our skills across scenarios. Very recently, we have also seen astonishing progress in computer science, with artificial neural network models that increasingly develop multimodal cognitive abilities. This opens the possibility of studying domain-general cognition synergistically in both brains and artificial neural networks, to uncover the computational mechanisms supporting abstract computation in biological and artificial systems alike.

In this presentation, we will review the basics of domain-general cognition in the brain, with a specific focus on how its systems-level architecture links to the neuronal codes facilitating abstract cognition. We will then compare this to current approaches to multimodal cognition in artificial neural networks. Ultimately, we will outline a path for studying biological and artificial systems side by side, which will allow us to understand not only how domain-general cognition works but also how systems can achieve these computational capabilities while using minimal resources.

About the speaker

Jascha Achterberg is a PhD student at the University of Cambridge, studying the connection between biological and artificial intelligence with the goal of identifying the core principles underlying domain-general cognition and multimodal computation in neural networks, be they biological or artificial. The aim is not only to understand the principles of cognition but also to learn if and how these may inform innovations in hardware (neuromorphic computing chips) and software (network algorithms). He pursues these goals by using large-scale electrophysiological, neuroimaging, and behavioural data, recorded in humans and non-human primates, to work out which features underlie highly functional brains and then translate them into neuroscience-inspired artificial neural networks. Collaborating with partners across academia and industry (Intel Labs, Google DeepMind), he is passionate about large-scale open-source collaborations bridging neuroscience and artificial intelligence (NeuroAI). In this effort Jascha is supported by the Bill & Melinda Gates Foundation through a Gates Cambridge Scholarship.

Key Takeaways:

  1. Domain-general cognition refers to the processing that happens after information from any single domain has been passed on, allowing for abstract problem solving.
  2. The brain has a multiple-demand (MD) system, which is active whenever you solve a complex problem.
  3. The MD system is in a unique anatomical position in the brain, allowing it to communicate with and integrate information from all over the brain.
  4. The MD system is a part of a network of brain regions activated whenever you solve a complex problem, regardless of domain.
  5. The MD system allows for domain-general abstract cognition, meaning humans can handle complex tasks in an open way.
  6. AI is gradually approaching domain-general cognition, with challenges like Minecraft agents requiring abstract problem-solving.
  7. The MD system requires a complex problem-solving algorithm to function, which is still being studied in both neuroscience and AI.
  8. Computational models have been used to study cognition and cellular interactions since the 1950s, with increasing complexity over the years.
  9. Developments in artificial neural networks have significantly informed model building in neuroscience, particularly for visual tasks.

Predicting neurological disease severity using machine learning and brain connectomics 

with Dr Ceren Tozlu, Weill Cornell Medicine


Lesion type, size, and location are highly heterogeneous among people with multiple sclerosis (MS), making the prediction of disability very challenging. Brain connectivity analysis provides a promising tool with which to map the effect of MS-related pathologies onto physical and cognitive impairment. Advanced imaging techniques such as diffusion and functional MRI are commonly used to quantify structural connectivity (SC) and functional connectivity (FC) networks; however, they are expensive and time-consuming.

Our study showed that SC and FC networks can be estimated using deep learning and lesion masks extracted from conventional MRI, and can successfully predict disability in people with MS. In this talk, I will first present how we use deep learning to estimate functional connectivity from estimated structural connectivity. Second, I will introduce the machine learning approach we use to predict motor and cognitive impairment from the brain’s structural and functional connectivity networks. Our work demonstrates that lesion masks, coupled with deep learning, can estimate SC and FC networks and can be a viable alternative to collecting advanced MRI, bringing the connectome one step closer to the clinic. An alternative to advanced MRI techniques is also important because conventional MRI is available for the far larger number of MS patients who regularly visit MS clinics.
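To make the prediction step concrete, here is a minimal, purely synthetic sketch of the general idea of predicting a clinical score from connectivity features. This is not Dr Tozlu's pipeline or data: the cohort size, region count, ridge penalty, and random features are all assumptions for illustration, and the model is a plain ridge regression rather than the methods used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a cohort: 50 "patients", each described by the
# flattened upper triangle of an 86-region connectivity matrix.
n_subjects, n_regions = 50, 86
n_features = n_regions * (n_regions - 1) // 2   # 3655 edge weights
X = rng.normal(size=(n_subjects, n_features))

# Simulated disability score driven by a small subset of edges plus noise.
w_true = rng.normal(size=n_features) * (rng.random(n_features) < 0.01)
y = X @ w_true + 0.1 * rng.normal(size=n_subjects)

# Ridge regression in its dual (kernel) form, which is cheap when there are
# far more features than subjects:  w = X^T (X X^T + lam*I)^{-1} y
lam = 10.0
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n_subjects), y)
w = X.T @ alpha
pred = X @ w

r = np.corrcoef(y, pred)[0, 1]
print(f"in-sample correlation: {r:.2f}")
```

In practice the fit would be evaluated with cross-validation rather than in-sample correlation; the dual form is used here only because connectome edge counts typically dwarf cohort sizes.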

About the speaker

Dr. Tozlu received her Master’s and Ph.D. degrees in the Department of Biostatistics, Bioinformatics, Biomathematics, and Health at the University of Claude Bernard, Lyon 1 in France. During her Master’s and Ph.D. work, she focused on prediction models using multi-modal neuroimaging in neurological disorders including multiple sclerosis and stroke. In 2018, Ceren started to work as a postdoc in the Computational Connectomics laboratory directed by Dr. Amy Kuceyeski in the Department of Radiology at Weill Cornell Medicine. Her postdoctoral research focuses on the application of statistical approaches and machine learning to better predict disability and cognition in neurological disorders using brain connectomics. During her postdoctoral research, Ceren received a pilot grant from the Cornell University MRI Facility in 2020, a three-year postdoctoral fellowship from the National MS Society in 2021, and a Career Transition Award from the National MS Society in 2023. 

Self-supervised learning: A new lens on animal visual development

with Dr Shahab Bakhtiari, University of Montreal


In recent years, significant advances in artificial intelligence have been made possible by a combination of supervised and self-supervised learning. While supervised learning has been effective in many domains, it is limited by the expense and scarcity of labelled data. Self-supervised learning has emerged as a powerful alternative, allowing models to learn from unlabelled data without explicit supervision. This approach has been at the forefront of recent advances in areas such as large language models, robotics, and vision.

Just as self-supervised learning can take advantage of the ocean of data around us to learn about our world, animal brains also learn about the world during development from the large stream of input through different sensory modalities without much explicit teaching or supervisory information. By examining the core principles of self-supervised learning and exploring how they are being applied in the context of animal vision development, we can gain a deeper understanding of brain development, and how we can leverage this understanding to develop more effective AI systems. In this talk, I will review how self-supervised learning is being applied to model and understand the visual system in animal brains, and how it can provide insights into how animals learn to see the world around them. We will also discuss important next steps that need to be taken to continue advancing this field.
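One common flavour of self-supervised learning is contrastive: an embedding is rewarded when two augmented views of the same input land closer to each other than to any other item in the batch. The sketch below computes a generic NT-Xent/InfoNCE-style loss in NumPy; it is an illustration of the principle, not any specific model from the talk, and the batch size, embedding dimension, and temperature are arbitrary choices.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """NT-Xent / InfoNCE loss over a batch of paired views.

    z1, z2: (batch, dim) embeddings of two augmentations of the same inputs.
    Each row of z1 is the "positive" for the matching row of z2; every other
    row in the combined batch acts as a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2])                 # (2B, dim)
    sim = z @ z.T / tau                          # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)               # never match an item with itself
    b = len(z1)
    pos = np.concatenate([np.arange(b, 2 * b), np.arange(b)])  # index of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * b), pos].mean()

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
loss_matched = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # near-identical views
loss_random = info_nce(z, rng.normal(size=(8, 16)))              # unrelated "views"
print(loss_matched, loss_random)
```

Matched views yield a much lower loss than unrelated ones, which is exactly the training signal: no labels are needed, only the assumption that two views of the same input should be represented similarly.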

About the speaker

Shahab Bakhtiari is an Assistant Professor at the University of Montreal, whose research focuses on the intersection of neuroscience and artificial intelligence, also known as NeuroAI. His work aims to understand how visual perception and learning occur in both biological brains and artificial neural networks. He uses deep learning as a computational framework to model learning and perception in the brain, and leverages our understanding of the nervous system to create better biologically inspired artificial intelligence. Shahab received his undergraduate and graduate degrees in Electrical Engineering from the University of Tehran, and went on to earn his PhD in Neuroscience from McGill University. He completed his postdoctoral fellowship at Mila - Quebec AI Institute, where he developed his expertise in AI and its applications to neuroscience.
