Cristina Savin

Since November 2013, I have been an IST fellow at the Institute of Science and Technology Austria (IST Austria), based in the lab of Gasper Tkacik. Next June I will be moving to NYU as an Assistant Professor, jointly at the Center for Neural Science (CNS) and the Center for Data Science (CDS).

Previously, I obtained a Ph.D. in Computational Neuroscience at the Frankfurt Institute for Advanced Studies (in the lab of Jochen Triesch). As a postdoc, I worked in the Computational and Biological Learning Lab at the University of Cambridge (with Mate Lengyel) and, briefly, in the Group for Neural Theory at ENS, Paris (with Sophie Deneve).

In broad terms, my research focuses on learning and memory at the level of neural circuits in the brain. I use a combination of theoretical modelling, computer simulations, and data analysis to study how different plasticity mechanisms subserve these functions. On the theory side, I construct probabilistic models that describe biologically relevant computations and then use techniques borrowed from machine learning to work out how neural circuits could approximate the optimal solution for these tasks. On the data analysis side, I build statistical models describing the joint activity of experimentally recorded neurons, then use information-theoretic measures to assess how this activity is shaped by learning.

The results of this work follow three major themes:

  • The approach allows us to functionally link circuit properties across multiple scales: from single synapses, to the integration of signals in the dendritic arbor (NIPS talk, 2013), to circuit dynamics (new PLoS Comp Biol paper), to the systems-level architecture of the brain, and to behaviour (NIPS, 2011). Hence, we can use different types of experimental data to constrain the models and make predictions for both electrophysiological and behavioural experiments.
  • This work provides a normative account for the richness and diversity of synaptic and neural plasticity. In particular, it shows that homeostatic plasticity plays a critical role in circuit function, both when learning efficient representations of sensory inputs [Savin et al, 2010] [Keck et al, 2012] and during efficient memory recall [Savin et al, 2011] [Savin et al, 2014].
  • Ambiguity and noise plague virtually all biologically relevant computation. Hence, neural circuits need to represent and appropriately deal with uncertainty. I study how this can be achieved in computational models (identifying general principles for representing uncertainty in neural circuits via sampling [Savin and Deneve, 2014], as well as specific applications, in particular representing uncertainty about information retrieved from memory [Savin et al, 2014]) and use data analysis to validate these representations experimentally (in ferret V1, with data from the Fiser lab).

Recent and ongoing projects:

  • Statistical description (maximum entropy models) of the activity of neural populations in area CA1 of the hippocampus; in collaboration with Gasper Tkacik and Jozsef Csicsvari.
  • Circuit- and systems-level solutions for effective autoassociative memory recall; in collaboration with Mate Lengyel and Peter Dayan; see our NIPS 2011 and 2013 papers and [Savin et al, 2014].
  • Distributed codes for sampling from multi-dimensional, real-valued distributions; in collaboration with Sophie Deneve; contributed talk at Cosyne 2014 and NIPS 2014 spotlight.
  • Signatures of statistically optimal learning in neural activity; in collaboration with Jozsef Fiser and Mate Lengyel; see our recent technical report on arXiv, our Letter to the Editor in reply to Okun et al, 2012, and the abstracts for our two recent SfN talks.
  • The role of homeostatic mechanisms in learning efficient representations of sensory inputs [Savin et al, 2010], [Keck et al, 2012].
  • Reward-dependent learning in PFC: how task constraints shape neural representations in working memory circuits; contributed talk at Cosyne 2009, [Savin and Triesch, 2014].

For more details, have a look at my publications.

Last modified: 20 June 2016