Cosyne 2020 interesting posters

POSTER Session 2

29 Hippocampal replay slows down with experience, as greater detail is incorporated

When traveling through a new environment, the animal’s hippocampal replay first captures the coarse structure of the trajectories quickly, then gradually “inserts” greater detail into these sequences, which slows the replay down.

Thoughts:

Place cells show a “hover and jump” dynamic, meaning that replay events do not strictly follow the temporal order of experience; instead, they jump between discontinuous states. This aligns with the results of Liu et al. 2019: the hippocampus may use some mechanism to reorganize experience and form “reasonable” but never-experienced new sequences.

If the reorganizing process is like an annealing algorithm, i.e. associations in the state space start out randomly distributed and the hippocampus tries to organize them into a “plausible” arrangement, then what is the internal energy function of this process?

Question: why is replay always faster than the actual place-cell firing pattern?

Possible answer: it’s subsampling. The same set of neurons still fires because the spatial encoding is not one-hot.
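A minimal sketch of this subsampling idea (my own toy illustration, not from the poster; the Gaussian place fields, their width, and the activity threshold are all assumptions): subsampling a trajectory encoded by overlapping place fields compresses the sequence in time while leaving the set of active cells essentially unchanged.

```python
# Toy illustration (not from the poster): with distributed (non-one-hot) place
# coding, temporally subsampling a trajectory still activates essentially the
# same set of cells, just over fewer time bins.
import numpy as np

n_cells, n_steps = 50, 200
track = np.linspace(0.0, 1.0, n_steps)        # positions along a 1D track
centers = np.linspace(0.0, 1.0, n_cells)      # place-field centers
width = 0.05                                   # assumed field width

def population_activity(positions):
    """Gaussian place-field responses, shape (time, cells)."""
    return np.exp(-(positions[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

full = population_activity(track)              # "behavioral" traversal
replayed = population_activity(track[::10])    # "replay": every 10th position

active = lambda r: set(np.where(r.max(axis=0) > 0.5)[0])
print(len(active(replayed) & active(full)) / len(active(full)))   # ~1.0: same cells
```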

42 Writing information bottleneck theory in the form of 3-factor Hebbian learning

May be relevant to Talk 24.

3-factor Hebbian learning is an instance of the information bottleneck approach (Ma et al. 2019).

Global signal: e.g. classification error

Why is pre- and postsynaptic activity enough? The information bottleneck only involves successive layer representations: maximize the mutual information between the current layer and the output, and minimize the mutual information between the current layer and the input.

Question: how to implement this without external memory?
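For reference, a hedged sketch of the generic 3-factor form (global modulator × post × pre); this is just the textbook shape of the rule, with the classification error as an assumed third factor, not the poster’s specific derivation from the information bottleneck.

```python
# Generic 3-factor Hebbian update: a global scalar (here, an assumed
# classification-error signal) gates a local pre x post Hebbian term.
import numpy as np

rng = np.random.default_rng(0)
pre = rng.random(20)                       # presynaptic activity (previous layer)
post = rng.random(10)                      # postsynaptic activity (current layer)
W = rng.normal(scale=0.1, size=(10, 20))   # synapses from pre to post

global_error = 0.3                         # third factor, broadcast to every synapse
lr = 0.01

# dW_ij = lr * (global factor) * (postsynaptic factor)_i * (presynaptic factor)_j
W += lr * global_error * np.outer(post, pre)
```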

44 Experience-dependent context discrimination in the hippocampus

Hippocampal “remapping is equivalent to an optimal estimate of the identity of the current context under that prior”.

Try to link this with poster 29 in Session 2.

Neural tangent kernel; efficient coding.
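My reading of the quoted claim as a sketch (the two-context setup, the prior, and the likelihood values are all made-up assumptions): remapping corresponds to the posterior over context identity tipping from one context to the other.

```python
# Bayesian context estimation under a prior (values are made up):
# posterior over contexts = likelihood of current observations x prior.
import numpy as np

prior = np.array([0.7, 0.3])          # experience-dependent prior over two contexts
likelihood = np.array([0.1, 0.6])     # p(current sensory input | context)

posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)                      # "remapping" <-> the posterior tipping to context 2
```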

35 The role of hippocampal sequences in deliberative control: a computational and algorithmic theory

HPC-mPFC loop: mPFC for higher-level planning, dHPC for refining trajectories.

Sounds a lot like GOLSA paired with hierarchical navigation.

Also, their abstract mentions specific roles for SWRs and the theta rhythm in their framework.

Maybe we should contact them and ask what they have finished.

121 Optimal dendritic processing of spatiotemporal codes

104 Using noise to probe network structure and prune synapses

Note that “network” here means a linear recurrent neural network.

“Determining the importance of a synapse between two neurons is a hard computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them”.

They construct “a simple local anti-Hebbian plasticity rule that prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons.”

“The plasticity rule is task-agnostic–it seeks only to preserve certain properties of network dynamics.”

They prove that “for a class of linear networks the pruning rule preserves multiple useful properties of the original network (including resting-state covariance and the spectrum)”.

Questions:

How could this trick be applied to task-relevant learning?

Nonlinear case?

Core idea: perturbation and stability determine the importance of synapses.
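A rough sketch of the ingredients only (the scoring heuristic combining |w_ij| with the covariance is my own guess at the spirit of the rule, not their anti-Hebbian formulation, and nothing here is shown to preserve the spectrum):

```python
# Sketch: a noise-driven linear recurrent network, its activity covariance,
# and a pruning score built from weight and covariance. The particular score
# used here is a guess, not the poster's rule.
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.normal(scale=0.5 / np.sqrt(N), size=(N, N))   # stable linear recurrent net

# Drive the network with white noise: x_{t+1} = W x_t + noise_t
T = 20000
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = W @ X[t - 1] + rng.normal(scale=0.1, size=N)

C = np.cov(X.T)                                        # noise-driven covariance

# Score each synapse by |w_ij| times the co-fluctuation of the neurons it
# connects, then prune the lowest-scoring 20% (assumed heuristic).
score = np.abs(W) * np.abs(C)
W_pruned = np.where(score > np.quantile(score, 0.2), W, 0.0)
```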

65 Differential covariance

Differential covariance: A new method to estimate functional connectivity in fMRI

64 Goal-directed state space models of mouse behavior

Has something to do with inverse reinforcement learning.

50 Biologically plausible supervised learning via similarity matching

Motivation: observations in the ventral visual pathway and trained deep networks.

In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar.

Paper here: https://arxiv.org/pdf/2002.10378.pdf
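A sketch of a supervised similarity-matching objective of the kind this line of work optimizes (the Frobenius-norm form and the label-term weight alpha are assumptions; the poster’s contribution is the biologically plausible online circuit, which is not reproduced here):

```python
# Supervised similarity matching as an objective: make the output similarity
# (Gram) matrix track both the input similarities and the label similarities.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 64))      # inputs: samples x input dims
Y = rng.normal(size=(50, 10))      # learned representations: samples x output dims
L = rng.normal(size=(50, 3))       # supervision (e.g. one-hot labels or embeddings)

sim = lambda A: A @ A.T            # sample-by-sample similarity matrix
alpha = 1.0                        # assumed weight on the supervised term

loss = (np.linalg.norm(sim(Y) - sim(X)) ** 2
        + alpha * np.linalg.norm(sim(Y) - sim(L)) ** 2) / X.shape[0] ** 2
```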

46 The interplay between randomness and structure during learning in RNNs

Paper here: https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.2.013111

Question: “when considering a specific task, the network’s connectivity contains a component which is related to the task and another component which can be considered random.” Why is this decomposable?
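A sketch of the decomposition itself (sizes, rank, and gain are arbitrary here). One way such a split can be made well defined in this line of work is relative to the initial random connectivity, treating the learned change as the structured, low-rank part.

```python
# Connectivity = full-rank random part + low-rank task-related part.
import numpy as np

rng = np.random.default_rng(3)
N, rank, g = 500, 1, 0.8

chi = rng.normal(scale=g / np.sqrt(N), size=(N, N))   # random component
m = rng.normal(size=(N, rank))                         # task-related loadings
n = rng.normal(size=(N, rank))

W = chi + (m @ n.T) / N                                # full recurrent connectivity
```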

47 Meta-learning Hebbian plasticity for continual familiarity detection

Model: a feedforward network with ongoing plasticity in its synaptic weight matrix. Task: recognize whether a stimulus has been presented before in a continuous data stream.

Result: an anti-Hebbian plasticity rule (co-activated neurons cause synaptic depression) enables repeat detection over much longer intervals than a Hebbian one, and this is the solution most readily found by meta-learning.

Question: what kind of meta-learning algorithm is used in the exploration of local learning rules?
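A toy sketch of anti-Hebbian repeat detection (my own construction, not their meta-learned model; the quadratic readout and the learning rate are assumptions): co-activated synapses are depressed, so a repeated stimulus evokes a clearly negative response while novel inputs read out near zero.

```python
# Toy anti-Hebbian repeat detector.
import numpy as np

rng = np.random.default_rng(4)
d, eta = 100, 0.5

def present(x, W):
    """Return (familiarity score, updated weights) under the anti-Hebbian rule."""
    score = x @ (W @ x)                 # more negative => more familiar
    W = W - eta * np.outer(x, x)        # anti-Hebbian: depress co-activated pairs
    return score, W

unit = lambda v: v / np.linalg.norm(v)
a, b = unit(rng.normal(size=d)), unit(rng.normal(size=d))

W = np.zeros((d, d))
s, W = present(a, W); print(s)          # ~0: a is novel
s, W = present(b, W); print(s)          # ~0: b is novel (little overlap with a)
s, W = present(a, W); print(s)          # ~ -0.5: a is a repeat
```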

51 Disentangling the roles of dimensionality and cell classes in neural computations

Train RNNs under dimensionality constraints using theoretically tractable recurrent networks: low-rank, mixture-of-Gaussians RNNs.

“In these networks, the rank of the connectivity controls the dimensionality of the dynamics, while the number of components in the Gaussian mixture corresponds to the number of cell classes”.

Important: they derive the minimum rank and number of cell classes needed to implement tasks of increasing complexity.

Implication: this could be used to measure the complexity of sequence tasks.
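A sketch of the connectivity family as I understand it (N, rank R, number of classes K, and the unit covariances are assumptions): each neuron’s loading vectors are drawn from a mixture of Gaussians, one component per cell class, giving a rank-R connectivity matrix.

```python
# Low-rank, K-class connectivity: loadings (m_i, n_i) drawn from a mixture of
# Gaussians with one component per cell class.
import numpy as np

rng = np.random.default_rng(5)
N, R, K = 300, 2, 3

labels = rng.integers(K, size=N)                   # cell-class label per neuron
means = rng.normal(size=(K, 2 * R))                # class-specific loading means

loadings = means[labels] + rng.normal(size=(N, 2 * R))
m, n = loadings[:, :R], loadings[:, R:]

J = (m @ n.T) / N                                  # rank-R recurrent connectivity
print(np.linalg.matrix_rank(J))                    # -> R
```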

60 Efficient and sufficient: generalized optimal codes

62 Evolution of firing patterns among different microcircuits in the cortical network

Paper here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5130073/pdf/f1000research-5-12941.pdf

78 Meta-learning biologically plausible learning algorithms with feedback and local plasticity

Paper here: https://openreview.net/pdf?id=HklfNQFL8H

Question:

It seems this algorithm will lead to increased generalization ability, but why?

POSTER Session 3

49 Continual learning, replay and consolidation in a forgetful memory network model

Inserting replay into new-task data streams improves the consolidation of old memories.

Their model shows “for the first time how a recurrent neural network can continuously learn and store selected information for lifelong timescales. We show that stochastic nonlinear replay of learned information results in an advantage for memory capacity.”

Question: Just inserting replay?
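For contrast, here is the generic rehearsal scheme that “inserting replay” usually refers to (their contribution is the stochastic nonlinear replay generated by the recurrent network itself, which this sketch does not capture; `interleave` and `replay_prob` are made-up names):

```python
# Generic rehearsal: interleave replayed old memories into the new-task stream.
# random.choice stands in for whatever stochastic replay mechanism is used.
import random

def interleave(new_task_stream, replay_buffer, replay_prob=0.3):
    """Yield new-task samples, occasionally inserting a replayed old sample."""
    for sample in new_task_stream:
        yield sample
        if replay_buffer and random.random() < replay_prob:
            yield random.choice(replay_buffer)
```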

56 Recurrent neural network dynamics are dependent on learning rule

“We trained RNNs using four learning rules: a genetic algorithm (GA), gradient descent via backpropagation-through-time (BPTT), first order reduced and controlled error (FORCE) (Sussillo & Abbott, 2009), and a Hebbian-inspired learning rule (Miconi, 2017).”

“First, we used tensor component analysis (TCA) to show that RNNs trained with different learning rules have differences in task representation and learning dynamics.”

Look at the abstract.
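A minimal sketch of the TCA step (the neurons × time × trials layout, the rank, and the use of tensorly’s parafac are all assumptions, not from the poster):

```python
# TCA on a neurons x time x trials tensor via the CP/parafac decomposition.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(6)
data = rng.random((80, 120, 40))              # neurons x time x trials

weights, factors = parafac(tl.tensor(data), rank=5)
neuron_f, time_f, trial_f = factors           # one factor matrix per mode
# trial_f shows how each component changes across trials (learning dynamics),
# which is what you would compare across the four learning rules.
```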

78 Canonical correlation analysis of the cortical microcircuit

88 Efficient coding in large networks of spiking neurons with tight-balance

94 NeuroCaaS

Paper: https://www.biorxiv.org/content/biorxiv/early/2019/11/18/837567.full.pdf Website: neurocaas.com

14 A theory of learning with a constrained weight distribution

Beautiful and interesting.

113 Brain-inspired replay in artificial neural networks for multi-task learning

36 Grid cells

22 Natural gradient learning for spiking neurons

TALKS

9 Human planning

Wei Ji Ma’s task: 4-in-a-row

Similar task: 2048

24 Gradient-based learning with Hebbian plasticity in structured and deep neural networks

Paper: Structured and Deep Similarity Matching via Structured and Deep Hebbian Networks