Hebbian Learning and Plasticity I: The Maths Behind How Your Brain Learns (and Unlearns)

Did you know that your brain rewires itself all the time when you learn new things?

And have you ever found yourself asking the question:

“What exactly is going on in our brains when we try to learn something?”

If truth be told, I ask myself this very question every day. I think that trying to answer it helps us reflect on what we are and, more importantly, how special we are. Learning comes very naturally to us, but do you really know what is going on in your brain during the learning process? By the end of this article, I hope you will understand what it means to learn, from a theoretical perspective through to the computational neuroscience point of view, where learning is explained mathematically in terms of neural activity. The Hebbian learning rule is implemented in many machine learning algorithms today, so this article will serve as the building block for my upcoming series on Spiking Neural Networks and is also part of my Hebbian Learning and Plasticity series.

Before reading this, you should be familiar with how neurons communicate. If you’re not, I suggest you read the first section of my article on ANNs.

And lastly, before diving in, I want you to remember one thing:

“Neurons That Fire Together Wire Together”

What is Learning?

Definitions of learning vary widely across disciplines, shaped largely by the different approaches used to assess it. At its core, learning can be defined as a process that results in a change in knowledge or behaviour as a result of experience. Many learning activities make use of the brain’s reward system. You know you should not play with fire because you got burnt as a little kid. You know you will be very happy if someone gives you a box of chocolates (maybe not you, but definitely me). You know how to learn Norwegian based on your knowledge of Swedish. All of this “past experience” is the result of your brain taking in information and storing it in memory, where it can, hopefully, be applied to new knowledge, which then leads to an update of your current state of knowledge. Thus, learning and memory are strongly correlated, particularly declarative memory, which contains the memory of facts (e.g. the name of the prime minister) and events (e.g. your hiking trip last summer).

https://human-memory.net/types-of-memory/

Synaptic Plasticity and Long-Term Potentiation

Recall, in short, that when an action potential from neuron A arrives at the synapse, it causes either an excitatory or an inhibitory response in the receiving neuron B, and this change can be measured as an Excitatory or Inhibitory Postsynaptic Potential (EPSP or IPSP). The synaptic strength is said to be stronger when it shows an increase in the EPSP, meaning that the postsynaptic neuron is more likely to fire an action potential.

Neuroplasticity, or simply plasticity, is defined as the ability of the brain to physically and selectively change its connectivity and the strength of its synapses. Just like plastic, your brain goes through a series of changes throughout life, forming new connections when it needs to. According to Gerstner (2011), different forms of learning are actually the result of dynamical changes in the strength of synapses. This goes back to the quote at the beginning of the article, “neurons that fire together wire together”, which is a rough summary of the Hebbian theory. Forms of plasticity are differentiated by how long a stimulus from the presynaptic neuron A can increase the EPSP, or simply put, how good neuron A is at exciting neuron B.

Simply put, synaptic plasticity can be divided into:

(1) Short-term Plasticity (STP): where the increase only lasts for one or a few seconds.

(2) Long-term Potentiation (LTP): in which the increase can last from seconds up to months. Or you can say neuron A is good at exciting neuron B for a longer period of time! Conversely, if neuron A suppresses neuron B for a long period of time, this is called Long-term Depression (LTD). Sometimes, a stimulus only three seconds long can elevate the EPSP for minutes or hours.

Gerstner (2011)

Both LTP and LTD are thought to be the building blocks of how learning happens in the brain.

Hebbian Learning

The majority of the existing synaptic theories of learning today are, in some way, influenced by Hebbian learning, which arose from the Hebbian theory, a theory that attempts to explain synaptic plasticity, introduced by Donald Hebb in 1949.

“When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Hebbian learning is powerful when it comes to studying the process of learning, since it implies that the connection between two neurons reflects their past correlated activity, and that the change in the strength of that connection represents the association. Let us use the example below:

Gerstner (2011): Representation of Hebbian Learning in Humans

Let’s say the person in the figure above sees a banana for the first time, and that he has 10,000 neurons in a network that have to work together to learn about this banana (let’s depict all those thousands of neurons as 4, 5, 6, and 9). Those neurons might be processing the smell, shape, texture, taste, colour, or the environment associated with that banana. Now, to learn about this banana and be able to store it in memory, neurons 4, 5, 6, and 9 have to be switched on together, and according to Hebbian learning, this co-activation will lead to a strengthening of their connections. At this point, the memory concept of “banana” has been formed.

Next, let’s say the person encounters a cue that resembles a banana the next day; maybe something with a similar smell, or colour and shape. The neurons that are responsible for such cues and are also part of the “banana” concept (e.g. neuron 5) will become active and fire action potentials to the neighbouring neurons. Those that were previously associated with the “banana” concept (4, 6, and 9) will also become activated and mutually fire in a cascade, thanks to the strengthened past connections, while the remaining irrelevant neighbouring neurons stay inactive, or not as active, due to their weaker connections. Repeat this a few times (also known as iteration) and you get a solid memory of a “banana” that can be retrieved, as the sketch below illustrates.
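As a toy illustration of this story, here is a minimal sketch in Python (NumPy). The network size, learning rate, number of repetitions, and recall threshold are illustrative assumptions of mine, not values from the original figure:

```python
import numpy as np

# A tiny rate-based network; indices 4, 5, 6, 9 mirror the figure's neurons.
n_neurons = 10
eta = 0.5                                # learning rate (illustrative)
W = np.zeros((n_neurons, n_neurons))     # synaptic weight matrix

# The "banana" concept: neurons 4, 5, 6 and 9 are co-active.
banana = np.zeros(n_neurons)
banana[[4, 5, 6, 9]] = 1.0

# Hebbian learning: repeatedly strengthen connections between co-active
# neurons (outer product of the activity with itself).
for _ in range(5):                       # a few repetitions ("iterations")
    W += eta * np.outer(banana, banana)
np.fill_diagonal(W, 0.0)                 # no self-connections

# Recall: a partial cue activates only neuron 5 (e.g. a similar smell).
cue = np.zeros(n_neurons)
cue[5] = 1.0

# One step of activity spreading through the learned connections.
activation = W @ cue
recalled = activation > activation.max() / 2
print(np.nonzero(recalled)[0])           # -> [4 6 9]: the rest of the concept
```

Cueing only neuron 5 reactivates 4, 6, and 9 through the strengthened weights, while the other neurons stay silent: exactly the pattern-completion behaviour described above.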

The Mathematical Formulation

This section is intended for readers who are familiar with calculus and differential equations, and may not be suitable for lay readers.

In order to find a mathematically formulated Hebbian learning rule, we focus on a single synapse with efficacy w_ij from presynaptic neuron j to postsynaptic neuron i. The activity of the presynaptic neuron is denoted by V_j and that of the postsynaptic neuron by V_i.

There are two very important aspects of Hebb’s postulate: locality and correlation.

(1) Locality: the change in synaptic efficacy can only depend on local variables (such as the pre- and postsynaptic firing rates and the current value of the synaptic efficacy itself, EXCLUDING the activity of ANY other neurons). The general formula is defined in terms of a differential equation below.
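The original equation image is not reproduced here, so below is a reconstruction following Gerstner’s standard notation; the exact form shown in the figure is an assumption.

```latex
\frac{d}{dt}\, w_{ij} = F\left(w_{ij};\; V_i,\, V_j\right)
```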

A general formula based on the locality of Hebbian plasticity

(2) Joint Activity or Correlation: both the pre- and postsynaptic neurons are required to be active for a synaptic weight change to occur. With this property, we can make some assumptions about the undetermined function F by expanding it in a Taylor series about V_i = V_j = 0, assuming F is well-behaved. If you recall, a Taylor series is used to approximate (guess) what a certain function may look like.
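The missing figure presumably showed the expansion up to second order; a reconstruction (the truncation point is an assumption, with coefficient names matching the caption below) is:

```latex
\frac{d}{dt}\, w_{ij} = c_0(w_{ij})
  + c_1^{\mathrm{pre}}(w_{ij})\, V_j
  + c_1^{\mathrm{post}}(w_{ij})\, V_i
  + c_2^{\mathrm{corr}}(w_{ij})\, V_i V_j
  + c_2^{\mathrm{pre}}(w_{ij})\, V_j^2
  + c_2^{\mathrm{post}}(w_{ij})\, V_i^2
  + \mathcal{O}(V^3)
```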

Taylor expansion of the rate of change of the synaptic efficacy. Here, the term c2(corr) acts as the ‘AND’ condition for the correlation between the two neurons. The value of c2(corr) must be > 0 for the Hebbian learning rule to apply.

The simplest choice for our function is to fix c2(corr) at a positive constant while setting all other terms in the Taylor expansion to zero. This results in a prototype of Hebbian learning:
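Reconstructing the missing equation from the surrounding text (keeping only the correlation term):

```latex
\frac{d}{dt}\, w_{ij} = c_2^{\mathrm{corr}}\, V_i V_j, \qquad c_2^{\mathrm{corr}} > 0
```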

As one can see, if F were independent of w_ij, then the synaptic efficacy could grow infinitely when repetitively stimulated by the same stimulus over time. To avoid an explosion of weights, we can make the parameter c2(corr) tend to zero as w_ij approaches its maximum value, say w_max = 1, where Υ (upsilon) is a positive constant:
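A reconstruction of the soft-bound rule the figure likely showed; the linear factor (1 - w_ij) is the standard choice in Gerstner’s treatment and is assumed here:

```latex
c_2^{\mathrm{corr}}(w_{ij}) = \Upsilon\, (1 - w_{ij})
\quad\Rightarrow\quad
\frac{d}{dt}\, w_{ij} = \Upsilon\, (1 - w_{ij})\, V_i V_j
```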

Achieving saturation of synaptic weights

Important note: a learning rule with c2(corr) = 0 and only first-order terms (such as c1(post) ≠ 0 or c1(pre) ≠ 0) would be called non-Hebbian or anti-Hebbian plasticity, because pre- or postsynaptic activity alone influences the change in synaptic efficacy, which ignores the correlation aspect of Hebb’s principle. Therefore, for a formulation to qualify as Hebbian, c2(corr) must be positive.
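To see the saturation behaviour concretely, here is a minimal numerical sketch of the soft-bound rule above; the values of Υ, the firing rates, and the step size are illustrative assumptions, not taken from the article:

```python
# Euler integration of dw/dt = Y * (1 - w) * Vi * Vj, the soft-bound
# prototype rule: the weight grows only while both rates are positive,
# and it saturates at w_max = 1 instead of exploding.
Y = 0.5             # upsilon, a positive constant (illustrative)
Vi, Vj = 1.0, 1.0   # constant pre- and postsynaptic rates (illustrative)
dt = 0.1            # integration step
w = 0.0             # initial synaptic efficacy

for step in range(100):
    w += dt * Y * (1.0 - w) * Vi * Vj

print(round(w, 3))  # ~0.994: the weight has saturated near w_max = 1
```

Because the growth term is multiplied by (1 - w), every update shrinks as w approaches w_max, so repeated stimulation can no longer blow the weight up.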

In the next part, we will dive deeper into other Hebbian learning rules, such as the Bienenstock-Cooper-Munro rule, as well as the application of Hebbian learning and the development of receptive fields.
