Quantifying barcodes of dendritic spines using entropy-based metrics

Spine motility analysis has become a mainstay for investigating synaptic plasticity, but its versatility is limited by the complex, non-automated instrumentation it requires. We describe an entropy-based method for characterizing the spatial distribution of dendritic spines that allows spine motility to be estimated from still images. This method has the potential to extend the applicability of spine motility analysis to ex vivo preparations.

Known since the time of Cajal, dendritic spines are a prominent subdomain of most neurons, thought to play a role in synaptic plasticity and memory storage [1]. Indeed, spine density (the number of spines per unit length of dendrite) has become one of the most widely used morphological correlates of plasticity in neurobiology [2,3]. However, this approach is at odds with in vivo two-photon microscopy data demonstrating that synaptic plasticity can occur without any change in the total number of spines. Empirical data have now documented that it is the number of 'stable' spines over time, rather than the total number of spines, that is the reliable measure for studying synaptic plasticity [4]. Methods that measure spine motility/turnover therefore promise better insight into the physiology of dendritic spines [5]. Unfortunately, such methods require complex instrumentation and a technically demanding setup, which limits their extension to high-throughput systems and has contributed to a widespread failure to address the spatial distribution of spines. Here, we introduce the notion that the spatial distribution of spines along a dendrite can be represented as a bar code carrying information about the local network, which in turn is related to spine motility. We present a new method to indirectly assess changes in spine dynamics, based on entropy measures from information theory, which considers the spatial distribution of the spines rather than their abundance.
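To make the bar-code idea concrete, a spine pattern can be treated as a symbol sequence (e.g. spine present/absent per dendritic bin) and its regularity quantified with sample entropy. The sketch below is a minimal illustration of the standard sample-entropy definition, not necessarily the exact algorithm used in this study; the barcodes, bin size, and parameter values (m, r) are hypothetical.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance) and A counts the same for length
    m+1, excluding self-matches. Low values indicate a regular, repetitive
    series; higher values indicate a more disordered one."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def matches(mm):
        # All overlapping templates of length mm.
        t = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B = matches(m)
    A = matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

# Hypothetical spine "bar codes": 1 = spine present in a bin, 0 = absent.
regular_barcode = np.tile([1, 0], 30)                         # strictly alternating
random_barcode = np.random.default_rng(0).integers(0, 2, 60)  # disordered

print(sample_entropy(regular_barcode))  # near 0: highly predictable pattern
print(sample_entropy(random_barcode))   # larger: more information per symbol
```

The point of using sample entropy rather than plain symbol-frequency entropy is that it is order-sensitive and behaves reasonably on the short series imposed by finite dendrite length.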
Entropy is a useful measure of disorder and complexity in data series [6], and related quantities, such as sample entropy, are better suited to short, noisy time series [7]. It should be noted that the length of a data series can be an important limitation when the aim is to estimate the amount of information (entropy) stored in the data; in the case of dendritic spines, the length of the series is bounded by the finite size of the dendrite, so the analysis requires appropriate algorithms. Entropy (H) is a measure of the amount of information (classically measured in bits) required to describe a system. A sequence of random data has high entropy, whereas a stream of very uniform, repetitive data has low entropy, since less information is needed to describe it.

Methods

To explore whether entropy is a suitable approach for measuring the distribution of dendritic spines, we used a transcranial two-photon imaging system to follow, over a 30-min time window, identified spines of cortical neurons expressing GFP. Spine motility/turnover (we use these terms interchangeably throughout the text) was determined by measuring changes in spine length between consecutive time frames (see