Evolution is one clever fellow. Next time you’re strolling about outdoors, pick up a pine cone and take a look at the layout of its bract scales. You’ll find an unmistakable geometric structure. In fact, this same structure can be seen in the petals of a rose, the seeds of a sunflower and even the cochlea of your inner ear. Look closely enough, and you’ll find this spiraling structure everywhere. It’s based on a series of integers called the Fibonacci sequence. Leonardo Bonacci described the sequence while trying to figure out how many rabbits he could breed starting from a single pair. It’s quite simple: add the two most recent numbers together to get the next one in the sequence. Starting from zero, this gives you 0-1-1-2-3-5-8-13-21 and so on. If you render this sequence as geometric shapes, you can draw square tiles whose side lengths are the values in the sequence. Connect the opposite corners of these tiles with a continuous curve, and you end up with the spiral you saw in the pine cone and other natural objects.
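The rule is easy to verify in a few lines of code. Below is a minimal Python sketch (the function name fibonacci is our own, not a library call); note that the ratio of successive terms converges on the golden ratio, which is what gives the tiled squares their spiral proportions.

    def fibonacci(n):
        """Return the first n Fibonacci numbers, starting from 0."""
        sequence = [0, 1]
        while len(sequence) < n:
            sequence.append(sequence[-1] + sequence[-2])
        return sequence[:n]

    print(fibonacci(9))        # [0, 1, 1, 2, 3, 5, 8, 13, 21]
    seq = fibonacci(20)
    print(seq[-1] / seq[-2])   # ~1.618, the golden ratio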


So how did mother nature discover this geometric structure? Surely it does not know math. How, then, can it come up with such intricate and sophisticated structures? It turns out that the Fibonacci spiral is the most efficient way of packing the most material into the least amount of space. And if one takes natural selection seriously, this makes perfect sense: eons of trial and error in the service of making the most copies of itself have stumbled upon a mathematical principle that permeates life on Earth.


The Homo sapiens brain is the product of this same evolutionary process, one that has been running for an estimated seven million years. It would be foolish to think that the kind of efficiency natural selection stumbled across in the pine cone would be absent from the modern human brain. I want to impress this idea of efficiency upon you. Natural selection arrived at the Fibonacci sequence solely because it is the most efficient way to do a particular task. If the brain’s task is to store information, it is perfectly reasonable to expect that millions of years of evolution have honed it to do so in the most efficient way possible as well. In this article, we shall explore this idea of efficiency in data storage, and leave you to ponder its applications in computer science.

EFFICIENCY

The following is a thought experiment meant to illustrate how seven million years of evolution might make data storage more efficient. Some will notice similarities with data compression techniques and programming strategies for embedded systems with minimal resources. However, the point of this exercise is to demonstrate the thought process behind efficiency to people of all skill levels.

Let us right-click on our desktop and create two text files. It doesn’t matter what we name them, but the contents of the first shall be “THIS IS FILE ONE” and the contents of the second shall be “THIS IS FILE TWO”. Let us now save each to a location on our hard drive. These two files are now stored in non-volatile memory, each occupying its own memory space. The inefficiency is obvious: the files are identical except for a single word, yet we store two complete copies. While it is easy to think nothing of a few bytes when we have terabytes of storage, if we want to take the idea of efficiency seriously, we need to see this through. We must ask ourselves: how can we make this process more efficient?
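We can make the waste concrete with a few lines of Python (a rough sketch; the strings simply stand in for our two files):

    # The two files differ by one word, yet each stores a full copy of
    # the shared text.
    file_one = "THIS IS FILE ONE"
    file_two = "THIS IS FILE TWO"

    shared = [a for a, b in zip(file_one.split(), file_two.split()) if a == b]
    print(shared)                                   # ['THIS', 'IS', 'FILE']
    print(len(file_one) + len(file_two), "bytes stored in total")        # 32
    print(len(" ".join(shared)), "of those bytes are pure duplication")  # 12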

Let us now create a look-up table that has the complete word set of our language. Each word is given a label. For this thought experiment, let us keep things simple and label them as such:

  • 01 – THIS
  • 02 – IS
  • 03 – FILE
  • 04 – ONE
  • 05 – TWO

Now the contents of our text files will be “01 02 03 04” and “01 02 03 05”. This is certainly an improvement in efficiency, but there is an issue with our look-up table. Consider the words “THIS” and “IS”: the word “THIS” contains the word “IS”. Moreover, all of our words are built from combinations of the same 26 symbols.
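Here is the word-level table as a Python sketch (the encode and decode helpers are hypothetical names for this thought experiment, not a real compression API):

    # Word-level look-up table, with labels matching the list above.
    word_table = {"01": "THIS", "02": "IS", "03": "FILE", "04": "ONE", "05": "TWO"}
    label_for = {word: label for label, word in word_table.items()}

    def encode(sentence):
        """Replace each word with its label."""
        return " ".join(label_for[word] for word in sentence.split())

    def decode(labels):
        """Replace each label with its word."""
        return " ".join(word_table[label] for label in labels.split())

    print(encode("THIS IS FILE ONE"))   # 01 02 03 04
    print(decode("01 02 03 05"))        # THIS IS FILE TWO

Thus, we must make another look-up table within our first to increase efficiency: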

  • 06 – T
  • 07 – H
  • 08 – I
  • 09 – S

The label 01 for the word now points into a look-up table that supplies labels for the individual letters. The word “THIS” becomes “06 07 08 09” and the word “IS” becomes “08 09”. This is better, but there is still inefficiency in our alphabet table. The letter “T” is composed of a single vertical line and a single horizontal line, while the letter “H” is composed of two vertical lines and one horizontal line. For ease of explanation, let the letter “S” be composed of arcs. Since the letters “T” and “H” consist of the same kinds of line segments, we can create yet another look-up table to deal with this.

  • 10 – ─ (a horizontal line segment)
  • 11 – | (a vertical line segment)
  • 12 – c (‘c’ represents the arcs in the letter “S”)

If you map out our little thought experiment, you get a clear hierarchical structure. In this way, all words are ultimately stored as nothing but line segments and arcs. The letter “T” is composed of the same primitive data as the letter “H”, so there is no sense in storing that data in two separate locations. Instead, you simply store a label that points to the line segment. This hierarchical process is repeated all the way up the chain until you arrive at the entire memory (or data segment). In this case, line segments come together to form letters, which come together to form words, which come together to form the sentences in our text files.
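As a rough Python sketch of that hierarchy (the stroke decompositions are the simplified ones from the tables above, not real typography), every level stores only labels pointing into the level below:

    # Each level of the hierarchy stores labels into the level below, so
    # the stroke data shared by "T" and "H" is never duplicated.
    strokes = {"10": "horizontal", "11": "vertical", "12": "arc"}
    letters = {"06": ["11", "10"],        # T: one vertical, one horizontal
               "07": ["11", "11", "10"],  # H: two verticals, one horizontal
               "08": ["11"],              # I: one vertical (simplified)
               "09": ["12", "12"]}        # S: arcs
    words = {"01": ["06", "07", "08", "09"],  # THIS
             "02": ["08", "09"]}              # IS

    def expand(word_label):
        """Walk the hierarchy down to the raw strokes of one word."""
        return [strokes[s] for letter_label in words[word_label]
                           for s in letters[letter_label]]

    print(expand("02"))   # the strokes that make up "IS"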

This same process can work with images. One can break the image of a truck down into polygons, then into simple shapes, then into line segments and arcs. The circles of the wheels are the same data that would represent the circle of a dinner plate, or the full moon. The straight lines that run along the doors and windows are the same data you see in the text you are reading right now. It is a remarkably efficient way to store information, and exactly what one would expect to see after millions of years of natural selection.

INVARIANCE

Many neuroscientists will agree that the brain stores memories in an invariant form. Songs, for instance, are stored as pitch-invariant memories. The memory of a song in the key of C is the same as the memory of that song in the key of G. When you hear the song, the memory recalled is the same despite the different incoming frequencies. This is true no matter the key, instrument or style of the song. It turns out that storing data in a hierarchical form, as described above, is closely related to invariance. All you have to do is provide a little feedback between the hierarchical layers, and allow higher layers to make changes to lower layers in order to meet a prediction. This process creates what are known as invariant representations.
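Pitch invariance itself can be sketched in a few lines: store a melody as the intervals between notes rather than the absolute pitches, and every key yields the same representation. (The MIDI note numbers below are our own illustrative choice.)

    def intervals(midi_notes):
        """Reduce a melody to the semitone steps between successive notes."""
        return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

    melody_in_c = [60, 62, 64, 60]   # C D E C
    melody_in_g = [67, 69, 71, 67]   # the same tune, transposed to G

    print(intervals(melody_in_c) == intervals(melody_in_g))   # True

Let us explore this concept with another thought experiment.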

Imagine there is a piece of paper in front of you with “THIS IS FILE ONE” written in crayon by a young child. The letters are distorted and oddly shaped. Your job is to read the sentence using our hierarchical structure. The identification process starts at the very bottom, by looking at the individual line segments. Imagine you make out the first three letters of the word “THIS”, but the letter “S” comes up the hierarchy as the number “8”; the child wrote the letter backwards, then tried to correct it. So now we have a problem: where there should be “THIS” there is instead “THI8”.

The hierarchical structure of our data storage can deal with this ambiguity by using prediction, and then providing feedback to the layers below. The layer responsible for identifying words is able to predict (via past experience) that the combination of the letters “T”, “H” and “I” should be followed by an “S”, not an “8”. The word layer will either tell the layer below that it should be passing up an “S” rather than an “8”, or it will pass the word “THIS” up to the layer above it despite what is coming from below. This method of hierarchical feedback combined with prediction creates invariant representations. You can read “THI8 I8 FILE ONE” without any issue because of this process. You see “THI8 I8”, but the words recalled are “THIS IS”, because the words “THIS” and “IS” are stored in an invariant form. Note that it would be more difficult to recall the correct word memories without the rest of the sentence: “FILE ONE”. This is an example of prediction; the brain is using the context of the entire sentence to pull up the correct word memories.
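A toy version of that feedback step might look like the following (the vocabulary and the nearest-match rule are deliberate simplifications for this thought experiment, not a model of real cortex):

    # The word layer knows a small vocabulary and overrides the letter
    # layer whenever a near-miss arrives from below.
    VOCABULARY = {"THIS", "IS", "FILE", "ONE", "TWO"}

    def word_layer(letters_from_below):
        """Pass a known word up unchanged; otherwise predict the closest one."""
        if letters_from_below in VOCABULARY:
            return letters_from_below
        # Prediction: the vocabulary word differing in the fewest positions.
        candidates = [w for w in VOCABULARY if len(w) == len(letters_from_below)]
        return min(candidates,
                   key=lambda w: sum(a != b for a, b in zip(w, letters_from_below)))

    print(" ".join(word_layer(w) for w in "THI8 I8 FILE ONE".split()))
    # THIS IS FILE ONE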

We have shown that natural selection should lead to efficiency, and that efficiency and invariance are closely related. These are powerful concepts in the arena of data storage, and they are nothing like how modern computers store data.

YOUR THOUGHTS

Is it possible to replicate nature’s method of memory storage in a computer? One must also consider the resources currently available. Let’s face it: memory is plentiful and cheap. Do we really need to be this efficient with memory storage? I will leave it to you to answer these thought-provoking questions in the comments below.


