MIT Technology Review has an interesting article at the nexus of computing and artificial intelligence: a DeepMind project that attempts to reproduce human short-term memory in silico.
DeepMind’s breakthrough follows a long history of work on short-term memory. In the 1950s, the American cognitive psychologist George Miller carried out one of the more famous experiments in the history of brain science. Miller was interested in the capacity of the human brain’s working memory and set out to measure it with the help of a large number of students whom he asked to carry out simple memory tasks.
Miller’s striking conclusion was that the capacity of short-term memory cannot be defined by the raw amount of information it contains. Instead, Miller concluded that working memory stores information in the form of “chunks” and that it can hold approximately seven of them, his famous “magical number seven, plus or minus two.”
That raises the curious question: what is a chunk? In Miller’s experiments, a chunk could be a single digit such as a 4, a single letter such as a q, a single word, or a small group of words that together carry some specific meaning. So a chunk can represent anything from a tiny amount of information to a complex idea equivalent to a great deal of it.
But however much information a single chunk represents, the human brain can store only about seven of them in its working memory.
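The trade-off at the heart of chunking can be sketched with a toy example (my illustration, not from the article): a ten-digit phone number overshoots the roughly seven-item limit if each digit is held separately, but regrouping it into a few familiar units brings it comfortably under.

```python
# Illustrative sketch of chunking, not code from the article or from DeepMind.
# The digit string and group sizes below are hypothetical examples.

def chunk(digits: str, sizes: list[int]) -> list[str]:
    """Split a digit string into consecutive chunks of the given sizes."""
    chunks, pos = [], 0
    for size in sizes:
        chunks.append(digits[pos:pos + size])
        pos += size
    return chunks

number = "5551234567"
print(len(number))                 # 10 separate digits to hold in memory
groups = chunk(number, [3, 3, 4])  # grouped as area code, prefix, line number
print(groups)                      # ['555', '123', '4567']
print(len(groups))                 # only 3 chunks, well within Miller's limit
```

The point of the sketch is that the total information is unchanged; only the number of units working memory must track goes down.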