My favorite definition of human consciousness is simply our access to a representation of parts of our internal mental states. In this post we'll elaborate on this definition from cognitive psychology's point of view, and discuss possible applications for machine learning.
I do not have any concrete expertise here. Most of what I gather below arises from my own thoughts on the topic, and I'll probably revisit this in a few years and laugh at my own naivety.
When people speak of consciousness, they often speak of the "subjective experience of life" or something of the sort. The problem with that definition is that while I can empathize with it, I cannot even verify its existence, as it has absolutely no effect on the world. If I can't measure it, it does not exist.
Given that consciousness, as defined above, has no scientific implications but immense emotional importance, we have two choices. We can call it quits and concede that consciousness is an illusion, or we can try to alter the definition a bit to get something which is measurable but remains semantically close to the usual one.
I think the option I like best is the first, mainly because the above definition of consciousness is well defined, albeit non-existent, but also because I don't like the delusion of the significance of consciousness. However, the practical definition I gave in the introduction resonates with my own intuitions and feelings about what the word "consciousness" should mean, so I think there is also value in this "abuse of notation".
Practical definition of consciousness
The divided mind
Our brain is composed of many connected modules, operating simultaneously and in synchronization, without a single commanding ruler. It usually seems like the brain is guided by a linear, coherent cognitive pattern because the brain makes a lot of effort to seem this way, using confabulations, emotions, and even changes to our own memories. This claim is important and not trivial, so I'll give two justifications below, but you should go ahead and read Jonathan Haidt's works (the first chapter of "The Happiness Hypothesis", titled "the divided mind", is a great start, and elaborates on the justifications below).
- Tests on split-brain patients show that the two hemispheres act independently, and that the patient is incredibly creative in conjuring (clearly false) explanations for his actions, even when the language center sits in a different hemisphere from the parts that initiated the action. For example, in one such experiment a patient's right hemisphere was shown a picture of a naked person, and the patient giggled. When asked "why are you laughing?", the patient replied "because you ask funny questions".
- There is a big disconnect between the prefrontal cortex, where most logical thought lies, and the limbic system, which is in charge of our automatic and emotional responses. We all have many examples of situations where we acted one way even though we knew it was wrong. How is it possible to think (or even speak) one way while acting in the opposite way, if there is a single controller? Well, I guess it is possible, but it is definitely more consistent with the alternative explanation. This is an example of cognitive dissonance.
Consciousness as the coherent self
So while the brain has many parts which have to cooperate (or possibly compete) with one another, it does not seem that we experience the world in that way. It is clear to me right now that I'm working on my computer, feeling some pressure to finish this post (I thought it would take a few minutes), feeling curious, feeling warm in some places but cold in others, feeling a slight pain in the wrist, and so on. The fact that my language center has access to all this information is entirely non-trivial. Think about how little we know of what is happening in our digestive system, or that when we move our fingers we do not explicitly know which muscles are involved, and then consider how little of the brain's decision making is known to us. The way I sense my own thoughts while writing these words is by having words and sentences come out of nowhere, and then having a feeling of whether they are the right ones or not. If the feeling is positive, my hands take care of the rest and immediately transcribe the words to text. It absolutely seems like the majority of the thought process is subconscious, by which I simply mean that I cannot sense it (though maybe I can throw out some words explaining it which feel right).
This discussion leads to the definition of consciousness as something along the lines of
A coherent mental representation of some states of different parts of the brain, including itself, which can be “exported” to our output modules (the language center, for example).
I feel that this definition requires a more careful formulation, but the intuitive sense should be clear now.
Applications for ML
Benefits of consciousness
I am not sure in what way the consciousness module described above is important for our survival, but since evolution kept it, there is probably a good reason (even though it is possible that it just happened and there was no pressure to remove it, or that it is a by-product of something else, or that the reason is completely irrelevant for ML). So it is an important question, especially if we'd like to apply the idea to make machines smarter, and I am sure it is one being investigated by neuroscientists and psychologists. I do not intend to ponder it thoroughly for the moment, but here are some possible benefits of consciousness:
- Easier to communicate internal states to other people.
- The information in the consciousness can help make better logical decisions, since a compact model of the states of many different subroutines in the brain allows for decision making that weighs many parameters at once. For example, while hunting an animal, the decision of when to sprint and attack depends on many parameters: sensory inputs, a model of one's own physical state, a model of the animal's physical state, a model of the environment, a model of one's own emotional and mental state, a model of the plausible actions of other hunters in the group, and so on. The main point to notice is that there are many connections between different brain regions, and consciousness is a way to organize a lot of them so that the part of the brain capable of logical thought and representation has easy access to them.
- Another important concept is that of focus: what we are focusing on, which modules will be in charge of analyzing the situation, and so on. It seems likely that consciousness plays a role in the allocation of focus.
I think the general idea should be clear. We can add a sort of "consciousness NN" submodule that receives as input the activations of all the different parts of the model, and outputs a small representation of the internal state. This representation can then be passed to other submodules, to be used for one of the reasons mentioned in the section above.
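To make the idea concrete, here is a minimal sketch of such a "consciousness NN" submodule. Everything in it is an illustrative assumption of mine, not an established design: the module names, the dimensions, and the use of a single random linear bottleneck standing in for a learned layer. The point is only the information flow described above: many submodules run in parallel, a small bottleneck compresses the concatenation of their states into a compact summary, and that summary is fed back to every submodule.

```python
# Sketch of the "consciousness NN" submodule described above.
# All shapes and names are illustrative assumptions; the random weight
# matrices stand in for layers that would be learned in a real model.

import numpy as np

rng = np.random.default_rng(0)

N_MODULES = 3        # independent submodules ("brain regions")
STATE_DIM = 32       # size of each submodule's internal state
CONSCIOUS_DIM = 8    # size of the compact shared representation

# Bottleneck that compresses all module states into one small vector.
W_bottleneck = rng.normal(size=(N_MODULES * STATE_DIM, CONSCIOUS_DIM))

def conscious_summary(module_states):
    """Compress the concatenated module states into a compact summary."""
    concatenated = np.concatenate(module_states)   # (N_MODULES * STATE_DIM,)
    return np.tanh(concatenated @ W_bottleneck)    # (CONSCIOUS_DIM,)

# Each module's next state depends on its own state plus the shared
# summary, mirroring the feedback loop suggested in the text.
W_self = rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1
W_shared = rng.normal(size=(CONSCIOUS_DIM, STATE_DIM)) * 0.1

def step(module_states):
    summary = conscious_summary(module_states)
    return [np.tanh(s @ W_self + summary @ W_shared) for s in module_states]

states = [rng.normal(size=STATE_DIM) for _ in range(N_MODULES)]
states = step(states)
print(len(states), states[0].shape)
```

Note the design choice: the summary is deliberately much smaller than the concatenated states, so the submodules only ever see a coarse, "exportable" description of the whole system's state rather than each other's full internals.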
I think the next step is to take a specific problem and see how it can be improved using consciousness, but that will have to wait. This idea seems a bit naive, so I should also look at what other people have had in mind.