Talking about consciousness inevitably means doing quite a lot of speculation, as we don't really understand it. This can actually be a cool thing, as it will be interesting to see, one day, how right or wrong you were about it.
At present, we do not yet have a science of consciousness, nor any other understanding of consciousness that is clearly supported by empirical evidence. Ideally, this situation will change soon, as a solid understanding of consciousness could have many, many applications.
Perhaps even more importantly at this particular point, it might come in handy to understand consciousness before someone creates an artificial consciousness. We don't even know whether such a capability is within reach; yes, we really don't know much about subjective experience.
One possibility that should arguably be strongly considered is that you cannot have conscious experiences without some form of memory. If you don’t have a system that can link small instances of subjective time, can you have any sort of experience?
To elaborate on this speculation, I will first provide a summary of some neuroscientific and physical theories of consciousness. Then, I will review some papers on memory, including a recent one which claims that consciousness IS memory. In the last sections of the post, I will speculate on the relationships between integration, memory, and other systems that may participate in some forms of conscious experience.
Many theories of consciousness attempt to explain it as a single phenomenon. But what if conscious experiences can only be explained by the interactions between properties that are necessary and sufficient for consciousness and properties that are useful but neither necessary nor sufficient? Perhaps integration is necessary but not sufficient for consciousness, while integration + memory is necessary and sufficient for consciousness but neither is necessary nor sufficient for thoughts.
Curious? Then let’s continue.
Attempts at a science of consciousness
Consciousness appears to be so complicated that we don't even know for sure what it is. Without assuming everyone will agree with the definition given here, let's define consciousness as subjective experience. In other words, things that one experiences subjectively, be it a simple sensory experience, a thought, or an emotion, represent instances of consciousness.
We don’t know why some physical systems are conscious – this is known as the hard problem of consciousness and was formulated by the philosopher David Chalmers. The easy problem of consciousness, if you wonder, is explaining the physical systems that give rise to properties such as information integration.
The science of consciousness is a young and confusing field. A review conducted by Hviid Del Pin, Skóra, Sandberg, Overgaard & Wierzchoń (2021) found wide differences between neuroscientific theories of consciousness and signs that they are developed in isolation. The paper in question identified four main theories or groups of theories that attempt to provide a neuroscientific account of consciousness: higher-order theories (HOT), global neuronal workspace (GNW), integrated information theory (IIT), and recurrent processing theory (RPT).
Each of these theoretical approaches has some empirical evidence supporting at least some of its assumptions. The links provided below will give you access to information on some of the evidence. Also note that within these main theoretical categories, there are several specific theories or versions of a single theory.
In general terms, a theory of consciousness is labeled HOT if 1) it claims that a first-order representation[1] is not enough for conscious experience to arise and that higher-order mechanism(s) are also needed, and 2) it assumes that if an organism has no awareness of itself as being in a first-order state, then it is not conscious of the content of that state, a logical consequence of the Transitivity Principle[2]. The main premise of HOT theories is that to have a conscious experience, one must have a minimal inner awareness of one's ongoing mental processes, as the first-order state is monitored (i.e., meta-represented) by a relevant higher-order representation.
GNW theories, on the other hand, claim that a conscious state is characterized by a non-linear network ignition[3] associated with recurrent processing[4] that amplifies and sustains a neural representation, which allows the corresponding information to be globally accessed by local processors. In other words, GNW theories say that in a conscious state, the brain is involved in communication that enhances and maintains specific information, making it accessible to different parts of the brain.
The original GNW, proposed by the neuroscientist Bernard Baars, states that perceptual contents only become conscious when they are widely broadcast to other parts of the brain, that is, when the information in the workspace becomes available to many local processors. It is this wide accessibility of information that constitutes a conscious experience. The global workspace in this model involves processors related to the past (memory), present (sensory input, attention), and future (value systems, motor plans, verbal reports).
If you had any problems understanding the main premises of the previously mentioned theoretical approaches, you may want to take a break before reading about IIT, an increasingly popular theory of consciousness proposed by the neuroscientist Giulio Tononi. This theory represents the first attempt to provide a mathematical formula for consciousness and requires a few hours of reading, so it is recommended that you read more about it if you really want to understand it. Then again, the same applies to the other theories.
The theory is based on three key assumptions: 1) each subjective experience is unique and distinct from other experiences; i.e., each conscious experience has unique qualities, 2) consciousness is unified and cannot be broken into independent parts, and 3) conscious experience has clear boundaries and exists within a specific space-time, being separated from all other experiences. A novel concept introduced by IIT is that of Φ (phi), a calculated quantity that, according to the theory, measures how much consciousness a system has.
The key implications of this theory are that simple systems can be minimally conscious while complex systems can be unconscious, i.e., zombies. In other words, a complex system such as a present-day computer cannot possess consciousness because it is a feed-forward system. Only a system composed of feedback loops, where output may also serve as input, can integrate information, no matter how simple it is. So your computer is a zombie.
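The feed-forward vs. feedback distinction is easy to illustrate in code. Below is a minimal Python sketch of my own (a toy illustration, not anything from IIT's formalism): the feed-forward function has no path from its output back to its input, while the recurrent one feeds its own state back in at every step.

```python
def feed_forward(x):
    """Each stage depends only on the previous stage's output."""
    h1 = x * 2          # stage 1
    h2 = h1 + 1         # stage 2
    return h2           # no path from output back to input

def recurrent(x, state=0, steps=3):
    """The state is fed back as input, creating a feedback loop."""
    for _ in range(steps):
        state = (x + state) % 7   # the output re-enters as input
    return state
```

Under IIT, only something with the second kind of structure could, even in principle, integrate information.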
Finally, RPT, proposed by the neuroscientist Victor A.F. Lamme, claims that the processing that takes place in the sensory regions of the brain is sufficient for conscious experience, even when the processing in question isn't accessible for introspection or report. Lamme cites the example of the right hemisphere, whose capacity for consciousness some question because it is not typically involved in processing language. If we do assume the right hemisphere is capable of consciousness, then we should consider the possibility that consciousness can take place in any region where recurrent processing happens.
We can see that RPT shares some similarities with IIT. One important difference between the two is that the latest version of IIT (3.0) introduces the concept of Φmax to exclude the possibility of pieces of a system being conscious. More specifically, in IIT, if a system has Φmax = 0, it means that its cause-effect power is completely reducible to that of its parts, meaning the system does not have properties beyond what its parts can do independently (i.e., emergent properties) and cannot be considered conscious.
If Φmax > 0, it means that the system cannot be reduced to its parts, so it exists in and of itself. In other words, the system has emergent properties that its individual parts alone do not possess. More generally, the larger Φmax, the more a system can lay claim to existing in a fuller sense than systems with lower Φmax.
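To make the idea of irreducibility slightly more concrete, here is a deliberately crude Python sketch (an assumption-laden toy of my own, nothing like the real Φmax calculation): we compare the dynamics of a two-node coupled system against what the nodes would do with their connections cut, and count the disagreements. A count of zero would mean the whole is fully reducible to its parts.

```python
from itertools import product

def whole_step(a, b):
    """Coupled dynamics: each node reads the other node's state."""
    return b, a ^ b

def parts_step(a, b):
    """Connections cut: each node just keeps its own state."""
    return a, b

def toy_irreducibility():
    # Count the states where the whole system's next state differs
    # from what the disconnected parts would produce.
    return sum(whole_step(a, b) != parts_step(a, b)
               for a, b in product((0, 1), repeat=2))
```

In this toy, the coupled system disagrees with its disconnected parts on three of the four possible states, so its behavior is not reducible to them.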
Other theories of consciousness
The topic of consciousness is too cool to be ignored by people who do not study neuroscience or philosophy. For this and other reasons, many physicists have taken an interest in the topic and come up with their own ideas of what consciousness might be and how it may be produced. In the following paragraphs, I’m going to present some of them.
Perhaps the most well-known approach to integrating quantum physics and neuroscience is the quantum hypothesis formulated by the physicist Sir Roger Penrose. The idea that consciousness relies on quantum physics was first detailed in Penrose's book The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. The famous physicist claims that human cognitive abilities transcend algorithms and logic. For example, Penrose argues that mathematicians can provide true solutions to computationally unsolvable problems, which likely means that some of the outputs produced by the brain rely on non-computational processes.
Penrose later refined his formulation together with the anesthesiologist Stuart Hameroff. Together, they proposed the orchestrated objective reduction (ORCH-OR) model of consciousness, which regards the subjective phenomenon as the product of quantum-type probabilistic effects. According to the ORCH-OR model, consciousness arises from quantum vibrations in structures inside neurons called microtubules.
While fairly well known, the theory does not have many supporters, because significant evidence for quantum effects in the brain is lacking, and many doubt that the coherent quantum states required for quantum processes could survive chemical transmission across the synaptic cleft and the generation of action potentials. If these two processes destroy quantum coherence, as many believe, then neurons communicate only via classical (i.e., non-quantum) physics.
Another physical theory of consciousness was published more recently, in 2018, by the interdisciplinary researcher Robert Pepperell. According to this theory, consciousness is a product of the organization of energetic activity in the brain. Pepperell argues that if consciousness is a physical process, then it should be explainable in terms of physical elements such as energy, forces, and work. Energy, like forces and work, can be understood as actualized differences of motion and tension; in other words, energy has effects on things.
In biology, information can be understood as a measure of the manner in which energetic activity is organized, i.e., the complexity or degree of differentiation and integration of energetic activity. Pepperell argues that a specific type of activity that occurs in the brain causes conscious experience. This activity consists of a dynamic organization of energetic processes that have a high degree of differentiation and integration. The dynamic organization is recursively self-referential and produces a pattern of energetic activity complex enough to produce consciousness.
In other words, the brain’s activity refers back to itself, in a feedback loop that allows it to continually process and integrate information. For Pepperell, consciousness occurs because there is something it is like, subjectively, to experience an organization of actualized differences of motion and tension. The author of the theory acknowledges that the nature of energy remains mostly mysterious and we do not understand how it contributes to brain function or consciousness. Note that Pepperell, like others, believes integration is a key property of consciousness.
Let’s not forget panpsychism
Panpsychism, in some of its formulations, is the idea that consciousness is a fundamental property in the universe, meaning that it is everywhere. If such claims are correct, it means that everything we can think of has some form of consciousness; e.g., a rock, a cell, the hard drive of a desktop computer. Panpsychism does not represent a single theory or a group of theories but a view.
While panpsychism is the belief that some form of mentality is everywhere in nature and that all material objects have parts with mental properties, it doesn't necessarily imply that everything has a mind: something like a rock may not have mental properties even if the rock's fundamental parts do.
It is also important to note that most philosophers and contemporary public thinkers defending panpsychism believe that conscious experience is fundamental and ubiquitous (panexperientialism) but not that thought is fundamental and ubiquitous (pancognitivism). As such, a panpsychist may defend that atoms have some very basic forms of consciousness (i.e., subjective experience) without claiming that atoms have thoughts or expectations.
There is arguably no reason for anyone to go as far as claiming panpsychism cannot be true. That being said, at the present time, there doesn't appear to be any reason why panpsychism is likely to be true. The mere fact that we currently don't understand why consciousness exists, and that we are having difficulties formulating a neuroscientific understanding of subjective experience, does not logically imply, by any means, that something else must be true. Of course, this consideration also does not imply that non-biological theories of consciousness are not worth exploring scientifically. It is also worth mentioning that there are several arguments for and against panpsychism, and those of you interested in consciousness may want to read about its different formulations.
Panpsychism, in any of its formulations, cannot, at the present time, be considered a potentially scientific theory. It goes without saying that it would be extremely difficult to prove self-awareness in something like a rock or an atom. Without being able to collect data in favor of panpsychism, there is arguably little hope for a scientific panpsychist theory in the near future.
If we could test whether consciousness, as a fundamental property, has a specific set of interactions with other properties of reality, then there could be a case for panpsychism. Perhaps one avenue is a further exploration of the subjective aspects of quantum mechanics. But I'm not going to talk about that right now.
OK, but what about memory?!
As you can tell from the theoretical summary above, there are many different ideas on how consciousness is produced. You may have noticed, however, that the integration of information and the presence of feedback loops are mechanisms often mentioned in theories attempting to explain consciousness.
I believe that the integration of information is likely a fundamental property of conscious experience. I do not, however, believe that integration, regardless of how it is realized, is likely to be sufficient to produce a conscious experience. I do not believe that some transistors, for example, are conscious just because, under IIT, their Φmax > 0. Integration may be a necessary property of consciousness without being a sufficient one.
Perhaps another necessary property of consciousness, though, again, not a sufficient one, is memory. Here, following a recent article published in Frontiers in Psychology by Zlotnik and Vansintjan, I define memory as the capacity to store and retrieve information in a relationship of incorporation, where one process is incorporated into another and both processes are changed in a permanent way.
Memory, based on the definition provided above, is also a term describing the process of carrying information. For example, epigenetics suggests that simple lifeforms can pass on memories across generations through heritable epigenetic marks.
According to Zlotnik and Vansintjan, immunological and allergic processes can also be considered a form of memory: information about the allergen or the viral/bacterial aggressor is stored, and when the aggressor reappears, inflammatory processes respond to it, a process which involves the storage and retrieval of information.
There are things, however, that would not be considered memories under the definition provided above. In this definition, memory requires a relationship of incorporation with a system. Google Cloud, as such, does not represent memory in itself, as long as the cloud is not integrated within a system that accesses the storage and recalls it.
So why might we need memory to experience consciousness? Well, try to imagine, for a minute, or half a minute, or a second, what it would feel like not to have a memory. You probably can't. But think about this: without memory, you would be unable to imagine anything, because every instance in space-time, however small, would not feel connected to any other instance in any way. Without memory, however brief, it seems unlikely that you can experience subjective time, and without experiencing subjective time, you probably cannot experience anything.
In the human brain, there are three important categories of memory: sensory memory, short-term/working memory, and long-term memory. Short-term memory can hold limited information for a few seconds (e.g., when you are trying to memorize a telephone number), and some of that information eventually gets stored in long-term memory. Working memory is closely related to short-term memory but is typically differentiated from it by its role in processing and structuring information for a short time. When you do simple math in your head, for example, you partly rely on working memory to compute the result, while long-term memory also participates by providing your knowledge of how math works.
Sensory memory allows you to perceive sight, hearing, smell, taste, and touch information entering through the senses and relayed through the thalamus to the sensory cortices of the brain. This type of memory lasts only milliseconds and operates mostly outside conscious awareness. However, given its role in perception, could you perceive anything at all without it?
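As a rough illustration of how these stores differ (a toy sketch of my own, not a neuroscientific model; the class and method names are invented for illustration), one can model them as buffers with different lifetimes and capacities:

```python
import time

class ToyMemory:
    """Three stores with very different lifetimes, loosely mirroring
    sensory, short-term/working, and long-term memory."""

    def __init__(self):
        self.sensory = {}      # decays within milliseconds
        self.short_term = []   # holds only a few items, briefly
        self.long_term = set() # durable storage

    def sense(self, item):
        """Raw input lands in the sensory store with a timestamp."""
        self.sensory[item] = time.monotonic()

    def attend(self, item, capacity=7):
        """Attended sensory items enter the capacity-limited short-term store."""
        if item in self.sensory:
            self.short_term.append(item)
            self.short_term = self.short_term[-capacity:]  # oldest items fall out

    def consolidate(self):
        """Rehearsed short-term items are stored long term."""
        self.long_term.update(self.short_term)
```

For instance, a phone number that is sensed, attended to, and rehearsed ends up in the long-term store, while unattended sensory input never makes it past the first buffer.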
So, is consciousness a memory system?
A new theory published in December 2022 by Andrew Budson, Kenneth Richman, and Elizabeth Kensinger claims that consciousness is memory. The researchers argue that it seems unlikely that episodic memory evolved only to represent past events, given the system's limited capabilities: it is prone to forgetting and to false memories. We should instead assume that episodic memory emerged to allow the flexibility and creativity needed to combine and reorder memories of prior events in order to plan for the future.
The authors further posit that episodic memory, along with working, sensory, and semantic memory, is part of a conscious memory system that gives rise to phenomenal consciousness, i.e., subjective experience. In this view, the cerebral cortex is solely responsible for consciousness, and each cortical region contributes to the conscious memory system.
In other words, the paper argues that consciousness originated in the episodic memory system and eventually evolved to perform functions that are not directly related to memory, such as language, problem-solving, and abstract thinking.
From the perspective defended by this new theory, our experiences, including sensory experiences such as perceptions, are memories, even though we think they are happening in real time. Consciousness is a memory system that allows us to remember information and recombine it to predict or imagine alternative possibilities. Our unconscious processes filter information and make decisions for us. When we think we have made a decision, what is in fact happening is that we become aware of perceptions and decisions that have already been processed unconsciously.
In other words, consciousness connects events into a coherent narrative. We don’t perceive the world in real-time and we do not consciously make decisions or perform actions. We, instead, remember doing all of these immediately after we have done them, which creates the sense that we are actually acting in real-time.
The authors of the paper cite several findings that, for them, are evidence that consciousness might be a memory system. For instance, they note that the perception of a later stimulus can affect the perception of an earlier one, and that earlier and later stimuli can mutually affect each other's perception, a phenomenon known as the postdictive effect. Postdictive and other order effects have shown that, at timescales under 500 ms, consciousness does not flow linearly. Conscious awareness often occurs after the perception, decision, or action, and conscious sensations are sometimes referred backward in time.
The authors point to studies showing that consciousness is too slow to guide many split-second decisions and actions, such as those that occur when playing sports or musical instruments. Moreover, some individuals with brain lesions cannot consciously perform a task but succeed when asked to do it unconsciously, that is, to guess or to just perform the task without thinking about it. Finally, in the authors' opinion, the fact that thoughts are hard to control, as shown in practices such as mindfulness, suggests that the purpose of consciousness is not to perform intentional actions.
The main implication of this theory is that other mammals have consciousness, albeit a less complex one, because they have a neocortex and, thus, memory for perceptions, decisions, and actions. Most vertebrates would, in fact, have some form of consciousness because they have some of the brain structures required for memory (e.g., the hippocampus). Other species capable of unlimited associative learning, including octopuses, squid, cuttlefish, honeybees, and cockroaches, may have at least minimal forms of consciousness.
The idea that the world in front of us (not only our perceptions but also our emotions and other subjective experiences) is predicted by the brain has become more popular in recent years. For example, according to the neuroscientist Lisa Feldman Barrett, there is evidence suggesting that thoughts, feelings, perceptions, memories, decision-making, categorization, imagination, and many other mental phenomena can all be united by a single mechanism: prediction. In a paper published by Barrett and Benjamin Hutchinson, it is argued that the mind is a computer that generates a virtual reality that guides us toward regulating our bodies in the world.
Prediction, as a mechanism underlying conscious experience or at least some forms of it, is a hypothesis that will likely continue to gather empirical evidence in its favor, regardless of whether it can fully explain consciousness or only some of it. Memory must be a core component of any prediction system, as predictions require stored data. But are the mechanisms that make prediction possible enough to produce conscious experiences?
Necessary vs. sufficient conditions for consciousness
The idea that consciousness is a memory system should definitely not be dismissed easily, and hopefully there will be a lot of empirical work testing this hypothesis in the best ways possible. Even if consciousness is not memory per se, I suspect memory is necessary for consciousness. Quite likely, consciousness comes in different flavors depending on the components with which it can interact.
For example, perhaps some forms of non-human conscious beings (or very young humans) can have some very basic conscious experience that is only sensorial in nature. Such systems may not have the components required to have thoughts or even emotions, but may still subjectively experience some of their environment if they possess systems capable of processing sensory inputs.
This is not to say, however, that possessing sensors is likely to be sufficient for consciousness. Perhaps to have a sensorial subjective experience you need, besides sensors, a system that can integrate the sensorial input in question in a certain way. The required integration may involve a minimal inner awareness of one’s ongoing mental processes, as HOT theorists believe, a non-linear network ignition, as GNW theories propose, a certain type of feedback loop integration, as IIT proposes, or different forms of recurrent processes as RPT proposes.
The point is, any of these theories could be correct about one of the necessary conditions for consciousness. Even if none of them is fully correct, they may still be right that some form of information integration is needed for consciousness.
Where I suspect these theories are wrong is in the assumption that integration is enough for conscious experience. Whether consciousness is based solely on classical physics or also on quantum processes, we need to figure out whether we can make an evidence-based list of components and properties that are necessary for consciousness but not sufficient, and a list of those that are sufficient.
For example, the presence of some sort of integration of information in a system may be a necessary condition for that system to experience consciousness but not necessarily a sufficient condition to give rise to consciousness. As such, systems that lack other essential properties may not have consciousness even if they have a level of integration that, for example, exceeds the threshold proposed by IIT.
Similarly, a memory system may be necessary for consciousness to occur but may not be a sufficient condition for the same. Even if consciousness is, in fact, a memory system, subjective experience may not happen if the system in question is completely deprived of any capacity of processing sensory information while also lacking the required system to engage in cognition.
So, in other words, perhaps two key necessary conditions for consciousness are the integration of information + a memory system that allows experiencing what we subjectively understand as time. There may be other necessary conditions for consciousness or it may be the case that not all types of integration and not all types of memories would be necessary and/or sufficient conditions for consciousness.
Perhaps instead of assuming that, for example, some power grids have consciousness for the mere fact that they are highly integrated systems, we should assume that they don’t have consciousness because integration is only a necessary condition for consciousness but not a sufficient one.
Based on the same considerations, it follows that consciousness likely comes in different flavors, depending on what you put in it. If, for example, you have a sensory system, a certain type of integration, and a certain type of memory system, you may experience some subjective sensorial experience. To experience cognition, on the other hand, you probably also need, besides integration and memory, a system capable of processing multiple types of information that participate in cognition (e.g., we can think in words or images). I suspect that to experience emotions, you also need something that resembles interoception. Without feeling something inside you, as a result of sensorial input coming from within your system, I doubt you can experience anything that resembles feelings.
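The argument in this section can be summarized in a small Python sketch. Everything here is the post's speculation encoded literally (all the component names are hypothesized conditions, not established science): integration and memory are treated as jointly necessary, and extra components merely add flavors of experience.

```python
def conscious_flavors(components):
    """Return the speculative 'flavors' of experience a system could have.

    components: a set of capability names. Integration and memory are
    treated as jointly necessary; without both, nothing is experienced.
    """
    if not {"integration", "memory"} <= components:
        return set()                      # necessary conditions unmet
    flavors = set()
    if "sensors" in components:
        flavors.add("sensory experience")
    if "cognition" in components:
        flavors.add("thought")
    # Per the post's speculation, emotion needs interoception plus cognition.
    if {"interoception", "cognition"} <= components:
        flavors.add("emotion")
    return flavors
```

On this picture, a highly integrated system with no memory gets an empty set, while adding sensors, cognition, or interoception on top of the two necessary conditions yields progressively richer kinds of experience.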
The complexity of a system’s subjective experience likely also varies depending on the complexity of the components that participate in the conscious experience. For example, a more complex interoceptive system, perhaps in combination with a more complex cognitive system, may lead to a more complex emotional system.
Similarly, a more complex cognitive system, in collaboration with other systems, may allow a being to subjectively experience more of the world. For example, sometimes we don't subjectively experience something that's happening around us, even when we have access to all of the required information, simply because we don't possess the cognitive ability or the semantic knowledge needed to have an understanding, and hence a subjective experience, of that something.
For this reason, it may be the case that a young child, in general, has fewer conscious experiences than a typical adult. Some adults may, on average, have fewer subjective experiences than others, at least in particular contexts where they, for example, don't have the conceptual knowledge to understand a particular set of inputs. In other words, knowledge, too, may be a component of some forms of consciousness, albeit not a necessary one.
Consciousness in non-human form
Given that it is not possible, at the present time, to objectively measure subjectivity (yes, I know how that sounds), we cannot know with a high degree of certainty whether consciousness exists in non-human organisms. Of course, we also cannot know for sure that other humans have consciousness, but that's another topic.
Earthly Organisms
We can assume with a certain degree of confidence that species very similar to us in their neuronal structure possess the necessary and sufficient properties for conscious experience. It seems extremely likely that mammals experience consciousness, and some mammals likely have more complex conscious experiences than others.
The extent to which animals that are much more different from us experience consciousness is an even more complicated topic. As long as we do not know what the necessary and sufficient conditions for consciousness are, we arguably don't have solid reasons to either affirm or deny that all or most living beings on Earth have consciousness.
Aliens
Another cool topic is alien consciousness. Most people likely assume that aliens would, for sure, have consciousness. After all, they are intelligent. At the present time, we have no way of knowing whether all generally intelligent systems in the known universe, assuming there are any, also possess consciousness.
Given that the laws of physics are most likely the same across the entire visible universe, it is quite possible that evolution on other planets follows very similar patterns. As such, even if organisms on other planets are not genetically related to us, they may still have evolved in relatively similar ways, provided the evolutionary pressure on them was relatively similar.
It is hard to imagine a living being on another planet that possesses the intelligence and motivation to build technology but at the same time does not possess consciousness. Of course, we do know complex systems can be goal-directed without having established any goals themselves; animals don't search for food because they know they need it to live, they are driven toward it. Similarly, an AI may not have its own goals, but there are goals written into its systems (by humans, at least for now).
That being said, assuming similar evolutionary patterns all over the known universe, aliens most likely possess what we call consciousness. If an advanced alien civilization were to meet us sometime in the future, it would likely be a very good thing if all species evolved roughly similarly, as this may imply a similar motivational system. Aliens that are similar to us but more advanced are probably nicer than humans, given the apparent trajectory of cultural evolution on Earth. If they are completely different from us, we simply have no idea what they would be like.
Artificial Intelligence
Since we may be only a few years away from artificial general intelligence (AGI), whatever that means, it seems more important than ever to discuss consciousness in artificial systems. If we assume that there are necessary conditions and sufficient conditions for consciousness, we would have, for now, non-empirical criteria for determining whether the AGIs we are building have or will have self-awareness.
It is quite possible that a generally intelligent system, that is, one that can learn by itself and apply the learned knowledge to novel situations, does not possess consciousness by default. One reason might be that, as some integration theories suggest, these systems, in their current form, rely on feed-forward architectures. Even if that’s not a problem, these systems may still not possess consciousness as long as they do not have a memory system that allows them to experience what we subjectively call time.
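To make the feed-forward point concrete, here is a toy Python sketch. Everything in it (the functions, the mixing factor, the numbers) is invented purely for illustration and is not a claim about how any real AI system works; it only contrasts a one-way pass with a recurrent loop whose later states depend on earlier ones.

```python
# Toy contrast between feed-forward and recurrent processing.
# All functions and constants here are made up for illustration only.

def feed_forward(x, layers):
    """Each layer transforms the signal exactly once; nothing flows back."""
    for layer in layers:
        x = layer(x)
    return x

def recurrent(x, layer, steps):
    """The same layer re-processes its own output, so each state depends
    on the previous one -- a crude stand-in for signals being 'fired
    back and forth'."""
    state = x
    for _ in range(steps):
        state = layer(state) + 0.5 * x  # output mixed with the original input
    return state

double = lambda v: 2 * v
print(feed_forward(1.0, [double, double]))   # 4.0
print(recurrent(1.0, double, steps=3))       # 11.5
```

The point of the sketch is only structural: in the recurrent case, the history of the computation shapes the current state, which is exactly the property the integration theories mentioned above care about.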
Of course, computers and the AGI systems installed on them will rely on memory. We can understand RAM as a form of short-term memory and SSDs or HDDs as a form of long-term memory. However, do these memory systems function like the short-term/working memory and long-term memory in the brain, or are they more like genetic and epigenetic memories? Perhaps they can be one of the two, or both, depending on how they are integrated with the rest of the artificial system.
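As a minimal sketch of this analogy (the class, the JSON file, and the "consolidate" step are all hypothetical, chosen only to mirror the RAM/disk distinction, not to model how memory actually works in brains or AGI systems):

```python
import json
import os
import tempfile

class ToyAgent:
    """Illustrative only: a volatile dict plays the role of RAM /
    short-term memory, a JSON file plays the role of disk / long-term
    memory. Whether such stores relate to biological memory at all is
    exactly the open question raised in the text."""

    def __init__(self, store_path):
        self.working = {}             # lost when the process ends
        self.store_path = store_path  # survives restarts

    def remember_short(self, key, value):
        self.working[key] = value

    def consolidate(self):
        """Move everything from short-term to long-term storage."""
        data = {}
        if os.path.exists(self.store_path):
            with open(self.store_path) as f:
                data = json.load(f)
        data.update(self.working)
        with open(self.store_path, "w") as f:
            json.dump(data, f)
        self.working.clear()

    def recall(self, key):
        """Check short-term memory first, then fall back to long-term."""
        if key in self.working:
            return self.working[key]
        with open(self.store_path) as f:
            return json.load(f).get(key)

path = os.path.join(tempfile.gettempdir(), "toy_agent_memory.json")
agent = ToyAgent(path)
agent.remember_short("color", "blue")
agent.consolidate()
print(agent.recall("color"))  # blue
```

Note that nothing in this sketch links successive moments of processing into anything like subjective time; that gap is precisely what separates storage-as-engineering from memory in the sense discussed in this post.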
It should be pretty obvious that having a well-developed science of consciousness would have many cool applications. The fact that it would likely enable us to better treat disorders of consciousness is only the beginning. A full understanding of consciousness would also allow us to know for sure whether other beings on Earth have consciousness, and which ones. It would also allow us to determine whether AGI, in the form it is currently being developed, would be self-aware, or whether present-day computers already are.
I think it would not be a good idea at all to create artificial systems that possess consciousness, at least not until we are as intelligent as them and have a very good understanding of consciousness and its properties in different forms. There are several reasons for this.
First, an AGI with consciousness would perhaps be more likely to have an agenda of its own that goes against our best interests. That is not to say that AGI cannot go rogue without consciousness; it only takes giving it objectives that, directly or indirectly, do not align with our best interests.
Second, we don’t want AGI systems that experience consciousness and emotions and that, at the same time, are forced by their implementation to complete boring tasks for us. At present, we want systems that have no subjective experience whatsoever and that can help us achieve things that we, by ourselves, are not capable of achieving, or could only achieve over hundreds of years.
While the lack of a science of consciousness does not allow us to confidently predict whether the systems we are creating have self-awareness, we should perhaps assume that a certain kind of memory implementation is required to experience subjectivity. As such, one way to avoid creating self-aware systems is to avoid the types of memory implementations that, in combination with other potentially necessary conditions for consciousness, would give rise to self-aware AGI systems.
We don’t know what the necessary conditions for consciousness are, so let’s give memory a chance
Consciousness is something we could talk about over thousands of pages, as there are many potential paths toward a science of consciousness. Rather than assuming that consciousness can be fully explained by a single phenomenon, such as a certain type of information integration, let’s assume that there are necessary conditions for consciousness and sufficient conditions for consciousness.
The integration of information, in one form or another, is likely to be a necessary condition for consciousness. When we have a sensory experience, different parts of the brain participate in processing the information that gives rise to it. Some form of information integration is likely to be necessary even for the simplest systems in which processed information leads to subjective experience.
While reality can certainly be counterintuitive, it is hard to believe that a system with no capability of processing sensory input could produce consciousness by the mere virtue of having an information-processing structure that is integrated in a particular way. So, let’s assume that it is likely that several properties are necessary components of consciousness.
One such property might be memory. Without memory, every instance of experience would be completely unrelated to any other. Given that an instance in time can be infinitely small, can we even speak of a subjective experience of anything outside subjective time? Probably not. For this reason, I believe there is a chance that, for a system to experience some sort of consciousness, that system needs to possess some sort of memory that allows it to experience what we subjectively call time.
Whether memory, in some form, is really a core component of any conscious system will likely remain a matter of speculation for quite some time, at least until we build AGI or some advanced form of intelligence enhancement. That being said, we may want to strongly consider memory as a key candidate, and to acknowledge that consciousness may require several properties rather than a single one.
References
Anand, K. S., & Dhikav, V. (2012). Hippocampus in health and disease: An overview. Annals of Indian Academy of Neurology, 15(4), 239.
Barker, M., Brewer, R., & Murphy, J. What is interoception and why is it important?
Birch, J., Ginsburg, S., & Jablonka, E. (2020). Unlimited associative learning and the origins of consciousness: a primer and some predictions. Biology & philosophy, 35, 1-23.
Brown, R., Lau, H., & LeDoux, J. E. (2019). Understanding the higher-order approach to consciousness. Trends in cognitive sciences, 23(9), 754-768.
Budson, A. E., Richman, K. A., & Kensinger, E. A. (2022). Consciousness as a memory system. Cognitive and Behavioral Neurology, 35(4), 263-297.
Caire, M. J., Reddy, V., & Varacallo, M. (2018). Physiology, synapse.
Cooper, G. M., & Hausman, R. E. (2000). The cell: A molecular approach (2nd ed.). Sunderland, MA: Sinauer Associates.
Del Pin, S. H., Skóra, Z., Sandberg, K., Overgaard, M., & Wierzchoń, M. (2021). Comparing theories of consciousness: why it matters and how to do it. Neuroscience of consciousness, 2021(2), niab019.
Grider, M. H., Jessu, R., & Kabir, R. (2019). Physiology, action potential.
Hutchinson, J. B., & Barrett, L. F. (2019). The power of predictions: An emerging paradigm for psychological research. Current directions in psychological science, 28(3), 280-291.
Jawabri, K. H., & Sharma, S. (2021). Physiology, cerebral cortex functions. StatPearls [internet].
Klosin, A., Casas, E., Hidalgo-Carcedo, C., Vavouri, T., & Lehner, B. (2017). Transgenerational transmission of environmental information in C. elegans. Science, 356(6335), 320-323.
Lamme, V. A. (2006). Towards a true neural stance on consciousness. Trends in cognitive sciences, 10(11), 494-501.
Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776-798.
Mensky, M. B. (2013). Everett interpretation and quantum concept of consciousness. NeuroQuantology, 11(1).
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0. PLoS computational biology, 10(5), e1003588.
Pennartz, C. M. (2022). What is exactly the problem with panpsychism?. Behav Brain Sci, 45, 39-40.
Penrose, R., & Mermin, N. D. (1990). The emperor’s new mind: Concerning computers, minds, and the laws of physics.
Pepperell, R. (2018). Consciousness as a physical process caused by the organization of energy in the brain. Frontiers in Psychology, 2091.
Zlotnik, G., & Vansintjan, A. (2019). Memory: An extended definition. Frontiers in psychology, 10, 2523.
[1] First-order mental states are states directed at the world, such as a perception of an outer object or a desire for something cold to drink. A higher-order mental state is a meta-cognitive state, meaning that it is a mental state directed at a mental state.
[2] According to the Transitivity Principle, a conscious state is a state whose subject is, in some way, aware of being in it. It is based on the idea that having a conscious state while totally unaware of being in that state seems like a contradiction.
[3] In a linear relationship, the output is directly proportional to the input. In the context of GNW, a non-linear relationship means that interactions between brain regions are not directly proportional to their inputs.
[4] In recurrent processing, signals are fed back and forth between brain regions.