
Psychology and Psychotherapy: Research Study

Making the Unconscious Conscious

Dennis*

Independent Scholar, USA

*Corresponding author: Dennis, Independent Scholar, USA

Submission: June 28, 2024; Published: October 04, 2024

DOI: 10.31031/PPRS.2024.08.000689

ISSN 2639-0612
Volume 8 Issue 3

Abstract

Consciousness is discussed using a conceptual space in which physical and intentional operations are equally natural and interdependent. David Chalmers defined “the hard problem of consciousness” as explaining the subjective aspect of consciousness, which he claimed to be functionless. However, it would be useless for analysts to make unconscious contents conscious were that so. Observations confirming that consciousness has physical effects are touched upon. Two reasons Chalmers’ problem is hard are the fundamental abstraction of physical science and the dualistic conceptual space typically employed. Criticisms of introspection as a method are discussed. Qualia are shown to be a red herring. Understanding consciousness requires understanding what it produces-knowledge as acquaintance. The intentional nature of knowledge and its relation to physical information are discussed. I show how intentional operations can transform physical information into conscious contents.

Keywords: Consciousness; Semiology; Intentionality; Mind-body problem

Introduction

Previously, I argued for transcending the Standard Model (SM) of the mind, which sees behavior as entirely neurophysical in origin, ignoring falsifying data [1-5]. The SM is a consequence of representational artifacts due to dualistic conceptual space and a restrictive abstraction underlying physical science. In place of the SM, I proposed a framework in which humans are unified organisms capable of both physical and intentional acts. This article applies that framework to the dynamics of psychoanalysis’s essential move: making unconscious contents conscious. Doing so requires the capacity to make the knowable known, Aristotle’s agent intellect. Sigmund Freud’s topographic model divides the mind into conscious, pre- or subconscious, and unconscious contents. Whether one accepts or rejects it, it is clear that we are aware of only a small portion of neurally encoded data, and that analysis seeks to foster awareness of critical psychoactive contents. Yet, the dynamics by which we become aware of neurally encoded contents, the so-called “hard problem of consciousness,” remains deeply mysterious [6].

David Chalmers wrote the classic statement of the problem: “The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms [7-10]. The hard problems are those that seem to resist those methods. … The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. … The hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained.” He explains that “‘function’ is not used in the narrow teleological sense of something that a system is designed to do, but in the broader sense of any causal role in the production of behaviour that a system might perform.”

This statement is both accurate and inaccurate. It is accurate because consciousness has two aspects: contents and awareness of contents. The standard methods of cognitive science are suited to the study of neurally encoded contents but unsuited to studying the act of awareness. It is inaccurate in assuming that consciousness is functionless or epiphenomenal [11]. Were that so, it would be pointless to help patients become conscious of unconscious contents, for it could have no effect. Despite the so-called Principle of Causal Closure, which posits that only physical states and processes can cause physical effects, neurophysiology is directly affected by consciousness. Just as Galileo could not speak of Jupiter’s moons were they not part of a causal chain ultimately modifying Broca’s area to form speech, so we could not report consciousness if it could not do the same. That intentional acts can modify neural processes has been confirmed for cognitive behavioral therapy in obsessive-compulsive disorder. Studies using a variety of neurobiological methods all show that such therapy alters brain activity patterns.

Further, the claim that consciousness is physically impotent is incompatible with the claim that it has evolved. Evolution works because some biological variations are heritable and increase an organism’s reproductive success. So, the evolutionary hypothesis requires:
A. That consciousness has physical effects-for if it did not, it could not increase reproductive success.
B. That consciousness has a physical basis-for if it did not, it could not be encoded in DNA to be inherited.

These points show the inconsistencies of present thinking and the need to clarify the relation between consciousness and physicality.

Inadequate foundations

While Chalmers is wrong about functionality, the hard problem exists precisely for the reason he mentions: it resists explanation in terms of computational or neural mechanisms [12-16]. As I have shown, this is because of the Fundamental Abstraction (FA) of physical science and a Cartesian conceptual space skew to reality. While all knowing requires a knowing subject and a known object, the initial moment of physical science is the abstraction of the object from the subject-the choice to attend to physical objects to the exclusion of our inseparable subjectivity. Physical scientists attend to what is experienced, not the act of experiencing. Thus, physical science is, by design and appropriately, bereft of data and concepts on knowing subjects and their mental acts. Yet, these data and concepts are required to connect physical processes to consciousness. In other words, physics lacks intentional effects, not because physicality and intentionality are independent, but because we have abstracted their interdependence away in laying its foundation. When psychologists or philosophers adopt physics’ third-person methodology, they inherit its limitations; in researching mind, the result is not only incomplete, it is wholly inadequate. As we sense to know and conceptualize to act, an adequate model of human psychology must integrate physical and intentional operations [17-23].

A conceptual space is the set of concepts used to represent experience. Learning a science or philosophical tradition is largely a matter of learning a conceptual space and its application. Cognitive science typically employs a Cartesian conceptual space disconnecting mind from body. René Descartes’s dualism-the idea that humans are composed of two kinds of stuff, res cogitans (thinking stuff) and res extensa (extended stuff)-creates a mind-body problem that cannot be solved within its framework. If humans are made of two independent substances, one thinking and the other extended, thought and matter cannot interact. However, Descartes’s division is wrongheaded. It has been known since Aelius Galen (129-216?) treated gladiators that brain trauma compromises thought. Since the mind is at least partly extended stuff, it is better to think, as Aristotle and St. Thomas Aquinas did, of humans as organic unities capable of both physical and intentional acts.

This requires rethinking metaphysical naturalism, which maintains that nature is solely physical, “asserting that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity.” Yet, it is natural for humans to think, and foolish to denigrate intentional operations as supernatural or spooky because of physical science’s methodological limitations. Physical and intentional processes are equally natural. So, a more complete understanding of human nature requires us to supplement the physical sciences, which are limited to the object side of the subject-object relation of knowing, with investigations of the subject side.

Introspection is the sole means of doing so. Despite thought being natural, introspection has been repeatedly criticized. The behaviourists rightly criticized George Romanes’s analogous introspection of other species, but humans are not another species. Logical positivists claimed that statements not reducible to measurements with rods and clocks were meaningless-but that claim was not itself so reducible.

Rudolf Carnap attacked introspection because it is not intersubjectively confirmable. Underlying his criticism is the demand that observers share the same token observation, but the scientific method only requires that observers be able to replicate the type of observation. It does not matter that Galileo observed Jupiter’s moons in solitude. What matters is that anyone following his procedure can make similar observations. Similarly, it does not matter that an observer can know only her own thoughts. What matters is that others can observe the same types of mental processes [23-30].

Gilbert Ryle argued that introspection was impossible because it requires two acts of awareness-one directed to some subject matter and a second directed to the first. However, introspection is not a separate act. Consciousness is self-reflective. It takes no more or different experience to say, “One plus one is two,” than to say, “It is true that one plus one is two.” Yet, the first statement is about the relation of quantities and the second is about the adequacy of our understanding. Because no additional experience is needed, knowing our mental states must be implicit in the states themselves. So, the two statements do not reflect separate acts of apprehension, but separate articulations of the same apprehension. In sum, introspection is a methodologically acceptable means of transcending the limitations of physical science.

Qualia

Because the SM treats consciousness as an annoying, but inconsequential, epiphenomenon, contemporary philosophy generally agrees with Chalmers, and approaches consciousness as a phenomenological, rather than a functional, problem. For example, Rick Grush and Patricia S. Churchland assert that “Consciousness is almost certainly a property of the physical brain. The major mystery, however, is how neurons achieve such effects as being aware of a toothache or the smell of cinnamon.” This is an unanswerable question, for qualia are simply the contingent forms of certain experiences-just as gravity is a contingent property of inertial mass. We cannot even know if my blue quale is your blue quale, for there is no way to compare them. In fact, since the question assumes the possibility of an impossible comparison, it is meaningless.

Further, qualia, while showing the limits of physicalist accounts of the mind, are not essential to consciousness. We are conscious of abstract facts, such as that the square root of two is a surd, without a hint of qualia. Blindsight is another example. The brain has two visual processing centres: the visual cortex and the more primitive optic tectum (the superior colliculi in humans), which is responsible for visual orientation and also maps the visual field. Nicholas Humphrey recounts his work with Helen, a rhesus monkey with her primary visual cortex removed. With patient encouragement Helen was able to navigate and interact with her environment visually, presumably using the optic tectum.

Similarly, a human patient, “with extensive damage to the visual cortex,” and who “believed he was blind, and reported that he was having no visual sensation, … could still guess the position and shape of objects.” Since to know x is to be conscious of x, consciousness does not require qualia. Of course, a person who was normally sighted and later deprived of visual qualia will be unsure she is really seeing. Still, visual information is present and available. So, with experience, she could know that she knows, being conscious of information lacking qualia. Thus, however interesting and physically unexplainable qualia may be, they are a red herring in pursuing the dynamics of consciousness. The function of consciousness is to know neurally encoded contents, as opposed to merely having and processing such contents. Investigating this function will allow us to make progress.

Knowledge and Intentionality

To understand how consciousness works we need to understand what it produces, and it produces knowledge by transforming neural states into known contents. Analysis seeks to make patients conscious of unconscious contents to know the causes of undesired behavior. Knowledge is often defined by some variant of “(causally) justified true belief,” but the immediate product of consciousness is not belief or any other form of affirmation. Another meaning of “knowledge” is know-how-the ability to attain a goal. This does not require consciousness, for computers can execute complex, goal-oriented procedures. Consciousness produces knowledge as acquaintance. In affirming we know the house on the hill, we mean we are acquainted with it, not that we know facts about it or how to build it. Consciousness makes us aware of neurally encoded contents, whether those contents are imaginary or veridical. So, consciousness makes us aware of intelligibility.

To say an object is “intelligible” means it can be known, not that it belongs to an ideal order of existence as Plato and Aquinas held. Its intelligibility is the information it can provide-normally via our senses, although internal processes can modify our brain in other ways. However it acts, an object modifying our neural state is identically our neural state being modified by the object. (A acting on B is identically B being acted upon by A.) This identity is not between our brain state and the whole object, but between a specific action and the body modification it effects-the action is the modification.

We think of objects as circumscribed matter, but objects modify their environment with a radiance of action. That action is integral to the object’s existence. So, it is part of its existence and may persist long after the circumscribed core is gone-as we see stars that have long since died. In this sense the neural modification caused by sensation is the object’s presence in us. The object projects its power into us to produce a neural effect, and all we know of objects is that effect, i.e. how they act on us. It is biologically appropriate that what matters to us is how reality relates to us. That means that knowledge is appropriately relational. Whether or not mental contents inform us of an environmental state, they inform us of something capable of causing them. Seeing pink elephants informs us that we are intoxicated, and, as Freud observed, dreams inform us of the unconscious contents causing them. Being informed does not mean knowing the structure of the informing object. Generally, we do not. Rather, we know that whatever modified our neural state can act as it is acting on us. When I was stung by a bee at age seven, I learned bees can sting, not that they use formic acid in doing so.

This difference is not as obvious as it seems because science has taught us to think in terms of structure rather than action. Consider David M. Armstrong’s proposed explanation of consciousness: “Armstrong compared consciousness with proprioception. … Proprioception is a special sense, different from that of bodily sensation, in which we become aware of parts of our body. Now the brain is part of our body and so perhaps immediate awareness of a process in, or a state of, our brain may here for present purposes be called ‘proprioception’. … Thus, the proprioception which constitutes consciousness, as distinguished from mere awareness, is a higher order awareness, a perception of one part of (or configuration in) our brain by the brain itself.”

To clarify this, “mere awareness” is not subjective consciousness, but what might be called medical consciousness, i.e. a state of responsiveness. There are two problems with Armstrong’s theory. First, if we could propriocept our brain state, we would learn about its dynamic structure-connectivity, neurotransmitter concentrations and neural firing rates-not the information these encode. The correlative information is based on the thoughts that structure can elicit. Second, there is no explanation of how we become aware of the propriocepted contents-which means the theory does not explain consciousness. Rather, it assumes awareness of sensation-the very fact to be explained. Yet, awareness is not an invariant feature of sensation. We may, for example, adjust our posture after being in one position too long without ever being conscious of discomfort.

Brain states are not thoughts. They are not even normally signs that may be interpreted, for we do not propriocept our brain to know. When brain states are observed they belong to the class of signs that signify while being something other than signs. Other members of this class are texts, which are physical patterns; smoke, which is particulate matter in the air; and sounds, which are pressure waves. If all signs were of this type, the mental result of a sign (the interpretant of Charles Sanders Peirce’s semiology), would be another sign of the same type-leading to an infinite regress in which meaning is signified, but never known. To terminate this regress, to have meaning, we need to elicit thought-a type of sign whose very nature is meaning.

In his Ars Logica, John of St. Thomas (John Poinsot) called the first class of signs “instrumental” and the second “formal” signs. While instrumental signs can have potential significance in isolation, to actually signify they must enter into a ternary relation involving the sign, an apprehending subject, and its consequent meaning. Further, the subject must recognize it to be a sign, say that an ink pattern is the word “apples,” before it can signify. Thoughts signify differently. We do not have to recognize that <apples> is an idea before it means apples. Rather, ideas signify transparently. We only come to know that there is an idea retrospectively, by reflecting that thinking of apples requires a specific instrument of thought. Instead of a ternary relation, ideas signify via a binary relation: the thinking subject and the meaning being thought.
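
A minimal sketch in code may make the contrast concrete. The class and attribute names below are hypothetical illustrations, not Poinsot’s terminology: an instrumental sign signifies only within the ternary relation of token, apprehending subject, and elicited meaning, and its interpretation terminates in a formal sign (a thought) rather than in yet another physical token-ending the regress of interpretants.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class FormalSign:
    """A thought: a binary relation of subject and meaning, nothing more."""
    subject: str
    meaning: str

@dataclass
class InstrumentalSign:
    """A physical token (ink, smoke, sound) that can also do non-sign work."""
    token: str           # e.g., the ink pattern "apples"
    physical_role: str   # what it does besides signifying

    def signify(self, subject: str, recognized_as_sign: bool) -> FormalSign | None:
        # Ternary relation: token + apprehending subject + elicited meaning.
        # Unless the subject first recognizes the token AS a sign,
        # it signifies nothing, though it keeps its physical existence.
        if not recognized_as_sign:
            return None
        # Interpretation terminates here, in a formal sign (a thought),
        # not in another physical token -- the regress ends.
        return FormalSign(subject=subject, meaning=self.token)

ink = InstrumentalSign(token="apples", physical_role="scatters light")
print(ink.signify("reader", recognized_as_sign=True))   # FormalSign(...)
print(ink.signify("reader", recognized_as_sign=False))  # None
```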

The whole being of a thought or formal sign is signifying, because that is all it does. While printed words scatter light and spoken words modulate air pressure, ideas only signify. Because instrumental signs do more than signify, they can do those things without signifying. Consequently, they can exist without actually signifying. However, since all formal signs do is signify, if they are not signifying, if they are not being thought, they can do nothing-and what can do nothing is nothing.

So, while neural net theory shows how content can be physically represented by the brain, there are no unthought ideas in some ethereal plane. The distinction between instrumental and formal signs, largely lost to modern semiology, is critical. First, it allows us to terminate Peirce’s infinite regress. Instead of each sign endlessly eliciting another as its interpretant, instrumental signs signify by terminating in the formal signs (thoughts) they elicit. Second, it allows us to see why representational and computational theories of mind fail. Neural signals represent data, but they do not usually signify in either of the described senses. Despite Armstrong’s conjecture, we do not propriocept our brain to fathom meaning. So, brain states are not normally instrumental signs, though they may be in brain imaging. Nor are neural signals formal signs, for they do more than signify, and often do not signify at all. Still, they bear information we may become conscious of.

Correlative to the difference between instrumental and formal signs is that between physicality and intentionality. Physical states differ from intentional states in that physical states lack intrinsic reference. As long as we prescind from consciousness, physics and its allied sciences are adequate to investigate the physical theater of operations, for there is no need to employ concepts such as significance and reference. However, as Franz Brentano notes, an essential characteristic of intentions is their aboutness, the “intentional inexistence” of a target. We do not just know, will or hope; we know, will or hope something.

Brentano’s characterization, while true, needs elaboration, for artificial and natural neural nets may associate one physical state with another. Consider a wildebeest fleeing from unseen predators in response to their scent. Its flight can be explained by the neural net model. It may be that instead of immediately activating a flight response, the scent activates a predator representation, which in turn causes the flight response. That would make the scent a sign about the unseen predator. Yet, this is not the same as intentional “aboutness.”
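
A toy computation, with hypothetical node names and weights, may make this concrete. It implements the neural-chain explanation just sketched: the scent activates a predator representation, which activates flight. Every step is causal association; nothing intentional appears anywhere in the loop.

```python
def activation(inputs: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum passed through a simple threshold nonlinearity."""
    total = sum(inputs.get(name, 0.0) * w for name, w in weights.items())
    return 1.0 if total > 0.5 else 0.0

sensory_input = {"lion_scent": 1.0}

# Layer 1: the scent activates a predator representation.
predator_node = activation(sensory_input, {"lion_scent": 0.9})

# Layer 2: the predator representation activates the flight response.
flight_response = activation({"predator": predator_node}, {"predator": 0.9})

print(flight_response)  # 1.0 -- flight occurs via causal wiring alone
```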

Predator scents are pheromones, capable of physical operations as well as signifying, while intentions are formal signs, doing nothing more than being about their objects. Parsimony allows us to explain the wildebeest’s flight without invoking a conscious intention or formal sign. The scent need only activate a neural node representing a predator, which in turn activates a response mechanism. So, we are to understand Brentano’s “aboutness” as an intention’s whole being, for intentions, like formal signs, have no other capabilities. Still, there are different flavors of intention. Not all of them are bare signs. We can not only contemplate concepts such as <apples>, we can desire, hope for, commit to, etc. target states or actions. In each case, the whole being of an intention is a subject-object relation. Intentions are diversified by how subjects relate to their objects. Contemplation is bare awareness of its object. Desire is an inclination toward an object that may, but need not, motivate acquisitive action. Hope is a positive response to desire when the object is not in our power. Finally, commitment instantiates a process directed at realizing an object in our power.

None of this explains the functional advantage of being conscious. That was hit upon by Aristotle in distinguishing thought from sensation. In De Anima III, 3 he says that thought, unlike perception, is subject to error. This makes judgement, which alone can be false, a sign of thought. Sensation is simply a neural process. A neural response may be adaptive or unadaptive, but it is not an affirmation that can be true or false. Only judgements can be true or false-adequate or inadequate to reality-and our ability to form judgements allows us to theorize with all the advantages that entails.

Paul M. Churchland opines that no neural structures correspond to propositional attitudes. It may seem that neural net theory can explain judgement, but it does not. It explains association, which David Hume confused with judgement. Connectionism explains how similar inputs activate similar representations. Since oranges are similar to the setting sun, my neural net may activate an orange representation when I view a sunset, so that I associate them. Still, I would not judge the sun to be an orange.
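
The point can be illustrated in a few lines of code. The feature vectors below are hypothetical: the overlap between a sunset’s features and an orange’s suffices for co-activation, i.e. association, yet no step of the computation affirms anything that could be true or false.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity: a standard connectionist measure of overlap."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical feature order: [round, orange-hued, glowing, edible]
orange_prototype = [1.0, 1.0, 0.2, 1.0]
sunset_input     = [1.0, 1.0, 1.0, 0.0]

print(round(cosine(sunset_input, orange_prototype), 2))
# ~0.73: enough overlap to co-activate the 'orange' representation
# (association), but no truth-apt judgement 'the sun is an orange'
# is produced anywhere in the computation.
```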

So, how does consciousness produce judgements? It allows us to focus on aspects of a whole as well as the whole. I can judge <mice are animals> because the objects that elicit the concept <mice> also elicit <animals>. If one object elicited <mouse> and a different object elicited <animal>, I would not be justified in thinking <mice are animals>. Thus, we can form judgements because, in abstraction, consciousness attends to one aspect of an object to the exclusion of others without forgetting that what we are attending to is an aspect of the whole. We can see that abstraction is not a physical operation because it divides what is physically inseparable. For example, the FA divides the known object from the knowing subject, even though neither can occur without the other.

Intentional-physical interaction

This section is not entitled “The Mind-Body Problem” because the very name assumes a non-existent dichotomy. As Aristotle, Galen, Aquinas and Freud all knew, mind depends on body. Further, instead of assuming that the physical and intentional theaters of operation do not interact, we recognize that they do. Thus, the question is not whether we conceptualize to act and sense to know, but how we do so. Unlike Descartes’s res cogitans and res extensa, intentions and physical states relate bi-directionally by exchanging their common currency, information. Physical states can inform intentional states, and intentional states can inform physical states. As Claude Shannon, the founder of information theory, observed, information is a reduction in possibility. Information may be physical or intentional (logical). Despite the tendency to equate these types of information, they are not the same. As we have seen, physical states can be significant without being known, while intentional states exist only when being thought.
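
Shannon’s idea admits a simple worked form. On the standard account for equally likely alternatives (consistent with [25]), if an event cuts a receiver’s possibilities from N down to M, the information conveyed is

$$ I = \log_2 N - \log_2 M = \log_2 \frac{N}{M} \ \text{bits.} $$

For example, a sensory event that excludes six of eight equally likely states of affairs, leaving two, conveys $\log_2(8/2) = 2$ bits. The formula is indifferent to whether the reduction occurs in a physical register or in a mind; the difference, argued above, is that the intentional reduction exists only while being thought.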

Consciousness monitors and responds to body state and, in doing so, knows the environment. Aristotle observed that the senses being acted upon by a sensible object is identically the sensible object acting on the senses. Antonio Damasio offers the same insight in evolutionary terms: “to ensure body survival as effectively as possible, nature, I suggest, stumbled on a highly effective solution: representing the outside world in terms of the modifications it causes in the body proper, that is representing the environment by modifying the primordial representations of the body proper whenever an interaction between organism and environment takes place.” Information is conveyed because every time the object acts on our senses in a specific way, the possibility that the object cannot act that way is excluded.

Intentional commitments inform neural modifications. For cognitive behavioral therapy to change brain activity patterns, patients must commit to their course of treatment. To report consciousness, we have to intend to do so. Discussing how intentional commitments modify neurophysics is grist for a different article. Here it suffices that they have known physical effects. To reconcile this with physical science we need only assume what all physicists know: that our descriptions of the laws of nature are approximate. The brain has evolved as a control system, and such systems produce large outputs from small inputs. So, intentional commitments need only have minuscule physical effects to control behavior.

Since neurons have nonlinear response functions, the mathematics of brain dynamics is chaotic, meaning that even minute variations in initial conditions can produce radically different results. Consequently, we cannot reproduce initial conditions with sufficient accuracy to replicate observations. Physicalists may respond that while we cannot prove it, the brain is still physically determined. However, our inability to replicate observations of chaotic systems makes this an untestable hypothesis and, by Karl Popper’s falsifiability requirement, unscientific. Still, we need not rely on methodological arguments, because we already know that consciousness and intentionality modify physical states.
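
A minimal numerical sketch shows the sensitivity at issue, using the logistic map, a textbook chaotic system, as a stand-in for nonlinear neural dynamics (an illustration, not a brain model). Two trajectories whose initial conditions differ by one part in a billion become unrelated within a few dozen iterations, so no achievable measurement precision suffices to replicate an observation.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001   # initial conditions differ by 1e-9
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # typically of order 0.1-1: the 1e-9 difference has
                   # been amplified roughly twofold per step until the
                   # two trajectories are completely decorrelated
```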

How consciousness works

Since I am leaving the explanation of how intentions inform physical reality for another paper, we are left with the problem of how physical states can inform intentions. Let me be clear. I am not trying to explain how our power of awareness comes to be. We have seen that there are difficulties with the evolutionary hypothesis as it now stands. So, I take it as a contingent, observable fact that human beings are subjectively aware. The problem I am addressing is how physical states can inform intentional states when physics lacks intentional effects.

Given the lack of intentional effects, the answer cannot be physical. Neither can it be computational, for computation produces quantitative results, not intentional states. So, it must be sought in the intentional theater of operations. Recall that consciousness produces knowledge and that knowledge is a subject-object relation. What makes a physical state knowable is that it can enter into such a relation. So, the question is: how are such relations formed?

Consciousness monitors our body state, mostly via the nervous system, but also via the endocrine system. And, as Aristotle and Damasio suggest, our body state encodes information about environmental objects. That information is not a representation like a painting, but the action of its object in us-its presence via a radiance of action. Since the modifying action belongs to the object and the neural modification is the subject’s, the action = modification is a case of shared existence.

Such shared existence is adequate to sensation, but more is required for conscious knowledge. We need to bring the object’s presence into an intentional, subject-object relation. That is done when consciousness switches its focus from monitoring the body as a whole to the specific modification caused by the object. That switch, the result of something “attracting our attention,” is an intentional commitment-an act of will in classical parlance. We are not only attracted to some aspect of the field of consciousness, we choose to focus on it. That commitment of conscious resources, which is an intentional act, forms the subject-object relation known as knowledge. The result is not a new mental representation, but a new relation to the neural modification that is the object’s presence. Thus, mind-brain identity theory is almost right. Mental concepts are neural states, but neural states that have become a relatum in an intentional relation.

Because our commitments have physical effects, they can lead to the successive activation of neural contents in support of a line of thought. In psychoanalysis, the analyst can guide the patient toward a series of intentional commitments that will activate a chain of neural associations or instantiate a new pattern of behaviour.

References

  1. Polis DF (2023) Transcending the standard model. Psychology and Psychotherapy: Research Study 7(2): 1-2.
  2. Aristotle, De Anima III, 5, 430a10-16.
  3. Boag S (2020) Topographical Model. In: Zeigler-Hill V, Shackelford TK (Eds.), Encyclopaedia of Personality and Individual Differences. Springer, USA.
  4. Bargh JA, Morsella E (2008) The unconscious mind. Perspectives on Psychological Science 3(1): 73-79.
  5. Chalmers DJ (1995) Facing up to the problem of consciousness. Journal of Consciousness Studies 2(3): 200f.
  6. Ibid., p. 202, n. 1.
  7. Kim J (1998) Mind in a physical world: An essay on the mind-body problem and mental causation. MIT Press, Cambridge, USA, p. 40.
  8. Poli A, Pozza A, Orrù G, Conversano C, Ciacchini R, et al. (2022) Neurobiological outcomes of cognitive behavioral therapy for obsessive-compulsive disorder: A systematic review. Front Psychiatry 13: 1063116.
  9. Polis DF (2023) The hard problem of consciousness & the fundamental abstraction. Journal of Consciousness Exploration & Research 14(2): 96-114.
  10. Kahn C (1966) Sensation and consciousness in Aristotle’s psychology. Archiv Für Geschichte Der Philosophie 48(1-3): 45.
  11. Papineau D (2023) Naturalism. In: Zalta EN, Nodelman U (Eds.), The Stanford Encyclopaedia of Philosophy, USA.
  12. Carnap R (1941) Intersubjective, p. 149 and Physicalism, p. 235 in Runes DD, Dictionary of Philosophy. Philosophical Library, New York, USA.
  13. Ryle G (1949) The concept of mind. Barnes and Noble, New York, USA, p. 164f.
  14. Grush R, Churchland PS (1995) Gaps in Penrose’s toilings. Journal of Consciousness Studies 2(1): 10.
  15. Jackson F (1986) What Mary didn’t know. Journal of Philosophy 83(5): 291-295.
  16. Zubricky RD, Das JM (2024) Neuroanatomy, superior colliculus. StatPearls. StatPearls Publishing, Treasure Island, USA.
  17. Humphrey N (2009) Helen: A blind monkey who saw everything. In: Bayne T, Cleeremans A, Wilken P (Eds.), Oxford Companion to Consciousness, Oxford University Press, Oxford, England, pp. 343-345.
  18. Smart JJC (2017) The Mind/Brain Identity Theory, The Stanford Encyclopedia of Philosophy, USA.
  19. Mick DG (1986) Consumer research and semiotics: Exploring the morphology of signs, symbols, and significance. Journal of Consumer Research 13(2): 196-213.
  20. Wild J (1947) An introduction to the phenomenology of signs. Philosophy and Phenomenological Research 8(2): 217-233.
  21. I use <> to mark instruments of thought as “” marks words.
  22. Pitt D (2022) Mental Representation. In: Zalta EN, Nodelman U (Eds.), The Stanford Encyclopedia of Philosophy, USA.
  23. Brentano F (1874) Psychology from the empirical standpoint, Duncker & Humblot, Leipzig, Germany, p. 124f.
  24. Churchland PM (1981) Eliminative materialism and the propositional attitudes. The Journal of Philosophy 78(2): 67-90.
  25. Shannon CE (1948) A mathematical theory of communication. Bell System Technical Journal 27(3): 379-423.
  26. Aristotle, De Anima III, 2, 425b26-426a4.
  27. Damasio AR (1994) Descartes’ Error. Putnam’s, New York, USA, p. 230.
  28. Birbaumer N, Flor H, Lutzenberger W, Elbert T (1995) Chaos and order in the human brain. Electroencephalogr Clin Neurophysiol Suppl 44: 450-459.
  29. Popper K (1959) The logic of scientific discovery. Routledge, Abingdon-on-Thames, England.
  30. Rescorla M (2020) The computational theory of mind. The Stanford Encyclopedia of Philosophy Archive, USA.

© 2024 Dennis. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.
