We consider the implications of the mathematical modeling and analysis of large, modular, neuron-to-neuron dynamical networks. We explain how the dynamical behavior of relatively small-scale, strongly connected networks leads naturally to nonbinary information processing, and hence to multiple-hypothesis decision-making, even at the lowest level of the brain's architecture. Building on these ideas, we address some aspects of the hard problem of consciousness, including how feelings might arise within an architecture whose foundational layer consists of decision-making and classification unit processors. We discuss how a proposed "dual hierarchy model," comprising both externally perceived physical elements of increasing complexity and internally experienced mental elements (which we argue are equivalent to feelings), may support aspects of a learning and evolving consciousness. We introduce the idea that a human brain ought to be able to reconjure subjective mental feelings at will, so such feelings cannot depend on internal chatter or internal, instability-driven activity patterns. An immediate consequence of this model, grounded in dynamical systems and nonbinary information processing, is that finite human brains must always be learning and forgetting, and that any subjective internal feeling that could be fully idealized only with a countable infinity of facets could never be learned completely a priori by zombies or automata; it may be experienced ever more fully by an evolving human brain, yet never in totality, not even in a lifetime. We argue that, within our model, the mental elements, and thus the internal modes (feelings), play a role akin to latent variables in processing and decision-making, and thereby confer an evolutionary "fast-thinking" advantage.
Submitted to ORA.