Tholonia - 060-INFORMATION
The Existential Mechanics of Awareness
Duncan Stroud
Published: January 15, 2020
Updated: Jan 1, 2026
Welkin Wall Publishing
ISBN-10:
ISBN-13: 978-1-6780-2532-8
Copyright ©2020 Duncan Stroud CC BY-NC-SA 4.0
This is an open-source book, which means that anyone can
contribute changes or updates. Instructions and more information are available at https://tholonia.github.io/the-book (or contact the
author at duncan.stroud@gmail.com). This book and its on-line version
are distributed under the terms of the Creative Commons
Attribution-Noncommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
license, with the additional proviso that the right to publish it on
paper for sale or other for-profit use is reserved to Duncan Stroud and
authorized agents thereof. A reference copy of this license may be found
at https://creativecommons.org/licenses/by-nc-sa/4.0/. The
above terms include the following: Attribution - you must give
appropriate credit, provide a link to the license, and indicate if
changes were made. You may do so in any reasonable manner, but not in
any way that suggests the licensor endorses you or your use.
Noncommercial - You may not use the material for commercial purposes.
Share Alike - If you remix, transform, or build upon the material, you
must distribute your contributions under the same license as the
original. No additional restrictions - you may not apply legal terms or
technological measures that legally restrict others from doing anything
the license permits. Notices - You do not have to comply with the
license for elements of the material in the public domain or where your
use is permitted by an applicable exception or limitation. No warranties
are given. The license may not give you all of the permissions necessary
for your intended use. For example, other rights such as publicity,
privacy, or moral rights may limit how you use the material.
(Image: Piano score of Faerie’s Aire and Death Waltz, from “A Tribute to Zdenko G. Fibich”.)1
The same patterns that we see in the world around us we can find in the data from the world around us, thus transforming data into information. How, then, do we define data and information?
To answer that, we can deduce that there is no data in the void of nothingness. Data can only exist within a duality, which implies that there is energy, which means oscillation within finite boundaries in accordance with specific laws of existence. So, what is the difference between data and existence itself?
The terms Stateless and Stateful, which we’ll be using to describe data, are borrowed from the names of network protocols upon which the Internet is built, so it’s worthwhile to look at their definitions.
Stateless systems are self-contained and function independently with only presently available data. Stateless systems are high-speed, can be deployed quickly with the least amount of work, require minimal infrastructure, and are very robust and adaptable. They are not well suited for integrated, intelligent, or data-heavy operations. Examples of stateless systems in nature include inanimate objects such as sub-atomic particles, elements, molecules, rocks, planets, stars, and things that increase in entropy (become more disordered).
Stateful systems depend on previously recorded data and store their own data for future stateful operations. Stateful systems are more efficient for integrated, intelligent, or data-heavy operations but require more resources and memory. Examples of stateful systems in nature include living things, or things that evolve, such as DNA, galaxies, and things that decrease in entropy (become more ordered).
Data is static because all data is stateless, meaning data has no dependence on or relationship to anything in the past. The process that generates the data may have such dependencies or relationships, which would make it a stateful process, but the data produced can only represent the state of a system at any given moment.
But what about historical data? That data is not in the present, but it is still data, correct? We can say that data is from the past or predicts the future, but to do that we must not only add a dimension of time to the data, but we must also relate these data points across their time dimensions in an order of time. In other words, we have added statefulness to the stateless data.
Strictly speaking, it is the datum (the singular of data) that is stateless; data (the plural of datum) is a collection of such points over time or space, which makes the collection potentially stateful, though each datum remains stateless.
Flipping a coin is stateless, as there is a 50/50 chance it will be heads or tails regardless of the results of the last 100 flips. However, the odds of flipping 101 heads in a row are about 4 × 10⁻³¹, because we are now viewing all 101 flips as a single stateful system rather than as individual stateless events.
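That figure is simple to check: treating the run as one stateful system just means multiplying the probabilities of the independent, stateless flips.

```python
# Each individual flip is stateless: heads with probability 1/2,
# regardless of any previous results.
p_single = 0.5

# Treating all 101 flips as one stateful system, the probabilities
# of the independent events multiply.
p_streak = p_single ** 101

print(p_streak)  # ≈ 3.94e-31, i.e. about 4 × 10⁻³¹
```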
In its most basic form, data is numbers, words, units of measure, and other conceptual abstractions we invented to describe the phenomena within the spectrum of existence we can perceive and use.
Suppose we accept the very reasonable premise that everything that exists does so within the laws of creation. Then we must also accept that nothing exists that is not in perfect harmony with these laws, as anything out of harmony simply could not exist. Therefore, everything that can exist does, and everything that cannot exist does not. This implicitly tells us that all data is valid.
Depending on what theory of reality you prefer, “everything” can mean all that exists at this moment. However, if you subscribe to the Block Theory of the Universe, Special Relativity, or Quantum Theory, “everything” must mean everything that did, does, or will exist.
The tholonic view is much simpler. If data is a by-product of existence, then something only exists if it produces data, regardless of whether anyone or anything can detect that data.
This is not strictly a tholonic concept. Science calls this Realism, defining it as “a positive epistemic attitude toward the content of our best theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences.” In simpler terms, Realism means that things exist and have measurable properties regardless of whether we observe them.
While Realism holds that things have properties independent of conscious observation, this does not conflict with the quantum mechanical view that those properties only become definite through interaction. Everything that ever did, does, or will exist currently exists somewhere in the grand scheme of reality. However, in our corner of reality, the existence of things depends wholly on something’s ability to observe them.
This is true not just on a practical level, as we cannot say something exists if we have no awareness of that thing. It is also true on a technical level according to quantum decoherence theory, which holds that the act of observation (i.e., integration with environment) is what causes the wave functions of quantum probability (archetypes) to collapse into material form.
Some theorists, including Roger Penrose2, propose that consciousness itself plays a role in wave function collapse, suggesting a deep connection between awareness and material existence. The source and nature of this consciousness will be explored in detail later in this book.
The concept of Realism (things exist with measurable properties), quantum decoherence theory (environmental interaction causes wave function collapse), and Penrose’s proposal (consciousness plays a role in collapse) may initially appear to be three incompatible theories. However, as we will explore in detail, these are not contradictory models but rather complementary perspectives of a single underlying framework, which is fundamental to the Tholonic model. Each describes a different aspect of how potential becomes actual, how archetypes manifest as instances, and how awareness relates to existence. Understanding their compatibility reveals a more complete picture of the relationship between data, observation, and reality itself.
Consider the interior of a black hole as a thought experiment about observation and existence. Anything that enters a black hole is theorized to be broken down to its most fundamental components. According to the Standard Model, these fundamental particles (leptons, quarks, and their associated fields) are not composed of smaller particles.
Here is where it gets interesting. These particles behave as wave functions until they interact with their environment. But if the interior of a black hole prevents any interaction or observation (even by other particles), then theoretically everything inside would remain as uncollapsed wave functions rather than definite particles. In this sense, a black hole’s interior may represent the closest thing to nothingness within somethingness.
Where Here When Now
The tholonic view is that the difference between what does exist and what can exist is the difference between the here and now and the not here and not now. We understand the single phenomenon of space-time as having the properties of space and time, which manifest as the instances of where and when.
The state of something at any moment describes only that moment, not the past or the future, as data alone is stateless. How it came to produce the present data and the data it will produce in the future depends on the patterns of the data, which is stateful.
Of course, all systems in existence are made of systems within systems, each system depending on other systems to exist. In reality, all systems are a combination of stateless and stateful systems, with each system’s statefulness or statelessness being relevant only to the context and scope of that system. For example, a rock can be considered stateless in the scope of macro objects, even though it depends on the molecules it is composed of (which depend on the atoms). DNA, while being stateful, is composed of stateless molecules, and so on.
This nested alternation between stateful and stateless systems appears throughout nature and technology. Consider how the Internet works. A web application such as Google Maps is not simply “stateful” or “stateless,” but rather a stack of alternating layers, where each layer treats the layer below it as stateless while constructing its own form of state on top.
At the lowest level, the network transports packets as stateless data. Each packet is independent, unaware of any larger conversation.
TCP then imposes a stateful narrative over these packets. Connection established, sequence numbers tracked, acknowledgments exchanged, connection closed.
HTTP, built on top of TCP, deliberately discards this narrative and returns to a stateless interaction model. Each request is independent, each response is independent, and the protocol itself has no memory of what came before.
A web application such as Google Maps then reintroduces state at the application layer. User identity, session information, current map view, zoom level, route history, selected locations, and interaction context are maintained in memory, in cookies, in local storage, and in backend databases.
But that state is again broken into stateless messages whenever it crosses the network. Each API call, each JSON payload, each tile request, each location update is a standalone data object transmitted over HTTP as if no history exists.
On the server side, the process repeats. Incoming stateless requests are integrated into a stateful world model representing the user’s current session, preferences, navigation state, and long-term data.
That internal state is then once again decomposed into stateless responses, sent back across the network, and reassembled by the client into an updated state of the application interface.
Thus the system evolves as a layered oscillation.
stateless transmission → stateful interpretation → stateless encoding → stateful integration → …
At no point does the system remain purely one or the other.
Instead, meaning, continuity, and memory emerge from the recursive alternation between stateless flows and stateful structures.
This is why modern applications feel persistent and continuous to the user, even though the underlying machinery is composed almost entirely of independent, “forgetful” messages.
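As a toy illustration of this alternation (the names here are invented for the sketch, not taken from any real API), a stateful session object can integrate a stream of self-contained, stateless messages and emit stateless snapshots in return:

```python
def make_message(kind, value):
    # Stateless: the message carries everything it means and
    # remembers nothing about any larger conversation.
    return {"kind": kind, "value": value}

class Session:
    """Stateful layer: integrates independent messages into a world model."""
    def __init__(self):
        self.state = {}

    def integrate(self, message):
        # The message has no memory; the session supplies the continuity.
        self.state[message["kind"]] = message["value"]
        # What goes back across the wire is again a stateless snapshot.
        return dict(self.state)

session = Session()
session.integrate(make_message("zoom", 12))
snapshot = session.integrate(make_message("center", (40.7, -74.0)))
print(snapshot)  # {'zoom': 12, 'center': (40.7, -74.0)}
```

Continuity lives only in the `Session`; every message and every snapshot is, by itself, forgetful.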
How does this apply to data and information on a space-time and existential level?
The present has no past or future. This makes the present a stateless instance of space-time, and its only temporal attribute is now. Likewise, the only spatial attribute of now can be here.
In this context, here refers to the system of space-time, a domain that includes the entire Universe in all its dimensions. You may be thinking, “If I am here and you are there, then there is obviously a there there.” That would be correct if here referred to the system, or subsystem, of an observer whose domain is limited by their ability to perceive.
The Universe, or even the Multiverse, is the first and primal here, and there is no there, as there (or not-here) is the nothingness of the void. Within that first here are multiple localized heres, and within each of those localized heres are even more local heres, and so on, with each here having its own context.
This is somewhat similar to the principle from Relativity that each observer experiences their own time as passing normally at one second per second, regardless of how much time passes in other reference frames. Someone traveling at relativistic speeds experiences one second as one second, even though that same interval might correspond to years passing for a stationary observer. In the same way, everyone’s here is absolute to them regardless of where their here is. From the perspective of a specific here-now, the other heres appear as there-nows.
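The asymmetry between reference frames can be made concrete with the Lorentz factor, γ = 1/√(1 − v²/c²), which gives the coordinate time that elapses per second of the traveler's proper time; a minimal sketch:

```python
import math

def lorentz_factor(beta):
    """Seconds of coordinate time that elapse per second of the
    traveler's proper time, for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# The traveler's own here-now always runs at one second per second;
# only the comparison between frames reveals the dilation.
print(round(lorentz_factor(0.99), 2))    # ≈ 7.09
print(round(lorentz_factor(0.9999), 1))  # ≈ 70.7
```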
Our ability to grasp the concepts of past and future allows us to understand patterns and change, which allows us to plan and predict. However, neither is “real”, at least in how we classically understand these concepts. The stateful conditions of where-when are simply historical records of a once here-now of a past present.
Of course, we will have to expand our ideas of past and present if the experiments that suggest the present can alter the past and the future can alter the present3 prove accurate. This should not be a problem. If we can redefine “real” as the following suggests, then we can also redefine “cause”, “effect”, and “time” to make it all fit.
A common scientific definition holds that something is real when it is a necessary component of a theory that accurately describes what we observe. For example, we cannot see electrons directly, yet we consider them real because they are necessary to explain electrical phenomena, chemical bonding, and countless other observations. We cannot see energy itself, only how it interacts with matter through heat, motion, light, and other effects. Similarly, we accept quarks, black holes, and even space-time itself as real because theories requiring these concepts successfully predict what we observe.
Ironically, this scientific definition leaves the door wide open for very liberal definitions of “real”, and just because we say something is real by this definition does not mean it actually is what we claim it to be. The definition validates the utility of a concept, not necessarily its ontological truth.
Many books, theories, and speculations exist on the true nature of time. We will take a simpler reductionist approach and claim that here and now are not only real, but are the only aspects of space-time that are real.
What has just been said about the temporal aspect of space-time (now) also applies to the spatial aspect (here).
A point of data is a 0-dimensional concept, but let us imagine it as a red dot for this thought experiment. In the image above, we see a single datum and collections of data, but we clearly see patterns that are not dependent on past or future as we are seeing them now.
In this case, the data points are still stateless even though they create patterns that result from the relationships between the data points. These relationships are stateful, but the data itself remains stateless.
In the temporal case, data is produced in the present, the now. In the spatial case, data is produced at a location, which is where the data originates, but from that data’s local reference frame, the location is always here.
This sounds odd because we do not think of data having a location, but data exists in at least two domains. The first domain is that of instantiation, such as matter or energy. If we are measuring a desk, that desk has a location. If we are measuring heat, we are measuring it at some location.
The other domain is where any piece of data exists in the spectrum of all data, which we will call the data-matrix.
For example, take any number, such as 7. The number 7 is a piece of data that comes after 6 and before 8. That is its location in the spectrum of data. Every piece of data that can exist is a distinct unit defined by limits and its relationship to other data.
Data must come from that perfectly structured set of values, the data-matrix. Otherwise, it would be meaningless, and mathematical operations would have no consistent foundation. When we measure anything, we are mapping something that exists in some state of chaos to a perfectly ordered set of data that never changes.
Suppose we measure something as 7, be it 7 degrees, 7 feet, or 7 days. In that case, we are saying that whatever we measure equates via some definition, such as temperature, distance, or time, to a particular location in the conceptual landscape of the data-matrix.
The image above shows one way to view the relationship between the chaos of reality, the order of the data-matrix, and the definitions we create that enable us to relate them.
There is the stateful relationship between stateless data at any point in time. This collection of data points we will call a data-frame.
In every moment, a new data-frame is created, and our perception of the differences in these data-frames accounts for reality as we know it. Reality is the difference between relationships in both space (relationships in a data-frame) and time (relationships between data-frames).
We are going to accept the current claim that a “moment” of time is about 10⁻⁴⁴ seconds, as this is the Planck time, the time it takes a photon to travel the shortest length that can exist, the Planck length (1.62 × 10⁻³⁵ meters).
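The quoted figure follows directly from the two constants in the text: dividing the Planck length by the speed of light recovers the order of magnitude.

```python
# Values as quoted in the text (approximate).
planck_length = 1.62e-35   # meters
c = 2.998e8                # meters per second

planck_time = planck_length / c
print(f"{planck_time:.2e} s")  # ≈ 5.40e-44 s, on the order of 10⁻⁴⁴
```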
All this was a fairly wide detour to simply say that what does exist exists in here-now, and what can exist exists in the realm of potential but not in here-now.
In terms of data, we know we can splice the genes of a human toe into mouse genetics to get a mouse with a human toe growing out of its back, but a back-toed mouse does not exist (hopefully) in here-now and therefore is not currently producing data. A back-toed mouse could exist but currently does not exist.
As there is no state of back-toed mouse, and as data is stateless but requires a state to be produced from, no data exists.
What, then, is information? The classic and simplest definition is “data that we can use to understand something”, which is just another way of saying “Information is data that has meaning.” However, we first need to understand what “understand” means before we can understand what “information” means.
What we can say about information, as mentioned earlier, is that information must be stateful because it consists of data of many states, giving it a “memory” of previous states and “knowledge” of their relationships. We can at least say that information is the relationship between data.
What about understanding? Again, the traditional meaning is “the knowledge of why or how something happens or works.” This is pretty unsatisfactory and quite arguably wrong.
At the risk of sounding Clinton-esque, debating what the definition of “is” is, the words “use” and “understand” are entirely subjective and offer no actual meaning. With the word “work” defined as simply “to function or operate according to design”, this entire definition is vague, at best.
Can we come up with a better definition of the words knowledge and understanding?
Knowing something’s function and purpose is the fundamental goal of science, philosophy, and even religion, as these disciplines require a demonstrative understanding, which we can see in their respective forms of reasoning.
Like in the dancing-woman-in-the-dots example, we can perceive something that is little more than subjective projection and has no basis in objective reality. This human ability to “recognize” things in meaningless images brought about the famous Rorschach inkblot test, as it provides some insight into how a person perceives the world by their perception of patterns they project onto the random images.
In the world of philosophy, this knowledge-as-projection is exemplified in the cow in the field problem first posed by American philosopher Edmund Gettier. It goes like this:
A farmer is concerned his prize cow has gotten lost. A neighbor comes to the farmer and tells him he saw the cow in his field. To double-check, the farmer visits the neighbor’s field and sees his cow’s familiar black-and-white shape. Satisfied, he goes home. The neighbor also decided to check. The cow is in the field but hidden behind some large bushes. However, a large sheet of black and white paper is caught in the bushes. It is clear that the farmer saw this and thought it was his cow. The question is then: even though the cow was in the field, was the farmer correct when he said he knew it was there?
This was meant as a criticism of the popular definition of knowledge as justified true belief, meaning if you believe something and it is both factually valid and verifiable, then that is knowledge. This sounds like a misguided idea of knowledge because, by this definition, the farmer’s false observation would qualify as knowledge since it appeared to satisfy all three conditions.
By this definition, understanding becomes vulnerable to subjective perception and belief. While subjective perception has its place in personal experience, it is worthless, even destructive, when it comes to understanding the objective aspects of the laws of existence. If we do not have concepts collectively agreed upon for sharing objective reality, we will quickly revert to confused cave dwellers, each trapped in our own subjective interpretations.
This critique also applies to science because if “Something is real when it is a necessary ingredient of a theory that correctly describes what we observe”, then in the cow-in-the-field problem, scientifically speaking, the cow was in the field… but it wasn’t.
Simply knowing the details of a situation is not the same as understanding them. We can see this in countless confusing or challenging situations that demand critical decisions. These decisions are inevitably guided by our conscious or unconscious beliefs, desires, and fears rather than by our understanding of the challenge itself.
We can see this difference even in less dramatic situations. Consider the distinction between knowing all the details about camping versus understanding what camping actually entails, or the classic difference between possessing a map and navigating the actual terrain.
A better definition of understanding might be “knowledge of something sufficient to make verifiably accurate statements regarding that thing”.
But this, too, falls short depending on what verifiable means. For example, four people have to solve the following puzzle.
What is the next number in this sequence? 91715
Bob says “1”, and Carol says “3”. Bob defends his answer by showing that 71 is 20 less than 91. Therefore, 51, being 20 less than 71, is the obvious pattern. Carol, however, says it is 3 because 917153 is, in fact, a sequence of numbers in pi. Ted also says “1” because 9+1=10, 7+1=8. Therefore, 5+x = 6, so x must be 1. Alice says “9” because that would result in three prime numbers (11, 13, and 17) using the 2D lattice she made to solve the problem.
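Bob’s and Ted’s readings can even be checked mechanically; each is internally consistent within its own framing of the digits:

```python
# Bob reads the string as 91, 71, 5? and looks for a constant difference.
bob_next = 71 - (91 - 71)        # 71 - 20 = 51, so the missing digit is 1

# Ted pairs each digit with 1: 9+1=10, 7+1=8, so 5+x must equal 6.
ted_sums = [9 + 1, 7 + 1]        # 10, 8 — falling by 2
ted_digit = 6 - 5                # x = 1

print(bob_next % 10, ted_digit)  # both arrive at the digit 1
```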
All four people have an understanding of the problem and the ways in which it can be solved, and therefore all four answers are verifiable. It is not unlike when you ask a child what 1+1 is, and she confidently answers “6”. When asked why 1+1 = 6, she says, “I had 1 white cat and 1 gray cat, now I have 6 cats. Two white cats, two gray cats, a black cat, and a cat that is all colors” referring to her two cats and their four new kittens. Not only did 1+1 equal 6, but it came in many colors and was soft and cuddly.
As silly as this sounds, her answer was quite accurate given the scope of a child’s understanding. The context of “1 (female cat) + 1 (male cat)” is not at all an irrelevant detail, especially to the cats.
The “error” here is that the child cannot appropriately identify the differences between the overlapping contexts of math and animals. However, we should not judge this child too harshly, because even the greatest thinkers make the same “error” with more complex contexts.
This discernment is a product of our neurology, the brain being a pattern recognition machine constrained by its structure and energy limitations. No brain, no matter how sophisticated, can recognize all possible patterns across all possible contexts simultaneously.
Understanding is contextual and only relevant to the degree to which it applies to the matter in question and why it was asked in the first place. Even the concept of “1” is contextually relevant.
We may have 1 dollar or 1 day, but what does 1 mean if we say 1 puddle plus 1 puddle? We either have 2 puddles, or, if they are connected, we have 1 big puddle. Now “2” is not the unit count of puddles, but the relative volume of the puddle (i.e., 1 puddle that is twice as large).
If we drop 1 rock into a lake, we have 1 wave. If we drop 2 rocks into a lake, we have 2 waves that interfere with each other. So we could say:
“1” = y(x, t) = A sin(kx − ωt)
as this is the general description of a wave.
“2” = y(x, t) = A₁ sin(k₁x − ω₁t) + A₂ sin(k₂x − ω₂t)
as this is the description of two waves interfering with each other.
What happens if we take 1 particle traveling at x speed and smash it headfirst into another particle traveling at x speed? You might think that the particles smash into each other at the speed of 2x, but you would be wrong if x was the speed of light because, at the speed of light 1+1 = 1, at least according to the Theory of Relativity.4
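This follows from the relativistic velocity-addition formula, w = (u + v)/(1 + uv/c²); a quick sketch, with speeds expressed in units where c = 1:

```python
def add_velocities(u, v):
    """Relativistic velocity addition, with speeds in units of c."""
    return (u + v) / (1 + u * v)

# At everyday speeds the correction is negligible: 0.001c + 0.001c ≈ 0.002c.
print(add_velocities(0.001, 0.001))  # ≈ 0.002

# At the speed of light, however, 1 + 1 = 1.
print(add_velocities(1.0, 1.0))      # 1.0
```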
Other living creatures share our ability to count, though the concept and application of counting differ. For example, fish can count in the sense that they can measure the relative size of a school of fish, so they have a concept of greater than and less than, at least. This is a helpful skill, as the size of the school is directly related to their chances of survival.
Frogs can count as well as perform addition. A female frog measures potential mates by the number of croaks they can make in a row, so at some level, she is processing “Contestant #1 croaked 5 times, but contestant #2 croaked 6 times”. Because the males are competing, contestant #2 had to count the croaks of contestant #1, then add 1, and then produce his own 6 croaks.5
In many animals, there are accumulator or counting neurons whose job is to send out a signal every time something is recognized, and by recognized, we mean the information that can be processed by the rest of the brain (i.e., the neurological component of a concept). A monkey will count the bananas in a tree, but it will not count how many people in the plaza are wearing Gucci.
In some creatures, including humans and fruit flies, the timeless, or tim, gene is critical to the counting necessary for managing the inner circadian clock. More sophisticated counting comes from networks of neurons collecting and sharing data.
Knowing how many bananas there are, or knowing that n+1 croaks are better than n croaks, is the application of information to some context, such as food or mating, and that is the definition of knowledge.
Imagine how humans might count if or when our neurology becomes far more evolved and interconnected. Will numbers seem quaint? Will we one day process information in an analog fashion rather than with discrete digits?
At the macroscopic scale where we live and perceive, nature appears continuous and analog. Analog systems can solve certain types of problems extremely quickly, processing information through continuous physical changes rather than discrete computational steps. If our neurology evolves to process information more like these analog systems, perhaps directly modeling continuous relationships rather than breaking everything into countable units, numbers might become unnecessary for understanding. We might perceive and manipulate patterns as fluidly as nature itself operates at our scale.
Reality, as we know it, depends on the context of not only what is happening (stateless) but all that had happened before (stateful), in whatever contexts the contributing events happened in, most of which we can only speculate on. Our understanding of anything can only be relative to our context and valid within that context.
So, let’s modify the meaning of understanding to the following claim:
With this definition, Bob, Carol, Ted, and Alice can make statements based on their understanding, but none are verifiable unless we know why the question was asked. If the point was to see if they had reasoning abilities, they were all right, and so was the little girl, according to their relative reasoning abilities. The specific question itself is irrelevant, as any number of questions could be asked to get the same results. If the point of the question was to try and recover the last digit of a telephone number, there’s a 10% chance any one answer is correct, including the little girl’s answer, and the question was still meaningless.
We can now answer the question, “How do we define information?” If we accept that all data is valid and is therefore information that is merely undiscovered due to our limited understanding of its relevance to context, then information can be defined as the following claim:
In short, information is relevant data.
Data, being an abstract by-product of everything, is by itself meaningless, just as the existence of matter is by itself meaningless. Data alone is the conceptual equivalent of chaos. It creates nothing and has no energy, meaning, direction, form, or pattern.
The only value that comes from data is in how it relates to a data-frame, and how data-frames relate to each other across time. These relationships within and between data-frames transform chaos into the ordered information we can use. The data-frame itself exists within the ordered system of archetypes that are numbers, the data-matrix.
Information, or at least one form of it, arises when we find patterns or laws in the chaos of data, or when we apply data to an existing pattern or law. We can go so far as to say that information is the result of energy being applied to data.
There are many examples of this in nature, of order emerging out of chaos. One of the more straightforward examples is the standing wave pattern.
For those who do not know what a standing wave pattern (SWP) is, it is a stable pattern that results from cycles of energy transmitted as waves interacting with matter.
Here is a collection of SWPs created by placing white powder on a drum head and exposing that drum head to various stable sounds, like a single tone or a collection of single tones. This process is called cymatics.
Here are some more complex cymatic patterns from the 12 notes of the 1st octave of a piano.
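The nodal lines in these cymatic patterns are where the standing wave has zero amplitude at every instant, so the powder settles there. A minimal one-dimensional sketch (all parameter values here are illustrative, not taken from any actual cymatics experiment) shows how two opposing travelling waves superpose into a stable pattern:

```python
import math

def standing_wave(x, t, wavelength=1.0, freq=2.0):
    """Superpose two identical waves travelling in opposite directions.
    sin(kx - wt) + sin(kx + wt) = 2*sin(kx)*cos(wt): a standing wave."""
    k = 2 * math.pi / wavelength
    w = 2 * math.pi * freq
    return math.sin(k * x - w * t) + math.sin(k * x + w * t)

# Powder collects along the nodes (multiples of half a wavelength) because
# the displacement there stays zero at every instant, while antinodes swing.
for t in [0.0, 0.07, 0.21]:
    assert abs(standing_wave(0.5, t)) < 1e-9   # node: never moves
assert abs(standing_wave(0.25, 0.0)) > 1.9     # antinode at t=0: near maximum
```

The stable pattern is thus not in either travelling wave alone but in their relationship, which is the sense in which energy interacting with matter produces order.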
The difference between data and information is analogous to the difference between chaos and order. This difference manifests through entropy and energy: order emerges when energy acts to reduce entropy.
Above are two examples of this concept. In the upper row, we show an example of random data in the Raw Data image on the far left. In the lower row, we show the evolution of what turns out to be a self-similar pattern (described using the L-System language).
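The self-similar pattern in the lower row is generated by parallel string rewriting. A minimal L-system sketch, using the classic Koch-curve rule as an illustrative example (the rule set here is a stand-in, not the one used to draw the figure above):

```python
def lsystem(axiom, rules, iterations):
    """Apply rewrite rules to every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve rule: each forward stroke F becomes four smaller strokes
# (+ and - are turtle-graphics turns of 60 degrees).
rules = {"F": "F+F--F+F"}
gen2 = lsystem("F", rules, 2)

# Every generation replaces each F with four Fs, so generation n contains
# four scaled copies of generation n-1: the structure is self-similar.
assert gen2.count("F") == 16
```

Applying energy (the rewriting work) to a single symbol of raw data produces an ordered, fractal pattern, which is the chaos-to-order move the figure illustrates.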
Raw data is a form of low-entropy chaos. Why? Because these values have no context or meaning; they are nothing more than random, abstract symbols. While data contains countless possible microstates, it cannot manifest them without organization. Raw data is potential information, not kinetic information (to borrow the concepts of physics). The potential exists precisely because it is concentrated and complete, but it remains latent until energy is applied to organize it into useful patterns.
Information results from applying energy to analyze the data, producing limits, patterns, and relationships. Applying energy to data takes many forms: the electricity consumed by computers performing calculations, the metabolic energy burned by neurons in a brain recognizing patterns, the physical effort of sorting and organizing, or the computational work of running algorithms. In the example in the image above, the scattered numbers from the raw data have been organized into structured patterns revealing mathematical relationships. Information is analogous to kinetic energy because it is data that can now be used. From our raw data, we discover the relationships between these numbers, and in doing so, we have created order from chaos by applying energy to reduce entropy.
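A toy sketch of this move from data to information: the function below spends computational work (the "energy") to test whether a pile of raw values hides an ordered pattern. The pattern chosen, an arithmetic progression, is purely illustrative:

```python
def find_pattern(data):
    """Spend computational work ('energy') to test whether raw values
    fall into an ordered pattern: here, an arithmetic progression."""
    ordered = sorted(data)                      # the work applied to the data
    steps = {b - a for a, b in zip(ordered, ordered[1:])}
    if len(steps) == 1:                         # one constant step = a pattern
        return ("arithmetic progression", ordered[0], steps.pop())
    return None                                 # no pattern found: still chaos

raw = [17, 5, 13, 1, 9]                         # meaningless as given
info = find_pattern(raw)                        # energy applied -> information
assert info == ("arithmetic progression", 1, 4)
```

The five scattered numbers were data; the compact rule "start at 1, step by 4" is the information extracted from them, and it can now be used to predict values never seen.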
Knowledge is then produced when we apply this information in some meaningful manner. Knowledge is analogous to kinetic energy being applied to perform work, such as using flowing water to turn a motor or using heat to boil water. The information becomes knowledge when we use those discovered mathematical relationships to calculate trajectories, design electrical circuits, or predict relativistic effects. Just as applied kinetic energy accomplishes physical tasks, applied knowledge allows us to solve problems and make predictions.
Understanding comes when we extend this knowledge to grasp the larger perspective and how it may apply to other scopes and contexts.
Noise is reached when this growth arrives at a point where no more contextually usable information exists. This is a deterioration of information itself, where the amount of contextually unusable data from a system becomes so great that no relevant patterns can be extracted. At this point, the system has become pure chaos. The knowledge we have already gained from the system is not lost, but the system can no longer produce structured information from which to extract further knowledge. This is analogous to a radio signal that degrades into static: the recordings made during the clear transmission remain valid, but no new information can be extracted from the noise.
For example, when speed, gravity, or momentum become extreme, Newtonian mechanics produces inaccurate results, and you need to move to Relativity or quantum mechanics to find the order in the data. It is worth pointing out that Newton’s laws of motion, Relativity, and quantum theory operate in three different scopes that overlap and interact. As mentioned previously, the laws of one scope may not be valid in another, so applying laws from one scope to another will introduce inaccurate results. Still, given that there are scopes within scopes, we may yet find the uber-scope within which all these scopes exist, something like the scope of a Unified Field Theory or a Theory of Everything.
Another point to consider is how all data, information, knowledge, and understanding that we acquire from investigating our reality are just as vulnerable to decay due to entropy as the matter that our reality is built from. All these forms require matter to survive: the brain that holds the knowledge, the book it was written in, the DNA it was encoded into, or any other form of recording will eventually deteriorate. This deterioration takes the knowledge with it, eventually returning the data-matrix to the chaos of the infinite pool of meaningless abstractions.
Sadly, history has many examples of the permanent destruction of knowledge, and it is statistically inevitable that at some point in the future, humanity will be reduced to foraging barbarians again, as has happened in the past.
We have moved from data to information to knowledge to understanding in the following manner, more or less:
What comes next? How do all these understandings we have discovered relate to one another?
Attempting to answer this question gave rise to the entire field of Western philosophy. The word philosophy, which literally means “love of wisdom”, was coined approximately 2,600 years ago by the Greek mathematician, philosopher, and religious mystic, Pythagoras, as the field of study dedicated to understanding how reality is put together.
A generation later, Parmenides, perhaps the most profound and challenging thinker of the Greek philosophers, developed the idea of categorizing all that was understood about existence. Today this is called ontology, which is hierarchical in nature and derives from Greek words meaning “the study of that which is.”
Historical note: While Parmenides changed the course of philosophical thinking, many of his ideas were considered preposterous, such as the idea that all of reality is an illusion. Today, with models like the Simulation Theory and the Holographic Theory of reality, this once-radical notion is almost mainstream thinking.
Parmenides described his vision of reality in his philosophical poem On Nature:
… unborn and imperishable, whole, unique, immovable, and without end. It was not in the past, nor yet shall be, since it now is, altogether, one and continuous.
In this vision, true reality is eternal, unchanging, and unified: a single, timeless existence. This means that everything we perceive through our senses (change, motion, multiplicity, time) must be illusion, since true Being is immovable and continuous.
He warned that the senses give us false information, as they only sense the illusion of reality. Only reason can be trusted. This branded Parmenides as an “extremist.” Today, he is still classified as an extreme rationalist. Not surprisingly, his arguments were based on the rational premise that something either “is” or “is not”, a recurring theme in history and in this book. Parmenides also appears to be the first Western philosopher who introduced a formal concept of nothingness. His idea that the act of becoming is an incoherent concept is explained by his contemporary, Gorgias, in an argument that was reasonable for the time:
What is cannot have come into being. If it did, it came either from what is or what is not. But it did not come from what is, since if it is existent it did not come to be but already is; nor from what is not, for the nonexistent cannot generate anything. ~Gorgias
This argument exemplifies the binary logic at the heart of Western philosophy: something either “is” or “is not,” with no middle ground. Under this framework, change and becoming are impossible, leading to the conclusion that existence must be eternal and unchanging. This binary approach to categorizing reality would shape Western thought for millennia, influencing how we organize and structure all knowledge hierarchically into categories of what “is” and what “is not.”
In other words, Western philosophy, using pure reason, concluded that something cannot come from nothing. The tholonic model, also using reason and grounded in the same Western philosophical framework of “is” and “is not,” arrives at precisely the opposite conclusion: everything comes from nothing. Ironically, it does so using much the same reasoning as Parmenides.
Needless to say, there has been a great deal of discussion, research, and testing over the past few thousand years on the best way to organize our knowledge of “that which is.”
In 1972, Ervin Laszlo, philosopher, systems theorist, and two-time Nobel Prize nominee, published Introduction to Systems Philosophy: Toward a New Paradigm of Contemporary Thought.6 In that book, he incorporates Living Systems Theory, the hierarchical structures of Mario Bunge, and the concepts of “holons” and “holarchy” coined by Arthur Koestler in 1967. Bunge was a prominent figure in the fields of semantics, ontology, epistemology, philosophy of science, and ethics, having received twenty-one honorary doctorates and four honorary professorships from universities across the Americas and Europe.
Laszlo’s challenge was to provide a framework for understanding universal structures that span from subatomic physics, biology, chemistry, organisms, and social systems to the cosmos. He describes a hierarchical model of interconnected conceptual entities. When one of these entities is acting as a part of a larger entity, it is called a parton, and when acting as a whole entity with its own parts, or partons, it is called a holon.
The holon represents the wholeness of its nature, and the parton represents an integrated part of the greater holon. The hierarchical ordering of holons/partons is called a holarchy.
An example of this is the biological cell mentioned previously. The cell itself would be a holon. The transcoder and other components necessary for the cell to function would be considered partons. Likewise, the transcoder is also a holon with its component partons, and the cell itself is a parton to, say, an organ.
This makes the holarchy fractal in nature, as the structure of the entire hierarchy is self-replicated in each holon.
The most fundamental properties of holons are:
The holarchy is meant to be a map of all the concepts of archetypes we have collected, and it attempts to organize these concepts in a hierarchical fashion where each parton is a child of a holon.
We apply our concept of scope as each parent has properties that the child inherits. Within the holon of person, you will only find person things and not planet things. Each holon then has a unique scope, which defines the spectrum of possibilities for the partons of that holon.
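This parent-to-child inheritance of scope can be sketched as a tree in which each holon's scope is its own properties plus everything inherited up the chain. The class and property names below (organ, cell, ribosome, "tissue", etc.) are hypothetical illustrations, not part of any formal holarchy:

```python
class Holon:
    """A node that is a whole to its parts (partons) and a part of its parent."""
    def __init__(self, name, properties=None, parent=None):
        self.name = name
        self.own = set(properties or [])
        self.parent = parent
        self.partons = []
        if parent:
            parent.partons.append(self)   # register as a parton of the parent

    def scope(self):
        """A holon's scope: its own properties plus everything inherited."""
        inherited = self.parent.scope() if self.parent else set()
        return inherited | self.own

organ = Holon("organ", {"tissue"})
cell = Holon("cell", {"membrane"}, parent=organ)
ribo = Holon("ribosome", {"rna"}, parent=cell)

assert ribo.scope() == {"tissue", "membrane", "rna"}  # inherits up the chain
assert cell in organ.partons                          # cell is a parton of organ
```

Note that each `Holon` is simultaneously a whole (it has `partons`) and a part (it has a `parent`), which is the dual role the holon/parton terminology captures.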
Now that we have a scope, we can apply the Bell curve of probability for any particular holon.
The graphs above represent a tiny subsection of the Grand Holarchy of Everything. The chart on the left shows a larger picture of how partons and holons relate. The chart on the right shows one of the many paths that connect subatomic particles to the Multiverse.
With a few tweaks to the previous Super-Duper Graph of Reality Bell curve and using a log scale rather than a linear one, the concepts used in the above holarchy fit nicely.
Some readers may think, “Hey, wait a minute, those are not the same axes! What kind of Gaussian goofiness is going on here?” Well, that is partly true. The first Super-Duper Graph of Reality chart shows the probability (x-axis) of where order (y-axis) is most likely to emerge across the entire spectrum of existence, while this chart is limited to the scope of human perception.
If we assume that our perceptions of reality are reasonably compatible with reality as it actually exists (with all due respect to Parmenides), then, as probability would have it, we humans happen to be in the part of the spectrum where one would most expect to find life popping up. So, congratulations to us, we are where we are supposed to be… probably.
Because the peak of the curve represents where the most “work” will be done (given the two poles that define the limits of the curve), it is also where energy will most likely be able to form sustainable patterns. The peak of the curve also represents the most efficient expression of a holon’s purpose and function. If we were electrons instead of humans, our archetypal holon would have the electron in the center because, as the electron does exist, it would naturally occupy that point where its existence is most likely, which is at the peak of the curve for the electron holon.
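The claim that an entity naturally occupies the peak of its holon's curve can be illustrated with the Gaussian density itself, whose maximum sits at the mean and falls off symmetrically toward the two poles. A minimal sketch (the standard normal here is purely illustrative, not a fitted model of any holon):

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """Probability density of a normal (Bell) curve."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The density is maximal at the mean and decreases monotonically toward
# either pole, which is the sense in which the peak marks the most
# probable (and, in the text's terms, most "workable") state.
assert gaussian(0.0) > gaussian(0.5) > gaussian(1.0)
assert abs(gaussian(2.0) - gaussian(-2.0)) < 1e-12  # symmetric about the mean
```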
However, because life forms will tend to exist at the peak of the curve that spans from quark to Multiverse, perhaps we can speculate that life itself and the expression of consciousness that evolved from life is a very efficient expression of energy. This is not to say that there are no other forms of life that may excel in this regard, especially where radically different contexts may apply. Still, if there are other forms of life in our corner of the Universe, they will most likely appear within the same range of the curve.
More than appearing in the same area of the curve, they will probably appear similar to earthly life forms. If aliens ever land a UFO in Central Park, we will all be surprised, and a bit disappointed, I suspect, when what emerges looks unimpressively human. At least that is the opinion of the University of Edinburgh’s astrobiology professor Charles S. Cockell, as he spells out in his book The Equations of Life: How Physics Shapes Evolution. However, by his same reasoning, there is a chance it could look like some of our own highly successful life forms, such as lice, crocodiles, duck-billed platypuses, horseshoe crabs, immortal jellyfish, or the highly intelligent octopus, to name a few.
Human-looking or not, all creations are subject to the three attributes of the holarchy noted above: negotiation (cooperation or conflict), definition (limitations of environment and resources), and contribution (maintenance of sustainability). Oxford University’s evolutionary biologist Sam Levin sums this idea up quite well in his paper “Darwin’s Aliens”:
[Aliens, like humans] are made-up of a hierarchy of entities, which all cooperate to produce a [lifeform]. At each level of the organism there will be mechanisms in place to eliminate conflict, maintain cooperation, and keep the organism functioning. We should expect [aliens] to have been favoured by similarly restrictive conditions [as us humans]. Thus, we can make specific predictions about the biological makeup of complex aliens. 7
We take the position that it is these same forces that apply to all existence, not simply biological life.
Another excellent example of both a holarchy and its self-similar nature is the neurological structure of the brain’s processing ability, along with the structure of language that evolved out of it. This also shows examples of the various scopes through which instances are expressed. The 1st column of the image below (left) shows the hierarchical structure of a sentence. The 2nd column shows the speed at which the brain processes each part of the hierarchy. The 3rd column shows which (proposed) part of the brain contributes to the overall process, and the 4th column shows the scope and size of that part of the brain.
More recently, this holarchic model was applied to Richard Dawkins’s concept of a meme8. A meme is defined as:
an element of a culture or system of behavior passed from one individual to another by imitation or other non-genetic means.
In Dawkins’s own words9:
“Memes spread through the culture like genes spread through the gene pool.”
The relevance is in demonstrating how the concept of a meme is incorporated into a holarchy. The original version of the chart above comes from a paper on video games and their potential to increase intelligence.10 The interesting part is how Velikovsky puts the meme at the bottom of the cultural branch. This defines a meme as a 7th-generation descendant component part, a particle, so to speak, of the uber concept of culture. The meme here is analogous to what the electron is to the atom, the atom to the molecule, or the molecule to the object.
He also puts ideas down there as well, which may be consistent with the way he defines an idea. In our case, we are defining Ideas (capital “I”), like forms, as the archetypal blueprint for many instances of ideas. The idea of “Let’s make a video game where people have to shoot each other” is an instance of the archetypal Idea “Individual or tribal competition and survival”, which also spawns such concepts as sports, war, predatory capitalism, the idea of winning, and more.
The idea of “If I sin, I will burn in hell”, a very resilient and popular meme for thousands of years, is an instance of the archetypal Idea “We are judged harshly by our superiors for being self-serving”, which spawns such concepts as karma, judgment day, guilt, original sin, and more.
The holarchy examples shown above are portrayed as two-dimensional bifurcating trees. This model hides a lot of information, for if we zoomed in, we would see that within each holon is a collection of holons that share an idea, purpose, function, and more. Each holon has its own parameters, laws, and context, and each holon, therefore, has its own Bell curve that shows where it is best suited to “work”, such as the Bell curve of the human eye sensitivity, which would be one holon of “human eye”.
For consistency, here is the human eye sensitivity chart, which represents one aspect of a holon of human eye in the bio branch of the holarchy and its natural Bell curve of efficiency, or “work”.
These Bell curves of probability are not shown, or even defined, in the holarchic model. The reason is that the holarchic model does not have the explicit concept of a duality, though it is implied. As a result, the holarchy cannot show how the Bell curve of a holon is made up of the integrated aggregate of the Bell curves of its partons within the context of the holon.
It does show that the parameters of a holon are defined by its parent, but the ultimate parent of the holarchy is the Multiverse, which is, in our view, the “bottom” of the hierarchy of reality as we know it, while the “seed” of the tree of reality begins with the first and smallest particle, the 0-dimensional dot. In some ways this is like arguing over the “correct” way to view Earth, north at the “top” or south at the “top”, but it will play an important role a little later.
What happens if we incorporate the idea of dualities into the holarchy? Let’s see, but first, let’s look at some important concepts behind structure.
To hear the Colorado State Music Teachers Association performance of this “unplayable” score, visit https://youtu.be/sCgT94A7WgI?t=233↩︎
Lee Smolin, theoretical physicist and founder of Perimeter Institute, has described Roger Penrose as one of the most important physicists to work in relativity theory. Penrose’s Orchestrated Objective Reduction (Orch-OR) theory, developed with Stuart Hameroff, proposes that consciousness arises from quantum processes in brain microtubules and plays a role in wave function collapse.↩︎
Jacques, Vincent, E Wu, Frédéric Grosshans, François Treussart, Alain Aspect, Philippe Grangier, and Jean-François Roch. “Experimental Realization of Wheeler’s Delayed-Choice Gedanken Experiment.” Conference on Coherence and Quantum Optics, 2007. https://doi.org/10.1364/cqo.2007.cwb4., https://arxiv.org/pdf/quant-ph/0610241.pdf↩︎
Hossenfelder, Sabine. “Dear Dr. B: Does the LHC Collide Protons at Twice the Speed of Light?” Backreaction, Apr. 2019. backreaction.blogspot.com/2019/04/dear-dr-b-does-lhc-collide-protons-at.html↩︎
Rose GJ. “The numerical abilities of anurans and their neural correlates: insights from neuroethological studies of acoustic communication.” Philos Trans R Soc Lond B Biol Sci. 2017 Feb 19;373(1740):20160512. doi: 10.1098/rstb.2016.0512. PMID: 29292359; PMCID: PMC5784039.↩︎
Laszlo, E., & Clark, J. W. (1973). Introduction to systems philosophy: Toward a new paradigm of contemporary thought. New York, NY: Harper Torchbooks. ISBN: 0061317624 (ISBN13: 9780061317620)↩︎
Levin, Samuel & Scott, Thomas & Cooper, Helen & West, Stuart. (2017). Darwin’s aliens. International Journal of Astrobiology. 1-9. 10.1017/S1473550417000362.↩︎
Dawkins, C. R. (2016). The selfish gene. Oxford: Oxford University Press.↩︎
Just for Hits - Richard Dawkins. (2013, June 22). https://www.youtube.com/watch?v=T5DOiZ8Y3bs↩︎
Velikovsky, J. T. “Flow Theory, Evolution & Creativity: Or, Fun & Games.” https://dl.acm.org/citation.cfm?id=2677770↩︎