Tholonia - 050-REASON
The Existential Mechanics of Awareness
Duncan Stroud
Published: January 15, 2020
Updated: January 1, 2026
Welkin Wall Publishing
ISBN-10:
ISBN-13: 978-1-6780-2532-8
Copyright ©2020 Duncan Stroud CC BY-NC-SA 4.0

This is an open-source book, which means anyone can contribute changes or updates. Instructions and more information are at https://tholonia.github.io/the-book (or contact the author at duncan.stroud@gmail.com). This book and its online version are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license, with the additional proviso that the right to publish it on paper for sale or other for-profit use is reserved to Duncan Stroud and authorized agents thereof. A reference copy of this license may be found at https://creativecommons.org/licenses/by-nc-sa/4.0/. The above terms include the following: Attribution - you must give appropriate credit, provide a link to the license, and indicate if changes were made; you may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. NonCommercial - you may not use the material for commercial purposes. ShareAlike - if you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. No additional restrictions - you may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. Notices - you do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation. No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.

5: REASON

The ability to understand something is limited by our ability to perceive it. As our perception changes, so does our understanding, and with it, the reasoning we apply.
Synopsis: Reason is a universal archetype discovered, not invented, by humanity. While human reasoning operates within the limits of perception and context, it aligns with fundamental patterns that exist throughout reality. This chapter explores how the same archetypal patterns appear across different domains: from classical logic and quantum mechanics to human chromosome evolution, bicameral consciousness, and ecological cycles. It demonstrates how effort, intention, and resistance interact analogously to physical forces, whether in quitting smoking or predator-prey dynamics. The holographic principle and simulation hypothesis are examined as modern frameworks for understanding reality’s structure. Ultimately, reason serves to facilitate energy balance within systems, with sustainability emerging as the practical test of reasoning’s validity. The path of full presence and committed effort, embodied in Zen’s “be-here-now,” proves more effective than grandiose unfocused intentions.
Keywords: archetypes, energy, intention, patterns, sustainability, holography, cycles

We arrived at laws through reason, and we humans are rather proud of that claim as we like to believe we invented reason, but we didn’t. We did discover it, however, so at least we can take credit for being doggedly curious.

Reason is what we call our ability to recognize the natural processes that arise from the patterns of creation and is the product of intellectual survival, a skill we evolved to give us fang-less, claw-less, slow, soft, and chewy humans a fighting chance in the Darwinian battlefield… at least that’s one theory.

To be fair, there has been a long-standing debate over whether mathematics, and by association reason, was invented or discovered. One of the more solid treatments of this debate is presented by leading astrophysicist Mario Livio in his article “Why Math Works”1.

In that article, he refers to those who believe math was discovered as Platonists since Platonic archetypes include math archetypes, as math is a form of an idea. Livio gets into the idea that math is subject to the same evolutionary forces as species. Math that does not work quickly dies, never again to propagate itself into the mathematical “gene pool”.

The same is true for any idea or theory. Ideas such as the phlogiston theory of fire and Descartes’s theory of the motion of planets are examples of conceptual offspring that were quickly dispatched by the deadly and merciless hand of proof. It’s a bit different on the social level, where failed political and economic ideologies, although their ruin is inevitable from the start, continue to pop up because, in the realms of ideology, proof is often eclipsed by belief, hope, power, and mass hysteria.

A sounder argument that reason was discovered is that the systems we have created with our reasoning abilities, from plumbing to artificial intelligence (AI), have come to the same design conclusions as nature when faced with similar challenges, such as water distribution, blood cell design, language, etc. In the case of AI, armed with our rules of reason, it has repeatedly out-designed human solutions by mimicking nature. These improved solutions were derived by observing the situation and applying the laws of reason we supplied to the AI system, laws we ourselves derived from observing our reality. The fact that AI is coming up with solutions that not only exceed ours but are more in harmony with nature is pretty good evidence that the laws of reason are objectively verifiable and existed before humans arrived on the scene.

In any case, both sides can agree, for the most part, that math is the language man invented to describe the existing laws he discovered. Likewise, reasoning is the process man invented to describe the unfolding order of existence that he was capable of perceiving. This is a sufficiently satisfactory answer regarding math and reason, but reasoning generally has a lot more gray area than mathematics.

The Limits of Reason

Reason has been remarkably effective in describing our reality, yet it operates within boundaries, at least in how we employ it today.

Consider the traditional Laws of Thought2, which form the foundation of classical logic.

The Law of Non-Contradiction states that nothing can both exist and not exist at the same time and in the same respect, or equivalently, no statement is both true and false.

The Law of Excluded Middle states that something either exists or does not exist, or equivalently, every statement is either true or false.

The Law of Identity states that everything is the same as itself, or a statement cannot remain the same while changing its truth value.

These laws, much like mathematics itself, possess their own algebraic notation. The Law of Non-Contradiction is expressed as ∼(p⋅∼p), the Law of Identity appears as (∀x) (x=x), and the Law of Excluded Middle takes the form p ∨ ∼p.
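
For readers who like to see such claims checked mechanically, the following minimal Python sketch (purely illustrative) verifies that all three laws hold for every two-valued truth assignment.

```python
# Minimal sketch: brute-force check of the three classical laws of thought
# over the two Boolean truth values. Purely illustrative.

def implies(a, b):
    # Material implication: a ⊃ b is false only when a is true and b is false.
    return (not a) or b

for p in (True, False):
    assert not (p and not p)   # Law of Non-Contradiction: ∼(p ⋅ ∼p)
    assert p or not p          # Law of Excluded Middle: p ∨ ∼p
    assert p == p              # Law of Identity: x = x
    assert implies(p, p)       # Identity in implicational form: p ⊃ p

print("All three laws hold under classical two-valued logic.")
```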

It seems reasonable to assert that these laws existed before we articulated them, just as natural processes operated before we understood them. We certainly did not invent the principle that something cannot simultaneously “be” and “not be”. We discovered this as a property of our reality, or more precisely, a property of how we perceive our reality. If we expand our perceptions to encompass scenarios where something can be and cannot be simultaneously, then our reasoning must adapt accordingly.

Different philosophical traditions handle these apparent contradictions in distinct ways. An Aristotelian materialist might assert that the law of non-contradiction specifically forbids anything from existing in both a state of existence and non-existence simultaneously, declaring that if something exists, it cannot be otherwise. A Platonist might counter that something can exist as an archetype while not existing as a material instance. A quantum physicist might point out that particles can exist in superposition, occupying two states or even two locations simultaneously, as demonstrated in countless experiments. A Hindu philosopher might introduce yet another perspective, noting that the One from which all emerges is both Sat (existing) and Asat (non-existing).

How can these laws be universal if different frameworks produce contradictory conclusions? This paradox resembles the 2,500-year-old Buddhist parable of the blind men and the elephant. One man grasping the elephant’s tail describes it as a snake, while another holding the foot insists it resembles a tree. The situation mirrors the brilliant sculptures of Mathew Robert Ortis, such as “Revolution Giraffes”, which appears as a giraffe from one angle but transforms into an elephant from another vantage point.

The blind men and the sculpture’s viewers all perceive a preexisting form that is undeniably “real”, yet they cannot agree on what it is. Each observer can easily define and defend their worldview. They all recognize the reality they have accepted as true, supported by reasoning that makes perfect sense within their frame of reference.

The laws of reason, then, depend fundamentally on the scope of one’s perspective and what one accepts as axiomatic, whether mundane or transcendental.

Labels and Names

Before proclaiming anything is existent, that thing must be defined through observation and/or presumption, and that definition is then given a name. This claim rests not merely on psychology but on neurology as well. We cannot declare something exists if we have no conception of what we are making that claim about. We observe this principle in how languages form ideas and labels, which become the foundation of cultural, social, and objective understanding. In some cultures, such as the Mayans, things are believed to come into being by first being spoken of and named.

I can say my dog exists because I know my dog, I know of dogs, and everyone I know also knows of dogs. But can I say phlimquitz exists when, asked what a phlimquitz is, I respond with “I have no idea”? That would be unreasonable. However, if I claimed phlimquitz explains the 99.79% correlation between U.S. spending on science, space and technology and suicide by hanging, strangulation, and suffocation, then I am asserting knowledge of something previously unrecognized. I could proceed to define and test the Phlimquitz Hypothesis.

Key 36: We name things according to our observations, and once named, they become conceptual realities.

Here is that correlation, presented purely for illustration3.
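
For the curious, here is a minimal sketch of how such a correlation coefficient is computed from two yearly series; the numbers below are invented placeholders, not the actual spending or suicide figures, and a high coefficient by itself says nothing about causation.

```python
# Minimal sketch: Pearson correlation between two yearly series.
# The numbers are invented placeholders, not real spending or suicide data.
from statistics import correlation  # available in Python 3.10+

spending = [18.1, 18.6, 19.4, 20.4, 21.6, 23.0, 23.7, 24.5, 25.5, 27.7, 29.4]
suicides = [5427, 5688, 6198, 6462, 6635, 7336, 7248, 7491, 8161, 8578, 9000]

r = correlation(spending, suicides)
print(f"Pearson r = {r:.4f}")  # a high r by itself says nothing about causation
```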

Understanding works this way in all contexts. Consider how we recognize a pattern in a series of random dots, as illustrated in the dancing-woman-in-the-dots example above. I can see a person dancing in that image, and I can easily demonstrate this by connecting the dots. My perception is not wrong, but neither is someone else who perceives an egg sandwich.

This pattern-recognition ability is embedded in our neurology. However, predictions about where the next dot will appear, or conclusions we draw about the dots’ properties based on our perceived patterns, will most likely fail. Through trial and error, testing, and proofs, we eventually discover that these dots represent no specific pattern at all. Once we recognize the concept of randomness itself, we can define it, name it, and study it.

Humans began recognizing the concept of randomness roughly 3,000 years ago, initially defining and naming it as “fate”, “chance”, and “destiny”. Being unpredictable, these phenomena were typically associated with some form of supernatural justice or punishment. Italian mathematicians did not begin formalizing what we now call randomness until the 16th century, making it quite a young concept in the grand scope of human thought.

Key 37: Our ability to reason is rooted in our ability to see patterns, and connecting these patterns is made possible by our ability to reason.

Gods, angels, spirits, the Djinn, magic, luck, coincidence, mystical forces, kundalini, chi, manna, out-of-body experiences4, and countless other phenomena represent concepts that many people have defined, named, and comprehended within their respective frameworks. Those familiar with these concepts can declare their existence and produce extensive documentation about them. We should neither validate nor invalidate such perspectives relative to any other. Any system’s validity depends entirely on the contextual effectiveness of the reasoning behind it and its ability to produce verifiably accurate statements within its domain.

Dismissing alternative views of reality would be intellectually irresponsible, given that every culture in the world has developed concepts of paranormal or metaphysical realities. Icons of Western science, including Paracelsus, Hippocrates, Plato, Carl Jung, Erwin Schrödinger, and Albert Einstein, have all referred to such realities. Many cultures maintain extremely demanding training and lengthy education programs specifically designed to understand and access these realities.

It appears that virtually anything can be accepted as Truth if people can recognize it and support it with whatever reasoning they employ, even if that reasoning appears incomprehensible to others. We need only examine history and observe certain places on the planet today to witness the many versions of truth that have evolved and how dramatically they differ from each other. This does not mean all reasoning systems are equally effective at describing and predicting natural phenomena. For our purposes in this book, we focus primarily on reasoning based on the laws of nature as revealed through empirical observation.

The Truth about Truth

This raises the big question. What is truth?

This is a question we can never answer absolutely. We cannot speak about ultimate truth because there can never be absolute proof that any one way of looking at something is the “right” way. All we can speak about is what constitutes the most reasonable understanding given the context of our reality and our perceptions of it. What, then, serves as the arbiter of proof regarding whether something is “reasonably” true or not, given our current understanding of reality?

One referee is, of course, power. When Friedrich Nietzsche declared “There is no truth, only power”, he was half right. Truth is a journey of discovery that can lead down many roads, while power is a battle over who gets to navigate that journey.

Sustainability

Beyond that battlefield of culture and politics, sustainability offers a more objective measure. It may be the best test for validity because the mere fact that something exists demonstrates it operates in accordance with the rules of reality.

Anything that exists can only sustain itself if it operates within the constraints of its scope, typically defined by its environment and component parts. All contributing variables must maintain functional balance for the energy within that instance to move in a structured manner, allowing order to emerge from chaos. Energy always seeks balance. Sustainability, therefore, measures the degree to which order and balance can be maintained. When this balance fails, the system collapses back into chaos. Unsustainability results in chaos, and vice versa.

Key 38: Sustainability measures the degree to which order and balance can be maintained.

This raises an interesting question. If high-entropy chaos is the ultimate state of the Universe, a state where everything is inert, motionless, and dead, then it must also be the most sustainable state as it will never change once arrived at. Oddly, this appears to be true, but this represents an example of two distinct kinds of balance.

The dead state is the state of balance (noun) that has no movement, which opposes the action of balancing (verb), a dynamic and interactive process of energy and movement.

If the dead state of high-entropy chaos, which has no order, is the most sustainable, then order itself can only be a temporary state. This appears to be true, implying that order can only exist in a state of imbalance. When we talk about sustainability, then, we are referring to the sustainability of order, which can only exist in the temporary context of balancing, not in the context of the balanced end state.

All the movement of energy at every level of reality results from traveling the path of least resistance as systems seek balance. We see examples everywhere.

When we mix acids and bases, like vinegar and baking soda or the more dramatic (and dangerous) combination of hydrochloric acid and sodium hydroxide, an extremely violent readjustment phase (balancing) releases enormous heat before settling down to the very stable (balanced), lower-energy-demanding products of water and table salt.

Why does this reaction happen? The final combined state of water and salt requires less energy to sustain than the separate solutions. Specifically, the reaction releases 57 kilojoules for about 4 teaspoons of salt water, which equals roughly the energy a person would expend walking about 240 yards. The released energy is precisely what the new solution no longer needs to sustain itself.
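
The comparison can be checked with a rough unit conversion; the walking cost used below is an assumed average for an adult, not a measured value.

```python
# Rough sketch: compare the heat of neutralization (~57 kJ per mole of water
# formed, roughly 18 mL, or about 4 teaspoons once the salt is included)
# with the energy cost of walking. The walking cost is an assumed average.

KJ_PER_KCAL = 4.184
reaction_kj = 57.0                 # enthalpy of neutralization, kJ per mole

walk_kcal_per_km = 60.0            # assumed gross cost for an average adult
yards = 240
km = yards * 0.9144 / 1000         # ~0.22 km

walk_kj = walk_kcal_per_km * km * KJ_PER_KCAL
print(f"Reaction releases ~{reaction_kj:.0f} kJ")
print(f"Walking {yards} yards costs ~{walk_kj:.0f} kJ")  # ~55 kJ, same order
```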

The Universe itself follows this same pattern. Starting with the extremely violent Big Bang and ending (according to current cosmological models) as inert matter scattered across infinite space, the cosmic equivalent of water and table salt, it represents the readjustment phase of different states of imbalance seeking equilibrium. Reality is the reaction of this adjustment. Like the chemical reaction just mentioned, the Universe will ultimately balance out and reach a state of maximum entropy, where all motion ceases.

When the great Zen master Shunryu Suzuki said “Life is like stepping onto a boat which is about to sail out to sea and sink”, he was not being darkly poetic. He was explaining how reality works. The ultimate destination of sustainability, on the cosmic scale, is death. At our current moment in existence (i.e., keeping the boat afloat), sustainability means keeping the various parts of the “balancing machine” of nature working properly. In both cases, sustainability means balance, but we must distinguish between short-term (temporal) balance and long-term (supernal) balance. For our purposes, the sustainability we discuss is the temporal kind unless otherwise noted.

Key 39: Sustainability is a gauge of existence, and therefore life itself, at every level of reality.

Pions exist for about 26 nanoseconds, while the most massive black hole known, approximately 66 billion times more massive than our Sun, would require 6×10⁹⁹ years to evaporate. Does this mean black holes are more “valid” and more “true” than pions?

No. The pion’s existence, short as it is, is inevitable given the forces and conditions that bring it into being. The validity of a pion holds for 26 nanoseconds, while the validity of a black hole persists effectively forever, as 6×10⁹⁹ years may actually exceed the lifespan of the Universe.
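
As a sanity check on that timescale, the standard Hawking evaporation estimate, t ≈ 5120πG²M³/(ħc⁴), gives the same order of magnitude; the sketch below is an order-of-magnitude illustration only.

```python
# Order-of-magnitude sketch: Hawking evaporation time for a 66-billion-solar-mass
# black hole, t = 5120 * pi * G^2 * M^3 / (hbar * c^4). Illustrative only.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34       # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M = 66e9 * M_sun
t_seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
t_years = t_seconds / 3.156e7
print(f"Evaporation time ~ {t_years:.1e} years")   # on the order of 10^99 years
```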

By this logic, both pions and black holes are equally true and valid. One’s truth simply remains sustainable for a bit longer.

Key 40: Something exists because its expression of energy patterns adheres to an order that must be maintained for that thing to continue existing.

Tree of Sustainability

Let’s examine a more integrated example from the cellular level, the fundamental building block of life. This is basic biology, but it bears repeating.

Each of the processes listed below has proven to be a more “valid” truth by being more sustainable than any alternative process.

Inside the cell are two meters of tightly wound DNA strands composed of four very specific molecules held together by sugar phosphates. These molecules are C5H5N5 (adenine), C5H5N5O (guanine), C4H5N3O (cytosine), and C5H6N2O2 (thymine). They are arranged in a very specific sequence that encodes the instructions necessary to build every protein in an organism.

The process unfolds as follows.

These processes, and their order, are so specific and complex that they have sparked significant scientific debate. Dean Kenyon, a Professor Emeritus of Biology at San Francisco State University who taught evolutionary theory, gradually shifted his position during the mid-1970s and went on to become a prominent figure in the Intelligent Design movement.

The important point is that if any one thing in the list of processes above changes, if one atom in the C5H5N5 molecule differs, or the “gatekeeper” makes an error, or a thousand other possibilities occur, the cell would either die, mutate, or otherwise become unsustainable, making the larger organism unsustainable. Cancer exemplifies this vulnerability. Mutations that allow cells to bypass normal growth controls lead to malformed reproduction that reduces an organism’s sustainability. Interestingly, cancer cells themselves can be quite sustainable as independent cells and only become problematic as a community of cells forming tumors.

For a cell to be sustainable and continue existing, only one process, made up of numerous detailed steps that must each function properly, results in successful functioning and reproduction. More importantly, we cannot yet explain why any one particular option at any step, out of the countless theoretical options available, is the one the process “knows” to select.

To put this in even clearer perspective, consider protein formation. For a protein of typical length (around 150 amino acids), there are 20¹⁵⁰ possible amino acid sequences. There are only about 10⁸⁰ atoms in the observable universe, making the number of possible protein sequences incomprehensibly larger. Research by Douglas Axe suggests that perhaps only 1 in 10⁷⁷ random sequences of this length would fold into a stable, functional protein structure. This represents a probability of approximately 10⁻⁷⁷, or one functional sequence in every 10⁷⁷ sequences, a 1 followed by 77 zeros.
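
The scale of these numbers is easier to grasp in powers of ten; the sketch below simply restates the figures above.

```python
# Sketch: scale of sequence space for a 150-residue protein built from
# 20 amino acids, compared with the ~10^80 atoms in the observable universe.
import math

sequences = 20 ** 150
print(f"20^150 is about 10^{math.log10(sequences):.0f}")   # roughly 10^195

atoms_in_universe = 10 ** 80
print(f"Sequences per atom is about 10^{math.log10(sequences / atoms_in_universe):.0f}")
```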

In the world of biology, this exponential growth of possibilities is called combinatorial inflation. Remember that term, as we will return to it later.

To illustrate the scale of improbability, consider the famous thought experiment of monkeys typing Shakespeare. It would take 2,737,850 million-billion-billion-billion monkey-years to produce just the opening line of Henry IV, Part 2, “RUMOUR. Open your ears”5. The protein formation challenge is even more extreme.

If we assume purely random assembly, generating the proteins needed for 10⁴⁰ organisms throughout Earth’s history, each with a 1 in 10⁷⁷ probability of success, would require approximately 10¹¹⁷ total random attempts. Distributed across Earth’s 4.5 billion year history (roughly 1.4×10¹⁷ seconds), this would demand 7×10⁹⁹ random assembly attempts per second across the entire planet. If we spread this across all living cells currently on Earth (approximately 10³⁰ cells), each cell would need to attempt random protein assembly at a rate of 7×10⁶⁹ times per second.
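
The quoted rates follow from straightforward division; the sketch below reproduces the arithmetic, taking the input estimates above at face value.

```python
# Sketch: reproduce the back-of-the-envelope rates quoted above,
# taking the input estimates (10^40 organisms, 1-in-10^77 odds, etc.) as given.

attempts_needed   = 1e40 * 1e77          # ~10^117 random attempts
earth_age_seconds = 4.5e9 * 3.156e7      # ~1.4e17 s
cells_on_earth    = 1e30

attempts_per_second          = attempts_needed / earth_age_seconds
attempts_per_cell_per_second = attempts_per_second / cells_on_earth

print(f"{attempts_per_second:.1e} attempts/s planet-wide")       # ~7e99
print(f"{attempts_per_cell_per_second:.1e} attempts/s per cell")  # ~7e69
```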

These calculations assume evolution operates through pure random assembly, which is not how evolutionary theory actually proposes the process works. Evolution builds on existing functional structures through incremental modifications, not random assembly from scratch. Nevertheless, even accounting for evolutionary mechanisms, the origin of the first functional proteins and cellular machinery presents a significant challenge to explain through undirected processes alone. The numbers illustrate why some researchers find purely materialistic explanations for life’s origin inadequate.

Consider that your body alone contains 34 trillion cells, represents just one of 4 billion species that have existed, and produces approximately 10,000 different types of proteins (see image of 4 proteins below), each ranging in length from 50 to 2,000 amino acids.6

What we call a cell is a collective of components that work together with remarkable precision. The system appears to possess a form of distributed intelligence in how each component operates with all the other components to preserve and maintain its sustainability, its existence. Likewise, the components themselves, such as chaperonins, ribosomes, and RNA polymerase, are made of smaller molecular components that exhibit similarly coordinated behavior, as if guided by intention and decision-making abilities.

Proponents of conventional evolution maintain that natural selection acting on random mutations gradually refined functional proteins over vast timescales, with non-functional variants being eliminated. Intelligent Design advocates propose that this represents “biochemical predeterminism”, suggesting the process was somehow pre-programmed. Both approaches leave fundamental questions unanswered. For evolution, how did the first functional proteins arise before natural selection could operate? For Intelligent Design, where and how is this historical record of what worked and what did not stored? What mechanism accesses this information and translates it into physical structures?

Let’s examine a couple of macro examples on the social human scale and consider the serious questions they raise about how we understand these phenomena.

Yagé

This ritual psychoactive drink from the Amazon is also known as ayahuasca, uni, nixi pãe, caapi, camarampi, and many other names. One of the more fascinating details about yagé is how it is made and how it was discovered in the first place.

The main active ingredient of yagé is N,N-Dimethyltryptamine (DMT). DMT is a powerful hallucinogenic compound that exists in plants and is also produced naturally in the human brain. Yet it does not affect humans when taken orally because the enzyme monoamine oxidase (MAO) in our digestive system breaks it down before it can reach the brain.

The blood-brain barrier, a selective semipermeable membrane that separates circulating blood from brain tissue, normally blocks many cells, particles, and large molecules from entering the brain. This protective barrier is crucial, as there is much material circulating in our blood that could be harmful to the brain.

For the DMT to reach the brain, it must pass through the blood-brain barrier, and for that to happen, something must inhibit the MAOs from doing their job. That something is called an MAO Inhibitor (MAOI). MAOIs are commonly used in pharmaceuticals to help with ailments such as depression and Parkinson’s disease (which, according to Stanford University, are the same two conditions that tango dancing also helps).

Modern science has known about both of these compounds for a little over 100 years. Indigenous Amazonian shamans have possessed this knowledge for perhaps thousands of years, with estimates ranging from 1,000 to 5,000 years or more.

Somehow, these shamans managed to discover that the leaves of one particular plant, from over 40,000 species in the Amazon, contained substantial amounts of DMT. They also somehow knew that an MAOI was needed, which they found in the bark of another specific plant among those 40,000 species.

On top of all this, they had to know to avoid eating meat, fish, milk, fruit, bread, and anything fermented for some days before taking the MAOI to prevent serious injury or death from dangerous blood pressure spikes. Traditional anthropological explanations suggest discovery through trial and error, but the statistical improbability of this approach raises significant questions even for the most skeptical.

Of the 1,600,000,000 possible pairing combinations of Amazonian plants, only one produces the basis of yagé (the actual preparation is far more complicated than simply mixing two plants), and even then, only when a particular part of the plant is used. These 1,600,000,000 possibilities do not consider the variables of proportion or the specifics of preparation. For that many trial-and-error tests, even if every man, woman, and child in all of modern-day South America partook in the finding, preparing, and experimenting, it would take hundreds, perhaps even thousands of years, which raises the question of how knowledge of what was already tested could be maintained and transmitted. The statistical improbability suggests alternative explanations deserve consideration, to put it mildly.
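
The 1.6 billion figure corresponds to every plant paired with every plant (40,000 × 40,000); counting only distinct unordered pairs gives roughly half that, as the sketch below shows.

```python
# Sketch: number of two-plant pairings among ~40,000 Amazonian species.
from math import comb

species = 40_000
ordered_pairs = species * species      # 1.6 billion, as quoted above
unordered_pairs = comb(species, 2)     # ~0.8 billion distinct pairs

print(f"{ordered_pairs:,} ordered pairings")
print(f"{unordered_pairs:,} distinct unordered pairs")
```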

Curare

The statistical challenge becomes even more extreme with other substances, such as the paralyzing poison curare used by South American indigenous people for hunting.

Curare presents a remarkable pharmacological puzzle. The poison paralyzes the animal when introduced into its bloodstream, yet the meat remains safe to eat because curare compounds are not effectively absorbed through the digestive system. The hunters had to know this property existed before seeking such a compound. Just imagining that indigenous people identified this specific requirement and then successfully searched for such a substance among tens of thousands of Amazonian species strains conventional anthropological explanations.

The complexity deepens further. Multiple plant species contain curare compounds, but indigenous hunters developed precise knowledge of which species and combinations produce the most effective poison. The preparation process itself presents dangers, as boiling the plant materials releases toxic vapors that can cause serious illness. Anyone exposed to these fumes while processing the poison faces significant health risks. How could trial and error produce this knowledge without numerous casualties among those who attempted it?

Yet when asked how such knowledge was obtained, shamans provide consistent answers across cultures. “The plants told us,” they say. Other indigenous traditions describe learning from ancestors, from the sounds of crickets, from patterns in sacred animal bones, or from cracks in tortoise shells that have been heated and dropped into cold water, their patterns then compared to star positions. The Chinese I-Ching emerged from precisely such a divinatory practice.

Ancient Mysteries

Another example raises questions that extend beyond statistical improbability to touch fundamental debates about human origins. This case examines human chromosome 2, where competing explanatory frameworks invite us to reconsider what we consider “reasonable” given the scale and age of the universe.

Humans possess 23 pairs of chromosomes while our closest primate relatives possess 24 pairs. How could more evolutionarily advanced descendants have fewer chromosomes than their ancestors? The answer lies in chromosomal fusion. Approximately 4.5 million years ago, two separate primate chromosomes fused together to create what is now human chromosome 2, performing the same functions as the ancestral 2nd and 3rd chromosomes combined.

The term “evolved” deserves clarification here. Chromosome number does not indicate evolutionary advancement. If it did, the hermit crab, with 254 chromosomes (127 pairs), would represent the pinnacle of evolution on Earth. The significant question centers on understanding how this specific fusion occurred and became established in the human lineage.

The Chromosomal Evidence

Human chromosome 2 presents several remarkable features. Telomeres, which normally cap chromosome ends like knots tied at the end of a string, appear in four locations instead of two. Two telomeres cap each end as expected, while two additional telomeres appear in the middle where the fusion occurred. Imagine placing two pencils end to end, both sharpened at both ends. Four points exist, one at each outer end and two meeting in the center.

Similarly, chromosome 2 contains evidence of two centromeres, one from each of the original chromosomes. Centromeres prove critical to successful cell division, typically appearing near the middle of chromosomes. The presence of two centromeres should create cellular disaster. Telomeres in the center should halt DNA replication while two active centromeres should tear the chromosome apart during cell division. Yet chromosome 2 functions perfectly. One centromere has been deactivated, as have both central telomeres. The precision of these deactivations, occurring in exactly the right locations to permit normal function, invites explanation.

Two Frameworks

Framework 1: Natural Chromosomal Fusion

Robertsonian translocations occur in approximately 1 out of every 1,000 births. In small, isolated populations, such mutations can become fixed through genetic drift and founder effects. While chromosomal abnormalities often associate with genetic disorders, a fusion that maintained genetic function could potentially spread through a population over many generations. Population genetics provides well-established mechanisms for how rare variants can become common, particularly during population bottlenecks.
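
As a toy illustration of the drift mechanism this framework relies on, the following minimal Wright-Fisher sketch simulates a selectively neutral fusion spreading to fixation; the population size, starting frequency, and assumption of strict neutrality are illustrative simplifications, not estimates of the actual ancestral population.

```python
# Toy Wright-Fisher sketch: a selectively neutral chromosomal variant drifting
# to fixation in a small population. All parameters are illustrative assumptions.
import random

def fixation_rate(pop_size=200, trials=2_000):
    genes = 2 * pop_size                  # diploid population: 2N gene copies
    fixed = 0
    for _ in range(trials):
        copies = 1                        # one carrier of the new fusion
        while 0 < copies < genes:
            p = copies / genes
            copies = sum(random.random() < p for _ in range(genes))
        fixed += (copies == genes)
    return fixed / trials

# Neutral theory predicts a fixation probability of 1/(2N); the simulation is
# noisy at this scale but lands in the same ballpark.
print(f"simulated ~{fixation_rate():.4f} vs expected {1/400:.4f}")
```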

The challenges this framework faces include explaining the precision of the centromere and telomere deactivations, and accounting for how carriers of this initially rare mutation successfully reproduced and established a viable lineage that eventually replaced the ancestral 24-pair configuration.

Framework 2: Directed Modification

Consider the scale and age of the universe. Earth formed 4.5 billion years ago in a universe that is 13.8 billion years old. The observable universe contains approximately 2 trillion galaxies, each with hundreds of billions of stars. Given these numbers, the probability that intelligent, technological civilizations have emerged elsewhere approaches near certainty. Some of these civilizations could be millions or even billions of years older than humanity.
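
The “near certainty” intuition can be made explicit with a Drake-style multiplication; every factor in the sketch below is an assumed, debatable input chosen purely for illustration, not a measured quantity.

```python
# Sketch: a Drake-style estimate of technological civilizations in the observable
# universe. Every factor is an assumed, debatable input used only for illustration.

galaxies            = 2e12     # observable-universe galaxy count cited above
stars_per_galaxy    = 1e11     # "hundreds of billions of stars" per galaxy
frac_with_planets   = 0.5      # assumption
habitable_per_star  = 0.1      # assumption
frac_life           = 1e-3     # assumption: life arises
frac_intelligence   = 1e-3     # assumption: intelligence arises
frac_technology     = 0.1      # assumption: technology develops

civilizations = (galaxies * stars_per_galaxy * frac_with_planets *
                 habitable_per_star * frac_life * frac_intelligence * frac_technology)
print(f"~{civilizations:.1e} technological civilizations ever, under these guesses")
```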

A civilization only a few thousand years more advanced than ours would likely possess capabilities we currently classify as impossible. One that is millions of years advanced would possess technologies as incomprehensible to us as quantum computers would be to ancient Sumerians. If such beings exist anywhere in this vast universe, and if even a tiny fraction of them developed interstellar capabilities, their presence in Earth’s history becomes not just possible but perhaps even probable.

This framework suggests that beings who themselves possessed 23 chromosome pairs modified existing primates through precise genetic engineering. This would explain the exact nature of the fusion, the specific deactivations that allow normal function, and how the modification became established in the population. The challenge this framework addresses is the apparent precision and functionality of the chromosomal changes.

Comparing Assumptions

Each framework requires assumptions:

The natural fusion framework assumes that a rare mutation occurred with sufficient functionality to spread through a population via known population genetic mechanisms. This requires fewer extraordinary claims but must explain considerable specificity in the resulting structure.

The directed modification framework assumes that advanced civilizations exist, that at least one achieved interstellar capability, that they visited Earth 4-5 million years ago, and that they possessed sufficient knowledge of genetics to perform precise chromosomal engineering. This requires more extraordinary assumptions but directly accounts for the precision observed in chromosome 2.

The question becomes which set of assumptions is actually more extraordinary. Mainstream science typically favors explanations requiring fewer novel assumptions (Occam’s Razor). However, if advanced civilizations likely exist somewhere in a universe containing trillions of galaxies over billions of years, then perhaps the “novel assumption” of their involvement becomes less extraordinary than assuming multiple precise molecular accidents.

The Real Question

Rather than declaring one framework correct and another impossible, we might ask which framework generates more productive questions and deeper understanding, and whether hypotheses outside the funding mainstream deserve serious investigation before being dismissed.

The natural fusion framework encourages investigation into population genetics, chromosomal evolution, and mutation mechanisms. It benefits from extensive funding, institutional support, and accumulated evidence. The directed modification framework encourages consideration of life’s prevalence in the universe, the likelihood of advanced civilizations, and the possibility that human development involved factors beyond terrestrial evolution. It suffers from virtually no funding, institutional skepticism, and limited investigation.

Both frameworks take the evidence seriously. Both attempt to explain the same remarkable structure. The difference lies not in the evidence but in the background assumptions we bring to its interpretation. Dismissing either framework prematurely may limit our understanding of both human origins and our place in a universe whose scale and age dwarf our current comprehension.

The Economics of Scientific Truth

Yet determining which framework better explains the evidence involves more than pure reasoning. The institutional and economic realities of scientific research play a significant role in which hypotheses receive serious investigation. As noted earlier in this chapter, when Friedrich Nietzsche declared “There is no truth, only power,” he captured a reality that extends into scientific research. Those who control funding largely determine which questions get asked and which answers receive serious consideration.

Research funding overwhelmingly supports investigations within established paradigms. A scientist proposing to study natural chromosomal fusion mechanisms will find numerous grant opportunities, institutional support, and career advancement. A scientist proposing to investigate evidence of genetic modification by advanced civilizations will find virtually no funding, institutional skepticism, and serious career risk. This asymmetry means the natural fusion framework has received thousands of hours of funded research, sophisticated modeling, and detailed investigation, while alternative frameworks receive essentially none.

This creates a self-reinforcing cycle. The well-funded framework accumulates evidence, publishes papers, trains graduate students, and becomes more “established.” The unfunded framework remains speculative, generates no data, publishes nothing, and appears increasingly “fringe” by comparison. After several decades, observers conclude the well-funded framework must be correct because it has so much more evidence supporting it. Yet this evidential advantage may reflect funding allocation rather than inherent explanatory superiority.

The result is that we cannot simply assume mainstream acceptance indicates better evidence or superior reasoning. What we call “mainstream” may partly reflect which hypotheses align with funding priorities, institutional comfort levels, and career incentives. A hypothesis can be correct yet remain marginalized indefinitely if it challenges powerful interests or institutional assumptions. History provides numerous examples. Plate tectonics, continental drift, bacterial causes of ulcers, and the existence of meteorites all faced decades of mainstream rejection despite eventually proving correct.

This economic reality does not prove alternative frameworks are correct. It does suggest that comparing frameworks requires acknowledging that one has received enormous investigative resources while the other has received essentially none. The evidential asymmetry may reflect this resource asymmetry rather than truth asymmetry.

The marketplace of ideas functions poorly when only some ideas receive funding to compete.

This same reasoning applies to many ancient mysteries that remain inexplicable within current mainstream frameworks, encompassing both cosmic mysteries and technical achievements.7 The detailed specificity of the Babylonian creation story (the Enuma Elish), the Vedic scriptures, the Epic of Gilgamesh, and other evidence from cultures worldwide suggest that humanity may have accessed knowledge (or that knowledge accessed humanity) through means beyond what current models typically accommodate.

In contemporary Western culture, claiming to receive knowledge from plants, insects, bones, or non-human intelligences typically triggers psychiatric intervention. Yet nearly every other indigenous culture, including those from which Western civilization descended, developed methods to access what they considered an organic knowledge database. More significantly, these cultures accepted such access as not merely possible but necessary for survival and advancement.

Archaeological evidence as far back as Paleolithic cave art (40,000 to 10,000 BCE) demonstrates a striking pattern that aligns precisely with tholonic principles, at least according to some new theories about ancient thinking. We see this resemblance in the manner in which the ancients depicted animals with extraordinary anatomical detail and specificity, while human figures appear schematic, abstract, and often faceless.8 This asymmetry was not artistic limitation but conceptual sophistication. The detailed animals represented instances that required precise definition for successful manifestation in the material realm through the hunt. The abstract, faceless humans represented archetypal roles, such as the hunter, the shaman, and other human participants, for whom, at the conceptual level, individual identity is irrelevant.9

Similarly, Paleolithic Venus figurines emphasize reproductive function while omitting faces, preserving the archetype of fertility rather than depicting specific individuals.10 This pattern reveals that Paleolithic humans had already discovered the fundamental tholonic distinction between archetypes (abstract, universal) and instances (precise definition).

The cave itself functioned as the “magical” space where conceptual definition occurred before material instantiation, where the archetype of “the perfect hunt” was crystallized through detailed rendering before being enacted in reality.11 In short, cave paintings were “hunting magic”, and although we tend to think of this form of sympathetic magic as superstition, it actually represents sophisticated understanding of how reality operates, given the context of Paleolithic reality.

Most cultures have some form of this “magic”, as described in great detail in The Golden Bough (1890),12 and all of these practices were various forms of understanding how concepts move from high to low entropy through precise definition, followed by the energy expenditure required for material instantiation. Early human reasoning naturally discovered tholonic principles not through abstract philosophy but through survival necessity, as these patterns are fundamental properties of reality itself rather than cultural constructs.

Consider the hierarchical organization of intelligence in biological systems. A cellular transcriptor possesses the intelligence necessary to manage its own sustainability, which integrates into the intelligence of the cell. This pattern continues upward through organs, organisms, and beyond. Humans represent one level in this hierarchy, themselves components of larger systems. Within this framework, presuming the existence of intelligence operating at scales beyond human perception becomes reasonable rather than fantastic.

Our ancestors called this higher-order intelligence “the voice of the gods.” Approximately 5,000 years ago, the concept of deities controlling human fate began crystallizing into organized pantheons with hierarchical structures. Later, between 3,500 and 2,500 years ago, some cultures began consolidating these multiple deities into unified monotheistic concepts. This consolidation proved efficient given the context and scope of ancestral worldviews. It provided a single name for all unknowns and satisfied the human need for order and meaning. Once named, humans then attempted to describe and systematize that truth and order.

The Human Brain and Reason

American psychologist Julian Jaynes proposed a revolutionary thesis in his 1976 book “The Origin of Consciousness in the Breakdown of the Bicameral Mind.” Jaynes argued that during a relatively recent evolutionary stage, humans possessed the ability to hear voices in their heads providing instructions and knowledge. These voices of unknown origin were attributed to the gods and thus became the voice of Truth.

“Who then were these gods that pushed men about like robots and sang epics through their lips? They were voices whose speech and directions could be as distinctly heard by the Iliadic heroes as voices are heard by certain epileptic and schizophrenic patients, or just as Joan of Arc heard her voices. The gods were organizations of the central nervous system and can be regarded as personae in the sense of poignant consistencies through time, amalgams of parental or admonitory images. The god is a part of the man, and quite consistent with this conception is the fact that the gods never step outside of natural laws.”13

Jaynes proposed that before approximately 1000 B.C., human consciousness lacked metaconsciousness. Humans were not aware of their own awareness. Jaynes anchored this timeline to several convergent historical developments. The catastrophic Bronze Age Collapse (1200-1100 B.C.) had just devastated civilizations across the Mediterranean and Near East, disrupting the stable social structures that may have supported bicameral voices. The era depicted in Homer’s Iliad (composed around 750 B.C. about events circa 1200 B.C.) shows heroes acting on direct divine commands rather than through internal deliberation, providing Jaynes with his primary literary evidence. Shortly after this period, around 800-500 B.C., the Axial Age began, marking the nearly simultaneous emergence of reflective philosophy across Greece, India, China, and Persia. These convergent developments suggest a fundamental transformation in human consciousness occurring within a relatively narrow historical window.

The bicameral theory suggests two distinct brain functions operated somewhat independently. One part managed mundane activities, habits, and routine tasks (the self). Another part activated during confusion or difficult challenges (the voice of god).

[For bicameral humans], volition came as a voice that was in the nature of a neurological command, in which the command and the action were not separated, in which to hear was to obey.

Sidenote: What we call the “ego” today evolved out of that really boring and mundane “self” part of the brain, so it’s probably best not to use it for anything other than mundane tasks.

According to Jaynes, the transition from bicameralism to modern consciousness (linguistic metacognition) occurred between 1800 B.C. and 800 B.C. This period encompasses Hammurabi’s rule in Babylon and his codification of laws addressing complex social issues such as contracts, wages, liability, inheritance, divorce, paternity, and reproductive behavior, extending through to the founding of Rome. This metamorphosis of consciousness ushered in a golden age of empires and cultures that established the foundations of Western Civilization in art, literature, theater, government, philosophy, mathematics, and athletics. (Western culture receives emphasis here because it represents the tradition this author best understands and for which the most information is readily available, not because other cultures did not develop along similar lines under comparable conditions.)

From the days of Rome and the ancient Greek philosophers to the present, bicameralism has continued diminishing. This decline may account for decreasing reports of people hearing divine voices. This shift may have been beneficial. However, the disappearance of divine guidance forced humans to apply reason to challenging problems without external assistance. Our newly developed reasoning tools tended to reject ideas we could not understand, which proved problematic given our inexperience with conscious reasoning during often desperate times. Our ability to consciously make sense of things appeared to work well enough that it naturally dominated our worldview, despite our ongoing uncertainty about how reasoning actually operates. This marked the birth of human intelligence as distinct from universal intelligence. Lao Tzu refers to this transition as follows.

When men lost their understanding of the Tao, intelligence came along, bringing hypocrisy with it.14

Jaynes eloquently captured the lingering effects of this transition.

The mind is still haunted with its old unconscious ways. It broods on lost authorities, and the yearning, the deep and hollowing yearning for divine volition and service is with us still.

Hence, religion.

Everything that failed to fit the highly constrained and “reasonable” worldview was progressively marginalized until we arrived at the present situation. This “inquisition-by-reason” served to purge ideas deemed heretical from the dogma of rationality. This process has been both reasonably necessary and useful. In the ongoing reformation of thought, ancient ideas capable of withstanding rigorous examination will be resurrected in new forms with broader and more integrated understanding.

The memories stored in cells, the voices our ancestors heard, the wisdom shamans received from plants, and countless other instances of “divine” or “mystical” knowledge all raise the same questions. Where does this information originate? Where is it stored? How does one access it?

Perhaps reason has evolved sufficiently that we may now address these questions reasonably.

The Path of Reason

The rules and conditions of reality we have discovered followed a perfectly reasonable path long before humans could articulate them. Sometimes discovery takes considerable time, but eventually we identify these patterns.

For thousands of years, humans observed the movements of celestial bodies and created various explanatory narratives. Kepler discovered the underlying mathematical relationship through painstaking analysis of observational data, which when visualized on logarithmic charts reveals the pattern clearly. This became Kepler’s 3rd law of planetary motion, stating “the square of the planet’s orbital period is proportional to the cube of its distance from the Sun.” Similarly, we discovered DNA forms a helix stabilized by hydrogen bonds. In each case, we can identify both the pattern and the rules that generate it.
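
Kepler’s relationship is easy to verify with modern orbital data; the sketch below checks that T²/a³ comes out (nearly) constant for several planets, using approximate textbook values in years and astronomical units.

```python
# Sketch: check Kepler's third law, T^2 proportional to a^3, with approximate
# textbook values (orbital period in years, semi-major axis in astronomical units).

planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
    "Neptune": (164.8, 30.07),
}

for name, (T, a) in planets.items():
    print(f"{name:8s}  T^2/a^3 = {T**2 / a**3:.3f}")   # ~1.0 for every planet
```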

This does not mean that everything the gatekeepers of rational thought consider “unreasonable” actually lacks underlying reason. Myths, unconventional theories about consciousness, alternative cosmologies, and various paranormal or metaphysical concepts may conform to natural principles operating within frameworks that extend beyond currently accepted boundaries. Contemporary science views planets as gravitationally bound masses and DNA as molecular structures, while our ancestors understood these same phenomena through narratives of divine guidance and magical serpents. Each perspective captures aspects of underlying reality through different conceptual frameworks.

Historical examples demonstrate how “unreasonable” ideas often prove prescient. Electricity, flight, wireless communication, space travel, nuclear energy, and much of modern technology would have seemed magical to previous generations. In 1899, Charles H. Duell, Commissioner of the U.S. Patent Office, wrote in his annual report that “the advancement of the arts from year to year taxes our credulity and seems to presage the arrival of that period when human improvement must end.”15 Besides the “low-tech” understanding of Bernoulli’s principle16 that enabled flight, within just a few years humanity would develop quantum mechanics, discover relativity, and begin the technological revolution that defines modern life. These phenomena became explicable once we discovered the principles governing them.

The Benevolence of Self-Preservation

If our reasoning derives from observing the world, then everything in nature and the Universe must operate “reasonably” according to existence’s laws. However, the reasoning we derive from limited observations sometimes proves inaccurate or incomplete.

Key 42: The rules of reason are predetermined based on the rules of our reality that we have recognized, and these rules require that any “reason” that exists must abide by them.

Consider our reasoning behind the evolutionary drive for self-preservation. We typically assume it represents a somewhat selfish yet necessary biological imperative passed through genes to maximize reproductive success. This makes sense when viewing the world as individual entities competing for resources.

Self-preservation can also be understood from the perspective of larger organisms composed of many contributing organisms.

Consider a collective such as a tribe. If the collective is destroyed, all members suffer. The collective therefore bears responsibility to its members. Conversely, if members become damaged, the collective suffers. The collective thus attempts to preserve both itself for the sake of its members and its members for the sake of itself. We typically do not conceptualize collectives as organisms, yet all organisms are collectives. Therefore, collectives can be organisms.

Apply this reasoning to an animal. The animal attempts to preserve itself for the sake of its members (organs, limbs, cells) and preserve its members for the sake of itself.

A tree protects itself for the sake of its members, which it requires for sustainability. Beyond bark and thorns, trees emit chemicals to repel predators or invading species. Leaves protect themselves with fine hairs, spines, thorns, and even toxic compounds. We tend to conceptualize trees as objects with trunks, branches, and leaves, but they can equally be understood as collective organisms composed of other collective organisms (trunk, branch, leaf), each maintaining itself and its components.

From the parent organism’s perspective, losing a member proves worthwhile if it benefits the collective and its capacity to preserve members.

Cardiac muscle cells possess self-preservation abilities to avoid foreign substances. Yet these defensive actions may damage cells sufficiently to cause fatal heart attacks, all while attempting self-preservation despite killing their parent organism.

Some theorize cells behave this way because hearts have greater survival chances if cells avoid damage and await possible resuscitation. This may be true, but consider how many successful resuscitations occurred during the 520 million years since hearts evolved, particularly before resuscitation techniques were developed in the 18th century. Why would heart cells “anticipate” resuscitation as possible? A more reasonable explanation posits that component elements of a person (cells) are compelled to preserve their component elements (nucleus, chromatin, cytoplasm, organelles). Killing the parent organism falls outside their concern or scope of influence. Preserving members constitutes the parent organism’s responsibility, which has evidently failed in such cases. The sacrificed parent organism (i.e., person), existing within the collective (tribe, community, country), represents only one improperly functioning member. This process occurs in our bodies 60 billion times daily as cells die and are created.

Key 43: The most efficient model for a collective is individual self-preservation.

Understanding self-preservation as protecting component parts makes more sense when recognizing that every organism in life’s hierarchy, from cells to cities, constitutes an individual entity.

Just as entities endanger their parent collective to preserve members, healthy collectives sacrifice members for the whole’s betterment.

This perspective explains how individuals sacrifice themselves for others with whom they share no genetic connection, which rules out the genetic preference at work when parents sacrifice for their children. This suggests our most basic drive extends beyond self-survival to the survival of something more important than the self, yet remains self-serving. Examples include honor, country, and divinity, common motivations in collectives.

This trait extends beyond humans. We observe it in single-cell organisms17, multi-cellular organisms18 such as slime mold, where specific cells sacrifice themselves for the greater organism when facing resource scarcity19, virulent parasites20, and plants21. Society exhibits this clearly in soldiers willing to die for country or ideology, and countries willing to send soldiers to war. Similar sacrificial behaviors emerge in human societies during times of scarcity, when resources become limited and collective survival requires individual sacrifice.

How does warfare constitute self-preservation? It represents self-preservation of the tribal entity, and the soldier’s self-preservation makes him valuable to the tribe. Otherwise he would prove useless. The soldier fights for his tribe because his interests and preservation depend on it, just as a leaf depends on its branch. Another factor obvious in humans and some animals, possibly existing in all life forms, is love. Humans regularly sacrifice themselves for love of family, country, or divinity. This strongly suggests love may provide as powerful or more powerful an evolutionary driver than genetic survival alone, though evolutionary biology has only recently begun rigorously investigating the evolutionary basis of bonding and altruistic behaviors. The true nature of love remains philosophically complex, though it produces at least one significant effect: commitment. This commitment proves primarily important to sustainability of both child organisms and parent organisms (family, tribe, and by extension, divinity). This raises an interesting philosophical question. Does divinity require such commitment for sustainability, as tribes and families require commitment from members? It seems reasonable that something’s sustainability depends on commitment from its components.

This self-sacrificing quality proves fairly common across life generally. A paper published in the International Academy of Ecology and Environmental Sciences, “Invasive cancer as an empirical example of evolutionary suicide,”22 addresses this phenomenon.

In recent years, a large portion of the literature has focused on evolutionary suicide. “Darwinian extinction” or evolutionary suicide is one of the most important findings in adaptive dynamics [which inevitably bring us to the conclusion] that evolutionary theory falls short of adequately explaining the phenomenon of life in its fullness and complexity. This is due to the fact that [evolutionary suicide] is not a rare or special case and that it can occur in the most common ecological conditions.

We also observe a less dramatic version in genomes, which reduce their replication rate instead of committing suicide to help other genomes.23

As unromantic as it sounds, “evolutionary suicide” constitutes commitment, an act of love, to the greater organism (not exactly Hallmark material). Still, this perspective proves reasonable and better explains how and why life operates as it does.

With ideas such as wave functions and quantum physics’ many-worlds theory, biology’s morphic fields, research into hyper-time telepathic communication (for long-distance space travel), and self-determining AI, what we accept as “reasonable” will dramatically change as we learn how these new processes of reality and life operate.

Some may view this as “Intelligent Design.” We prefer “Coherent Integration” to represent an integrated self-similar pattern of moving energy extending far above and below the sliver of reality’s spectrum we are attuned to perceive.

This idea resembles the theory put forth 250 years ago by the man who laid modern economics’ foundation specifically to eliminate poverty and increase sustainability. Adam Smith referred to this self-preservation power in his Theory of the Invisible Hand, described in his book An Inquiry into the Nature and Causes of the Wealth of Nations published in 1776.

The natural effort of every individual to better his own conditions, when suffered to exert itself with freedom and security, is so powerful a principle, that it is alone, and without assistance, not only capable of carrying on the society to wealth and prosperity, but of surmounting a hundred impertinent obstructions with which the folly of human laws too often encumber its operations.

It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. ~Adam Smith, 1776

Applying this idea to society suggests that the most sustainable practice for the world can be summed up in a phrase attributed to international banks, John Lennon and Yoko Ono, Buckminster Fuller, and others24.

Think globally, act locally

This is easily practiced by spending money at local businesses rather than at Walmart or Amazon, or, more simply, by following Mother Teresa’s answer when asked “What can we do to promote world peace?”: “Go home and love your family.”

Key 44: There is an order to creative processes, and creations are a product of order.

Given all this, ignoring the elephant in the room proves difficult. Governments are global tribes, and in healthy systems, governments support members and members support governments. Generally this has been the case, meaning a slight edge favors healthy over unhealthy governments. We know this through observing progress in justice and human rights concepts from Hammurabi’s Code, to the Magna Carta, to the Enlightenment producing the U.S. Constitution, which subsequently influenced many constitutions, particularly throughout the Americas and in nations seeking to establish democratic federal systems.

Ideas of justice and human rights possess a long and rich history. They did not originate exclusively in any single geographical region of the world, any single country, any single century, any single manner, or even any single political form of government or legal system. They emerged instead in many ways from many places, societies, religious and secular traditions, cultures, and different means of expression, over thousands of years. Indeed, they took millennia to evolve, since they always depended upon their specific historical context and what was possible in the face of established tradition and often determined resistance, at the time. ~Dr. Paul Gordon Lauren, historian, “The Foundations of Justice and Human Rights in Early Legal Texts and Thought”

That “slight edge” leaves considerable space for unhealthy governments, as any historian can attest. The modern world of global access has given rise to a new tribal type, the multinational corporation (MC), that does not answer to its tribal members, nor has its members’ support. The MC can, and tends to, operate more parasitically than synergistically, though parasitism has proven very successful. The largest of these MCs wield power exceeding most governments and exert tremendous influence even among the largest nations. Relative to governments, they prove efficient, fast, smart, and expand global networks at malignant rates. This is not the place to discuss how governments and MCs have formed global corporatism seemingly heading toward digital neofascism. In any case, this appears very unsustainable, much like industrial agriculture, as both systems destroy members’ interests and degenerate their abilities (people and plants, respectively). Both systems will ultimately fail or transform, but as history shows, this may take considerable time.

Cycles & Waves

Let’s return to the concept of oscillations. Cycles are similar to oscillations. The difference, at least in this book’s context, is that oscillations describe more quantitative properties such as frequency, wavelength, and amplitude, while cycles refer more to qualitative values such as medium, effects, seasons, life, and animal migrations.

Cycles are among the most fundamental concepts in our attempts to understand reality and the Universe, and among the earliest concepts humans explored, for obvious reasons.

Everything that has been studied has been found to have cycles present. We do not need to know what this force or cause is. It is enough to know that it must be something. We then proceed from there to find it. ~Edward R. Dewey, Founder of the “Foundation for the Study of Cycles” in 194125 and chief economic analyst for the Department of Commerce during the Great Depression.

The Big Cycle is the cycle of the Universe. Various theories describe how the Universe came into being, the most widely accepted being the Big Bang. However, uncertainty remains about how it will end. The candidates for the cosmic finale are the Big Crunch, the Big Freeze, and the Big Rip. Of these three, only the Big Crunch is cyclic, causing a new Big Bang. The Big Freeze and Big Rip scenarios offer no recovery; they are one-cycle-only options. The Big Crunch is the more sustainable option, as it delivers unending Big Bangs, each occurring inside a black hole and seeding a new universe of its own. Systems within systems, just like the reality we inhabit. “But what if it’s wrong?” becomes the natural question. In 1,000, 10,000, or 100,000 years, we will almost certainly hold entirely different views of how and why reality began and will end. Whatever we propose now (or at any time) describes reality as we perceive it, not as it actually is. Still, we have developed compelling ideas about creation and reality, such as the Holographic Theory and the Simulation Hypothesis. Even if this reality is “only” a simulation, that does not diminish its cyclic nature. If it is a simulation, whatever it simulates must be cyclic in nature.

Holographic Theory and Simulation Hypothesis

This section addresses cycles, specifically cycles of creation. The holographic and simulation hypotheses may not themselves be cyclic, though whatever they project or simulate presumably is. This digression proves relevant to understanding reality’s fundamental structure.

The essence of this idea has ancient roots, probably beginning with the first human who accidentally consumed the wrong mushrooms and thought, “this is all an illusion!” Its more formal lineage extends from the skeptical hypotheses of ancient Greece through “The Butterfly Dream” of the Chinese philosopher Zhuang Zhou (4th-3rd century B.C.). These ideas developed further through René Descartes’s “Cartesian doubt,” ultimately reaching contemporary form in Oxford philosopher Nick Bostrom’s influential 2003 paper “Are you living in a computer simulation?” (also a documentary26).

Two distinct ideas warrant consideration. The holographic theory offers testable predictions27, while the simulation hypothesis remains more philosophically oriented. Despite various lines of evidence, we face Zhuang Zhou’s original dilemma. If we can dream we are a butterfly, how do we know we are not dreaming we are human? If we exist within a simulation, how can we test for it when our testing instruments are themselves simulated? Philosophers have their work cut out for them. Since holograms share conceptual similarities with simulations, and some theoretical frameworks suggest reality may be holographic, we can productively consider them together.

How is this reality holographic?

Hologram: Creates images by channeling light through patterns formed by dividing one light source into two parts (direct light and reflected light modified by matter).

Universe: Creates things by channeling energy through patterns formed by dividing one energy source into two spectrums (radiant energy and matter).

The key to understanding both holograms and reality lies in how interference patterns create structure and information.

In a hologram, when direct light from the source meets reflected light that has bounced off an object, these two beams interfere with each other. Where the waves align (constructive interference), brightness appears. Where they cancel (destructive interference), darkness results. This interference pattern encodes all the three-dimensional information about the object into a two-dimensional surface. When you illuminate the hologram, it reconstructs the full 3D image from this encoded pattern.
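For readers who like to see the arithmetic, here is a minimal sketch, using nothing more than the standard two-beam interference formula, of how the brightness of two combined, equal-strength light waves swings between bright and dark as their phase difference changes:

    import math

    def intensity(phase_difference):
        # Two equal-amplitude beams: time-averaged brightness of their sum,
        # I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta), with I1 = I2 = 1.
        return 2 + 2 * math.cos(phase_difference)

    for label, phi in [("in phase (constructive)", 0.0),
                       ("quarter cycle apart", math.pi / 2),
                       ("half cycle apart (destructive)", math.pi)]:
        print(f"{label}: relative brightness = {intensity(phi):.2f}")

Fully in phase the beams are four times as bright as either beam alone; half a cycle apart they cancel entirely, which is exactly the brightness pattern a hologram records.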

The universe operates through a similar principle. The two spectrums of energy (radiant energy and matter) constantly interact and interfere with each other. This interference creates the patterns, structures, and laws we observe:

Just as a hologram’s interference pattern contains encoded information that generates a three-dimensional image, the continuous interference between radiant energy and matter creates the encoded information that generates physical reality. The patterns, laws, and structures we observe are not separate from this interference but are the direct manifestation of it. Where radiation and matter achieve stable interference patterns, persistent forms emerge. Where they cannot find stable interference, forms dissolve or transform.

Both systems demonstrate that reality is fundamentally about patterns created through interference, whether it’s light beams in a hologram or energy spectrums in the cosmos.

In this speculative scenario where reality is a holographic projection, the Big Bang becomes the equivalent of a cosmic hacker flipping the “on” switch to their “Reality Framework Server.” In some interpretations of the holographic principle, these projections could originate from black holes. Just as a hologram encodes three-dimensional information on a two-dimensional surface, the event horizon of a black hole (the boundary from which nothing escapes) could act as a two-dimensional information-encoding surface that projects our three-dimensional reality. We conceptualize this world as analog, but it may well be digital with a level of resolution far beyond our ability to perceive. This would make sense if one were designing a simulation. If the event horizon functions as a projection surface, it can be imagined as composed of a series of 1s and 0s, each “bit” occupying a square with sides of one Planck length (1.616 × 10⁻³⁵ m per side, or approximately 2.6 × 10⁻⁷⁰ m²). To establish perspective, the number of “bits” on the event horizon surface of Gaia BH1, one of our nearest known black holes located approximately 1,560 light-years away, would be on the order of 10⁷⁹ to 10⁸⁰. Current estimates place the total number of atoms in the observable Universe at around 10⁸², so a single stellar-mass black hole “projector” carries an amount of information comparable to the atom count of the entire visible cosmos. With an estimated 100 million such black hole projectors in our galaxy alone, and approximately 500 billion galaxies within our range of observation, we arrive at at least 5 × 10¹⁹ “higher-dimension reality projecting processors” (for lack of better terminology) operating, theoretically, in perfect synchronization.
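As a rough back-of-the-envelope check of that figure, assuming Gaia BH1’s published mass of roughly 9.6 solar masses and one “bit” per Planck-length square:

    import math

    G   = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    c   = 2.998e8            # speed of light, m/s
    l_p = 1.616e-35          # Planck length, m
    M   = 9.6 * 1.989e30     # assumed mass of Gaia BH1 (~9.6 solar masses), kg

    r_s  = 2 * G * M / c**2        # Schwarzschild radius, about 2.8e4 m (roughly 28 km)
    area = 4 * math.pi * r_s**2    # horizon area, about 1e10 m^2
    bits = area / l_p**2           # one "bit" per Planck-length square
    print(f"radius ≈ {r_s:.2e} m, area ≈ {area:.2e} m², bits ≈ {bits:.2e}")

The result lands near 4 × 10⁷⁹ “bits” for this one stellar-mass black hole.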

This raises an apparent paradox. If nothing escapes from a black hole’s event horizon, how can it “project” anything? The answer lies in the Big Crunch scenario discussed earlier. In that cyclic model, our universe itself exists inside a black hole from a parent universe. When matter and energy from the parent universe fell into that black hole, all the information was encoded on the event horizon surface according to the holographic principle. The event horizon doesn’t project outward into the parent universe. Instead, it serves as the informational boundary between the parent universe (outside) and our universe (inside). The Big Bang that created our universe wasn’t an explosion outward into existing space, but rather the emergence of new space-time within the black hole, using the information encoded on the event horizon as its “cosmic blueprint.” In this model, the event horizon acts as both the ending (where the parent universe’s information is preserved) and the beginning (where our universe’s information structure originates). The black holes in our universe would function the same way, with their event horizons encoding all the information that falls in, potentially seeding new universes within them. Nothing needs to escape. The event horizon is simultaneously the final surface of one reality and the initial template for the next. This nested, recursive structure means each universe generation inherits its informational structure from the event horizon boundary of its parent, creating an infinite regress of universes within universes, each encoded on the two-dimensional surface that separates them.

Even more mind-bending is the possibility that this reality is only “rendered” on demand. This simulation creates wave functions of possibilities that only collapse into form when “observed” (i.e., when interacting with the environment), a phenomenon well-established in quantum mechanics that also happens to be exactly what you’d expect if creating actual physical reality proved computationally intensive.

The word “simulation” implies an artificial copy of something pre-existing. A more accurate understanding would recognize this as genuine creation that may have been designed with a specific template or purpose.

Most creation theories share a surprising compatibility with creation myths from Judeo-Christian, Islamic, Hindu, Zoroastrian, Taoist, Buddhist, and even the world’s oldest continuous civilization, Aboriginal Australians28. Cycles of creation appear implicit in all these traditions and explicit in many. We would expect such similarity between radically different perspectives, from modern science to ancient religion, if they all attempt to understand the same underlying reality.

What Goes Where?

Key 45: All dualities stem from the first duality.

Suppose, as we have been doing all along, the two primal, conceptual, and opposing endpoints of the spectrum of our reality are somethingness and nothingness. This leaves considerable space for pretty much anything to exist. The principle of entropy maximization, together with the central limit theorem, tells us that the probability of where things will appear follows a natural distribution along the spectrum in what is called a Bell curve, or normal distribution of probability. Regardless of the number of variables at play, the probabilities of where things will appear will always tend toward this normal distribution.
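A minimal sketch of why this happens: when an outcome is the sum of many independent influences, the results pile up in the middle and thin out toward the extremes, whatever the individual influences look like. (The twelve uniform random values per sample below are an arbitrary choice, used only for illustration.)

    import random
    from collections import Counter

    random.seed(1)
    # Each sample is the sum of 12 independent random values between 0 and 1.
    sums = [sum(random.random() for _ in range(12)) for _ in range(100_000)]
    histogram = Counter(round(s) for s in sums)   # bucket each sum to its nearest integer
    for bucket in sorted(histogram):
        print(f"{bucket:2d} {'#' * (histogram[bucket] // 2000)}")

The printed bars form a bell shape peaking at the middle value, 6.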

If we imagine the ultimate Bell curve between the states of nothingness (high entropy chaos) and somethingness (low entropy chaos), the part of the curve representing the most sustainable condition appears in the middle. The middle marks the point where the least amount of disorder exists and therefore the highest probability for order emerges, as each side will be represented in the most balanced and sustainable manner. The chart below illustrates this general concept.

We observe this distribution pattern everywhere in nature. In theoretical biology, this is called environmental dimensionality and describes how various biological systems coexist29 in whatever scope and context apply.

Key 46: The most likely and persistent state of anything (including ideas) that exists due to other states will be where the most balance exists between those other states.

Key 47: Order emerges most efficiently in a balanced state.

Consider the human eye sensitivity chart below. It shows that a particular shade of green (555nm) proves most dominant because this wavelength falls at the middle of our biological sensitivity range to light. This demonstrates that the most efficient (most energetically ordered) function of our eyes is the perception of green. The prevailing view suggests primates and many other animals developed such sensitivity to green because they evolved in predominantly green environments. The tholonic view acknowledges this as a contributing factor, but identifies a larger reason.

The highest frequency energy that we have been able to detect is 10³⁰ Hz, and the lowest is 0.1 Hz, so on a spectrum from 0.1 Hz to 10³⁰ Hz, the visible light spectrum for humans and most mammals sits right in the middle if we use a logarithmic scale. This would make perfect sense if logarithmic scales are indeed how nature works: the middle of the spectrum would then be the place where the most order is likely to evolve, and therefore where we would most likely see new forms of order and evolution occur. However, one possible theoretical limit for radiation is the Planck frequency, which is about 10⁴³ Hz. It is interesting that our vision occupies the very center of the spectrum when we define it from the lowest to the highest frequency we are capable of detecting.

This center slice of the spectrum representing visible light coincides with the most dominant solar radiation reaching Earth, which spans approximately 400-1100 nm with peak intensity at around 500 nm (green) and high energy continuing through the near-infrared (900 nm).

The entire visible spectrum plus the near ultraviolet spectrum occupies approximately 1.19% of the electromagnetic spectrum (on a logarithmic scale from 0.1 Hz to 10³⁰ Hz), yet this narrow band falls near dead-center of the observable spectrum. This makes visible light the most “efficient,” ordered expression of EM energy and explains why this is the primary range of EM radiation used for photosynthesis in most organisms, occurring predominantly in the visible spectrum (between 400 nm and 700 nm). Plants evolved to absorb the most efficient form of energy that hydrocarbons could interact with, explaining why virtually all vision across animals and insects overlaps significantly with this range (typically 300-700 nm), regardless of whether they can detect only one range of color (dolphins) or five ranges of color (birds), and regardless of the color of the environment in which they evolved.
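Both of those numbers are easy to check. Depending on exactly where one draws the band edges, the logarithmic midpoint of the 0.1 Hz to 10³⁰ Hz span lands at about 3.2 × 10¹⁴ Hz (roughly 950 nm, just past the red edge of the visible band and within a fraction of a percent of the full span from its center), and the 300-700 nm band covers about 1.19% of the span:

    import math

    c = 2.998e8                   # speed of light, m/s
    f_low, f_high = 0.1, 1e30     # Hz, the detected range used in the text

    midpoint = math.sqrt(f_low * f_high)   # geometric (log-scale) midpoint
    print(f"log-scale midpoint ≈ {midpoint:.2e} Hz ≈ {c / midpoint * 1e9:.0f} nm")

    f_red, f_uv = c / 700e-9, c / 300e-9   # band edges expressed as frequencies
    fraction = (math.log10(f_uv) - math.log10(f_red)) / (math.log10(f_high) - math.log10(f_low))
    print(f"visible + near-UV ≈ {fraction * 100:.2f}% of the logarithmic span")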

But if plants should be absorbing green, why do they reflect green? Surprisingly, the answer to this question was only properly addressed in 202030. In short, researchers discovered that plants have opted for stability over efficiency, preferring to create a “portfolio” of energy sources from less energy-efficient colors with reduced input from the green range. This increases energy stability, eliminates large fluctuations in energy, and reduces “network noise” during the photosynthesis process. Because green delivers the most energy to a plant, it can cause rapid and dramatic changes in the chlorophyll network, which then must work harder to compensate for the irregularity. The red and blue light plants absorb is also processed at different rates, allowing for a more flexible and stable supply of electrons to the cells. This proves critical as too few electrons cause cell failure, while too many can cause cell damage.

This demonstrates a fundamental principle: stability is more important than efficiency, and sustainability trumps the path of least resistance. By strategically rejecting the most abundant energy source for long-term stability, plants achieve superior results. This apparent wisdom (prioritizing sustainability over immediate energetic advantage) suggests an organizing intelligence embedded in natural systems, whether through evolutionary optimization or some deeper principle of order, a concept that applies far beyond photosynthesis.

You might think blue light, which carries more energy than green light, would produce the same effect. However, green light can penetrate much deeper into the plant than blue light, making green light a far more efficient transmitter of energy. The cells in our eyes, which send signals to the brain, do not face the same demands as the chloroplasts in plants, which convert light into chemical energy, so there is no need for such advanced stabilizing processes.

The plant example illustrates a broader principle: efficiency is not absolute but contextual. Just as plants optimized for stability over maximum energy capture, organisms in different environments optimize according to entirely different constraints. What proves “efficient” in one context may be irrelevant or even counterproductive in another.

An excellent example of contextual evolution was found in the Movile Caves in Romania. These caves were sealed off 5,000,000 years ago, yet within the hostile environment, rich in hydrogen sulfide gas and sulfuric acid, at least 33 new species of life have evolved and flourished, feeding off the bacteria that consume rocks and other inorganic material (chemosynthetic bacteria).

Note: In the chart above, the maximum frequency listed, 10³⁰ cycles per second (cps), comes from the fastest gamma ray ever recorded, detected on Sept 16, 2008. It came from a mega-supernova approximately 12 billion light years away, which is to say this mega-supernova occurred 12 billion years ago when the Universe was about 1.5 billion years old. The material that emitted the gamma rays was moving at 99.9999% of the speed of light, suggesting that 10³⁰ cps approaches the upper limit of the maximum end of the spectrum.

Regarding the entropy labels in these charts: In the context of electromagnetic radiation, entropy aligns with the tholonic framework through two convergent principles. First, considering energy concentration, higher frequencies carry more energy per photon, representing concentrated energy states analogous to lower entropy (like the Big Bang singularity where all energy is concentrated). Lower frequencies carry less energy per photon, representing dispersed energy states analogous to higher entropy (like the heat death where energy is maximally dispersed). At the theoretical extreme, infinite frequency would contain infinite energy in a point (lowest entropy), while near-zero frequency approaches zero energy spread across space (highest entropy). Second, considering spatial distribution, longer wavelengths are literally more spread out per cycle, occupying more space and reflecting a more dispersed (higher entropy) state, while shorter wavelengths are spatially concentrated (lower entropy). Both perspectives converge on the same conclusion: higher frequency equals lower entropy, lower frequency equals higher entropy. However, it’s important to note that all electromagnetic waves, regardless of frequency, maintain perfect structural order as sine waves. The entropy distinction here refers to energy and spatial concentration rather than structural chaos.
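The energy-per-photon claim is just Planck’s relation, E = h × f; a few lines make the scale of the difference concrete:

    # Energy per photon at a few widely separated frequencies (illustrative values).
    h = 6.626e-34   # Planck's constant, J·s
    for label, f in [("1 MHz radio", 1e6), ("green light", 5.4e14), ("gamma ray", 1e24)]:
        print(f"{label:12s} f = {f:.1e} Hz  ->  E = {h * f:.2e} J per photon")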

The oscillations between two states, whether chaos and order, movement and stillness, or any two points that differ yet can form an interaction, represent the most basic form of energetic expression in all the orders of creation. On this point, metaphysics, spirituality, and science all happily converge. Everything exists in a duality of some sort, or at least that is how we can describe everything.

Were imbalance and difference to cease existing, the cycles of creation, nature, and the movements of the Universe would also cease. Nature is the balancing of these imbalances between systems across all scopes, extending all the way down (or up) to the first act of imbalance and balancing.

Waves

Waves express cycles. When we observe simple waves such as light, radio, and sound waves, we see oscillating movement of energy over time.

At the atomic level, electromagnetic waves include light, X-rays, and radio waves. At the material level, mechanical waves include sound. At the organic level and beyond (planets, life, culture, politics, etc.), we use the term cycles rather than waves, representing the same phenomenon in different contexts and scopes.

Frequency and wavelength are terms typically used to describe waves of radiation and vibration. A typical wave model shows one cycle of energy moving over a period of time. Frequency indicates how many cycles occur per unit time, while the distance covered in one cycle defines the wavelength. We could say the moon has a “wavelength” of approximately 2.4 million km (the circumference of its orbit) with a frequency of about 4.2 × 10⁻⁷ Hz (one cycle per 27.3-day orbit), or that the migration cycle of the Arctic Tern (a bird flying between the Arctic and Antarctic annually) has a frequency of 1/year with a wavelength of 70,000 km. We do not use such terminology because it proves cumbersome, non-intuitive, and contextually confusing.
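For the curious, those figures come from nothing more exotic than frequency = 1/period and wavelength = distance covered per cycle (the tern’s distance is the commonly cited round-trip migration figure):

    seconds_per_day = 86_400
    moon_period = 27.3 * seconds_per_day      # one orbit of the Moon, in seconds
    tern_period = 365.25 * seconds_per_day    # one Arctic Tern migration cycle, in seconds
    print(f"Moon:        f ≈ {1 / moon_period:.1e} Hz, wavelength ≈ 2.4 million km (orbital circumference)")
    print(f"Arctic Tern: f ≈ {1 / tern_period:.1e} Hz, wavelength ≈ 70,000 km (round-trip migration)")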

Despite the awkwardness of applying wave terminology to such cases, these natural cycles remain expressions of the same law governing light waves in energy transference, operating at different levels of expression with higher orders of complexity, size, and dependency. Instead of photons, we observe birds and planets.

Consider the interdependent cycles of cohabiting rabbits and coyotes. Comparing X-rays with prey-predator cycles might sound ridiculous, yet both share the most basic function of energy transference attempting to create balance, operating in very different scopes and contexts.

Not Just a Cycle

Waves may represent more than simple periodic cycles, at least in how they interact with their environment. When we take a wavelength equal to Earth’s circumference and trace it as a great circle around the planet, we find an extraordinary number of cultural artifacts representing incredible advances for their time, stretching over 7,000 years. Considerable speculation and research31 examines why the Great Pyramid and Machu Picchu, or the Nazca Lines and Easter Island, fall within this same line (shown in the image below) to within less than one-tenth of one degree. Other locations, such as Persepolis, Mohenjo Daro, Petra, Ur, and the temples at Angkor Wat, fall within one degree. Earth’s inhabitants seemingly possessed unconscious knowledge related to a wavelength matching the planet’s size. Could Earth’s inhabitants be moved by this planetary wavelength similar to how sand moves on a vibrating surface to form patterns?

The two most common wave classes are electromagnetic (photons) and mechanical (particles, M-waves). Comparing them would reveal similarities identifiable in cycles such as those mentioned here, but that extends beyond this book’s scope. We can at least compare the properties of these two common wave classes.

For physicists and electrical/mechanical engineers who might read this with extreme incredulity, many differences exist between electromagnetic (EM) and mechanical (M) waves. We expect energy to express movement quite differently when operating at subatomic scale vs Newtonian scale vs cosmic scale. Nevertheless, many analogous parallels exist between the two.

To establish perspective on their contexts and scopes, consider that the difference between an electron and the Rock of Gibraltar is about the same difference between the Rock of Gibraltar and the entire galaxy. It’s easy to imagine how a fundamental law might be expressed differently at these different scales.

Despite these vast differences in scale, the similarities prove more important, particularly how energy travels through media via waves. All energy forms represent redirections of energy from one state to another, as energy can be neither created nor destroyed. This is the first law of thermodynamics, the conservation of energy.

We have identified many energy forms including sound, chemical, radiant, electrical, atomic, mechanical, elastic, ionization, gravitational, and dark energy (the force causing the Universe to expand faster than expected). Is it unreasonable to imagine still more energy forms? Are emotions like love or fear, desires, thoughts, or imagination redirections of energy? What form does that energy take?

Can we claim to know every medium type, cycle, and controlling law? Certainly not, but we can begin exploring by applying fundamental laws of energy and cycles.

We can only recognize energy in one form or another, not as pure energy itself. You cannot see or detect light until light energy hits something such as a detector, dust particle, or optical cell in your eye. The same applies to electricity and all other energy forms.

The real mind-bender is realizing that the dust particle, and all mass, is also energy in a different form, as m = E/c² demonstrates. This means we can only see energy when it interacts with a denser form of itself. This principle mirrors the holographic analogy discussed earlier, where we see the image created by the interference between two forms of light (direct and reflected), but never the pure light itself. Similarly, in our universe, we observe the patterns created by the interference between the two spectrums of energy (radiant energy and matter), but never pure energy in isolation.

Energy alone resembles a black hole. We cannot see it directly, but we can observe its effects on its environment. Because of these effects, we can state that energy creates sustainable patterns, interacts (or interferes) with other energy forms, and must oscillate in some medium.

Key 48: We can only see the interactions of energy and not the energy itself.

What about energy that does not oscillate? Does 0 Hz frequency energy exist? Yes. It is called direct current (DC) electricity, the most common form in nature, including lightning, static electricity, and solar particles. However, the electrons transferring that energy do possess frequencies as both waves and particles.

Additionally, the energy being transferred and the electrons transferring it travel in a cycle. If they were not returned to the source and recycled, the current would end and the circuit would exhaust its electrons. While AC results from electrons staying in place and oscillating back and forth 60 times per second (in the U.S.), passing charge down the wire near the speed of light, DC force flows consistently in one direction like a river.

In DC circuits, while the electrical signal propagates near the speed of light, the electrons themselves drift through the conductor at remarkably slow speeds, typically less than a millimeter per second in common household wiring. This drift velocity represents the speed at which electrons physically return to the source. That cycle represents a slow, large, fundamental pattern in natural function.
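A rough sketch of that drift-velocity claim, using commonly cited values for copper and an assumed household wire (the 10 A current and 1.5 mm² cross-section are illustrative):

    # Drift velocity v = I / (n * q * A) for electrons in a copper conductor.
    n = 8.5e28            # free electrons per m³ in copper
    q = 1.602e-19         # electron charge, C
    A = 1.5e-6            # assumed wire cross-section, m² (1.5 mm²)
    I = 10.0              # assumed current, A
    v = I / (n * q * A)   # drift velocity, m/s
    print(f"drift velocity ≈ {v * 1000:.2f} mm/s")

The answer comes out around half a millimeter per second, which is why the electrons’ round trip back to the source is such a slow, large cycle compared with the near-light-speed signal they carry.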

Alternating current (AC) proves quite rare in nature, although AC technically exists as planetary magnetic fields flip. For example, Earth’s magnetic field has reversed 171 times in the last 71 million years, and the Sun’s magnetic field flips every 11 years. The modern AC form of electricity was invented by Frenchman Hippolyte Pixii in 1832 and popularized by Nikola Tesla, creating a new scope and context of energy design by and for humans.

We lack words for many phenomena we have not yet recognized and therefore cannot describe or measure. Speculating that cycles and media entirely new to us exist in the realm of ideas and archetypes proves not merely reasonable but necessary, as they too express energy.

The real question regarding energy remains “What is energy?” Surely the answer would illuminate the very purpose and meaning of existence, as everything that exists is energy. Fortunately, millennia of research and investigation into this greatest mystery have provided an answer:

Energy is the ability to perform work

Were you expecting something more profound? Well, stand in line. This definition resembles defining a cat as “the ability to reduce mouse population.” Both definitions describe what the subject can do but nothing about what it is.

Many types of energy exist, such as kinetic, potential, thermodynamic, and metabolic. We know how to measure it, how it acts, and that it cannot be created or destroyed. Energy resembles the proverbial duck. We know how it looks, quacks, and walks, but we have no idea what a “duck” is.

With this definition, the answer to “What is phlimquitz?” becomes “the ability to increase suicides by increasing the federal space technology budget.”

As far as science concerns itself, energy is simply a value equating to movement, which proves adequate for physical sciences but inadequate for understanding the mechanics of awareness, which constitutes this book’s purpose (and which we will address very soon).

We can expand on this by noting that ordered systems “work” more efficiently than chaotic systems. Chaotic systems do not synchronize with other systems32, so like the Las Vegas of energy, any “work” performed in a chaotic system stays in that system (mostly). Ordered systems synchronize well, so “work” distributes more easily between systems, allowing much wider energy distribution. This irony that order facilitates chaos deserves repeating:

Low entropy chaos creates order to better distribute energy, resulting in high entropy chaos.

Key 49: A high state of order distributes more “work”.

Work

Refining the concept of work may seem like an odd segue at this point, but now that it has been brought up, let’s be clear about what this means.

In a physical sense, work is anything that can be measured in joules or calories. Work is defined as using force to move something. That something might be atoms, molecules, muscles, bricks, neurons, or anything that can move, regardless of why such things are being moved. From a physics perspective, building the Taj Mahal and creating the world’s largest garbage landfill are the same regarding the work involved if they both burned the same number of calories. This is a purely quantitative understanding. As a qualitative measure, consider that the work, in calories, expended by a full year of full-time effort on solving the quantum theory of gravity, or by a week-long backpacking adventure, amounts to less than 7 prime rib dinners with a beer and dessert33.
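Using the figures from the footnote (13 calories per hour of thinking, a 2,080-hour working year, and a 4,060-calorie dinner), the arithmetic works out as follows:

    brain_kcal_per_hour = 13
    work_hours_per_year = 2_080      # 40 hours per week × 52 weeks
    dinner_kcal = 4_060              # prime rib dinner with beer and dessert (footnote's figure)
    year_of_thinking = brain_kcal_per_hour * work_hours_per_year   # 27,040 kcal
    print(f"{year_of_thinking} kcal ≈ {year_of_thinking / dinner_kcal:.1f} dinners")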

The issue is that work is measured only in quantitative values, which cares nothing about the qualitative values of that work. This is fine when optimizing a steam engine but not when applied to the real world, where quality is just as important as quantity, if not more so. Where humans are concerned, quality requires intention. One must intend to create something beautiful, like the Taj Mahal, as it will not happen by accident.

Let’s see if we can translate our energy concepts into the realms of effort and intentions. For example, we all understand the concept of resistance in our daily lives, which is anything that makes whatever we are attempting to do more difficult. Resistance is usually measured in time, money, or aggravation.

Volts, or potential, is also easy to equate. We are (hopefully) aware that we can probably lift 100 pounds but not 500 pounds, as doing so will most definitely short circuit your back. The same goes for current, which is more like stamina. You cannot lift a 500 pound stone, but you can probably lift a 5 pound stone 100 times.

It is not that far a stretch to relate physical work with electrical work, as in both cases, the work can be measured in joules. However, in the realm of human will and conscious intentions, these definitions get a bit more abstract as we have no idea how to measure such things.

An example of this might be the effort it takes to quit smoking. Effort, in this context, captures the concept of work as well as intention, but in a totally different context. It takes a lot of effort to quit smoking, but minimal calories and essentially no additional measurable physical energy, as we usually define that word.

With this definition, the technical meaning of entropy is useless, as no scientific instrument can measure intention or will, but the concept is still valid. Entropy in this context of effort is the amount of effort not available for use, or simply lack of effort.

How can we measure something subjective, especially something already so abstract? If you ask me which I would prefer to do, dig a trench for 6 hours or listen to a 1-hour lecture on Post-modern Fashion in Appalachia, I would be digging before you could say “in Berkeley”. I would rather expend an additional 10,000,000 joules (2390 calories) shoveling dirt than sitting in a lecture hall feeling tortured.

Here, the path of least resistance is far more thermodynamically inefficient, but that is just for me. Someone else would be tickled pink to attend such a lecture (I assume).

However, if energy must always travel the path of least resistance, then one can only presume that the non-thermodynamic energy I am conserving by not having to listen to that lecture is greater than the thermodynamic energy I am losing shoveling dirt.

Whatever that energy is, science cannot measure, and what science cannot measure, science ignores. One way it could be quasi-measured is to ask what the limit of shoveling is before I choose the lecture. If I draw the line at 6.5 hours of shoveling, then I must feel the lecture will cost no more than the psychological equivalent of 10,000,000 joules, whatever that is (psycho-joules? Calories of aggravation?).

Even then, that value could change at any moment, for any reason or no reason, depending on my state of mind or how much wine I drank at dinner.

Pressure, difference, order, movement, and pattern are low entropy states or states that require effort to maintain. We can now add the more qualitative concepts of moderation, control, command, discipline, and determination to this category.

Dispersion, disorder, and stillness are all high entropy states or states that require little effort, as do the qualitative states of indecision, apathy, doubt, irresolution, and weakness.

Microstates, the number of possible configurations in which energy can move between two or more states, remain the same in concept when applied to effort, within the context of that effort. How many ways can you, as a system, expend effort when you have the intention to address a specific issue?

For example, how many ways can you quit smoking, learn piano, lose weight, kill mosquitoes, or write a poem?

Take one example, such as losing weight, which can be accomplished in at least 29 ways34. This is similar to how the quanta use different paths to distribute themselves among bonds.

In North America, the most popular way to lose weight, and the way chosen by 60% of dieters (according to Nielsen Global Health & Wellness Survey), is to change their diet away from fatty and processed foods and more towards natural, fresh foods. We might then surmise that this microstate is the one that represents the balanced application of effort.

Obviously, a major difference is that humans, and all living things, can select many microstates at once, unlike quanta, which can only select one.

Still, the probability curve will remain the same if we consider a living thing as having many quanta of effort.

In molecules, the exchange of energy is induced by the natural tendency for energy to balance itself between 2 different states of energy, such as hot and cold. Here, the 2 differing states are what is and what is intended, regardless of whether that intention resulted from necessity or desire. The intention acts to balance that difference, as in “I am overweight. I need to lose weight. I will diet.”

But, as we all know too well, intention without effort is useless (and possibly dangerous). Energy causes something to change direction or move. Here, it is effort that causes something to change direction or move.

Whatever is being changed acts as resistance, so we will just call it that.

Notice what we have now. Resistance is acting like mass that requires effort over time, or force, to change, and the amount of that force is determined by one’s intention, or force of will.

Metaphorically, the concept of F = m × a is perfectly captured in this one sentence from a speech given to the graduates of the Lenox Academy in Massachusetts, which coined the phrase “onward and upward”:

Fail not for sorrow, falter not for sin, but onward, upward, till the goal ye win. ~Frances Anne Kemble, 1809-93

This phrase makes it quite clear that intention to reach one’s goal must be a consistent force greater than the resistance of (at least) sorrow and sin.

Can we then say, if F = m × a, then force = resistance × intention, or f = r × i? In this mapping, resistance acts like mass (what resists change), and intention acts like acceleration (the rate at which you want to change).

Let’s test this. We will use the values f = force, e = effort, t = time, r = resistance, and i = intention.

Imagine two scenarios in which people with different smoking histories must quit.

In either case, the force of smoking is 20. To break free, that is, to bring the force to 0, one must either reduce the psychological intention to smoke (i) or weaken the physical habit (r).

The smoking impulse (effort × time) distributes itself between physical resistance and psychological intention. If we know someone’s intention to quit, we can calculate how much of that impulse manifests as resistance: r = (e × t)/i, or r = f/i.

Working through such examples, the long-time heavy smoker has the greatest resistance to overcome, while the beginning mild smoker’s resistance to quitting is small; the long-term mild smoker, however, has twice the resistance of the short-term heavy smoker.

Notice the pattern: the total smoking impulse (e × t) remains constant but distributes differently between resistance and intention. Someone who does not see smoking as a problem (low r) develops strong psychological attachment (high i). Someone who recognizes it as problematic (high r) may have weaker psychological attachment (low i). Either way, r × i = e × t = f.

Similarly, if we know the resistance level, we can calculate how much manifests as psychological intention to smoke: i = (e × t)/r, or i = f/r. The sketch below works through both directions with hypothetical numbers.
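Here is a minimal sketch of the bookkeeping with purely hypothetical numbers; the units of effort, time, and intention are arbitrary, and only the relationships f = e × t, r = f/i, and i = f/r matter:

    # Hypothetical values only: e = daily smoking effort, t = years smoked,
    # i = psychological intention to smoke. Then f = e × t and r = f / i.
    smokers = {
        "long-term heavy smoker":  {"e": 10, "t": 10, "i": 10},
        "short-term heavy smoker": {"e": 10, "t": 2,  "i": 10},
        "long-term mild smoker":   {"e": 2,  "t": 10, "i": 5},
        "beginning mild smoker":   {"e": 2,  "t": 2,  "i": 4},
    }
    for name, s in smokers.items():
        f = s["e"] * s["t"]    # total smoking "force"
        r = f / s["i"]         # resistance implied by that force and intention
        print(f"{name}: f = {f}, r = {r:.1f}, i = f/r = {f / r:.1f}")

With these illustrative values, the two f = 20 cases come out with the long-term mild smoker carrying twice the resistance of the short-term heavy smoker, matching the pattern described above.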

Was this test successful? It correctly predicts who would have the hardest time quitting, the effects of effort over time, and even tells us that smoking a little for a long time creates a bigger problem than smoking a lot over a little time, which we know to be true35.

While we cannot apply the statistical data of physics to these concepts of effort, force, resistance, and intention, there certainly seems to be a correlation between the laws of physics and these broader interpretations of the physical attributes.

This is what we would expect to see if we were looking at how fundamental patterns of energy apply to different contexts.

If this correlation has any merit, then we can also draw an analogy to electricity. Intention acts like voltage, effort like current, and resistance remains resistance. The force (f = r × i) functions like electrical energy rather than power.

This correlation is poetically poignant when you consider how high-voltage/low-current circuits are more dangerous than low-voltage/high-current circuits. You will not feel anything if you touch the terminals of a 12 volt 5-amp battery, but touch a 220-volt supply with 0.27 amps, and you will be shocked, literally and figuratively, even though they both have the same amount of power.
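A quick sketch of that comparison; the 100 kΩ body-resistance figure is a rough assumption for dry skin, not a measured value:

    # Same available power, very different effect on a body (assumed 100 kΩ dry skin).
    body_resistance = 100_000   # ohms, assumed
    for label, volts, amps_available in [("12 V battery", 12, 5), ("220 V supply", 220, 0.27)]:
        power = volts * amps_available                  # watts the source can deliver
        body_current = volts / body_resistance * 1000   # mA actually pushed through the body
        print(f"{label}: ~{power:.0f} W available, ~{body_current:.2f} mA through the body")

Both sources offer about 60 watts, but only the higher voltage pushes enough current through the body’s resistance to be felt.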

This implies that big intentions with little effort are more dangerous than small intentions with a lot of effort. This notion is supported by the saying “The road to hell is paved with good intentions.”

Is this true in practice? Is attempting to start a new business, get married, or climb Mount Everest with little effort more dangerous than 100% commitment to balancing your checkbook (not that anyone has a checkbook these days), waxing the floor, or buying a new pair of slippers? For anyone who has experience with these examples, the answer is obvious.

This reflects the Zen wisdom of “be-here-now.” Full presence in the current moment, complete commitment to the task at hand regardless of its perceived importance, creates a foundation for genuine accomplishment that grandiose but unfocused intentions cannot match.

Simple Cycles

Let’s look at the simple cycle again as the archetypal pattern that describes the movement of energy through a medium, regardless of the medium. In some contexts, this pattern is quite measurable. In others, it is analogous, such as in the migration pattern of the Arctic Tern.

When we examine predator-prey cycles, we see they follow the same cyclical archetype as light, electricity, or sound. Below we show a typical predator-prey cycle (taken from a Northern Arizona University online biology class). The left column shows the Predator-Prey Cycles, and beneath that, the cycle of the system of coyote/rabbit (combined cycles). The right column shows the parent archetype of the Predator-Prey Cycles, which is 2 perfect sine waves out of phase by 90 degrees.

This is a naturally occurring cycle, but notice that the two cycles naturally occur π/2 radians (90 degrees) apart, placing them in quadrature: one population peaks exactly when the other crosses the midpoint of its rise or fall. The peak of the predator population aligns with the midpoint of the prey population’s decline, exactly as we see in oscillating electrical circuits, where voltage and current in a purely reactive circuit are π/2 out of phase, one peaking as the other crosses its midpoint (the electric and magnetic fields of a light wave are likewise set 90 degrees apart, though in orientation rather than in phase). In this analogy, the predator acts like the voltage, and the prey acts like the current.
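The archetype itself, two sine waves in quadrature, is easy to generate; the population sizes and the ten-unit period below are hypothetical:

    import math

    period = 10.0   # one full population cycle, in hypothetical units (e.g., years)
    for step in range(11):
        t = step * period / 10
        rabbits = 1000 + 500 * math.sin(2 * math.pi * t / period)              # prey
        coyotes = 100 + 50 * math.sin(2 * math.pi * t / period - math.pi / 2)  # predators lag by 90°
        print(f"t = {t:4.1f}  rabbits = {rabbits:7.1f}  coyotes = {coyotes:6.1f}")

In the printout the rabbit peak arrives a quarter of a cycle before the coyote peak, and each population crosses its midpoint exactly when the other peaks.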

In the world of electricity, resistance = voltage/current (R = V/I), and power = voltage × current (P = V × I). Does this mean we can find the equivalent to resistance in the predator/prey cycle with predator/prey?

Let’s test this with hypothetical extreme scenarios. Imagine a population crash where there are 1000 coyotes but only 10 rabbits left. The resistance would be 1000/10 = 100. Now imagine the opposite: 10 coyotes with an abundant rabbit population of 1000. The resistance would be 10/1000 = 0.01.

Looking at these results, we see an interesting pattern. In electrical terms, high resistance blocks energy flow. Here, high resistance (many coyotes, few rabbits) means the system is stressed and coyotes starve. Low resistance (few coyotes, many rabbits) means abundant energy flow from prey to predator. So the analogy actually works, but from the system’s sustainability perspective rather than individual success.

Let’s trace this energy flow more explicitly. If the rabbits are being eaten, there is a transfer of energy from rabbit to coyote, and the coyote continues to contribute to the chain downstream, so to speak. If there are no rabbits, the coyotes die of starvation, and while the energy is still transferred back into the earth (from decomposed coyotes who died of starvation), the downstream chain has been weakened.

In this sense, the value of the resistance represents the resistance to the chain’s sustainability, as rabbits are to coyotes, as coyotes are to wolves, as wolves are to mountain lions. Ultimately, everything ends up back in the ground, feeding the grass the rabbits eat, completing the cycle.

In the coyote/rabbit world, a coyote needs about 275 rabbits a year to survive if all they ate was rabbits. This would then mean that the baseline r-value for a coyote/rabbit system is r = 1/275 = 0.00363. Of course, coyotes do not just eat rabbits, so in practice, r-values for all the systems interacting with each other would need to be determined and integrated before these values had any practical meaning.
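Putting those numbers side by side, with the predator/prey ratio standing in for resistance:

    # R = predators / prey for the scenarios discussed above.
    scenarios = {
        "stressed system (1000 coyotes, 10 rabbits)":    (1000, 10),
        "abundant system (10 coyotes, 1000 rabbits)":    (10, 1000),
        "baseline (1 coyote per 275 rabbits eaten/yr)":  (1, 275),
    }
    for label, (predators, prey) in scenarios.items():
        print(f"{label}: R = {predators / prey:.5f}")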

What about the power equivalent with predator × prey? Certainly, the concept of power, which is the rate at which energy is transferred, applies to coyotes eating rabbits. Current, which is the amount of energy moving through a system, would apply to the rabbits, at least as far as the coyotes are concerned. With a little imagination and creativity, you can see how this ecological cycle could be described as an oscillating circuit using concepts of Newton’s 2nd law of motion.

There is another archetype not shown, and it is the archetype of the archetype. As this meta-archetype has no context, no values apply, so it is just a flat line, a frequency of 0 Hz, just like nature’s DC current.

We keep bumping into the same universal constants and concepts in humanity’s quest to understand reality and its processes, either through religion, culture, economics, nature, alchemy, or science.


  1. Livio, M. (n.d.). “Why Math Works”. https://www.scientificamerican.com/article/why-math-works↩︎

  2. Britannica, T. E. (2019, April 08). “Laws of thought”. https://www.britannica.com/topic/laws-of-thought↩︎

  3. “15 Insane Things That Correlate With Each Other”. (n.d.). http://www.tylervigen.com/spurious-correlations↩︎

  4. Abreu, N. (n.d.). “Methodology for Investigating the Hypothesis of Anomalous Remote Perceptions as Objective Phenomena.” http://cref.tripod.com/tucsonpaper.htm Science of Self Club, University of Florida↩︎

  5. This was calculated by Jordan Hurst, an independent game developer and writer from Toronto, Canada.↩︎

  6. Meyer, Stephen C. “Darwin’s Doubt”. HarperCollins Publishers Inc, 2014.↩︎

  7. Examples include the Baghdad battery and other evidence of early electrical knowledge, the massive stone pillars at Göbekli Tepe (which present engineering challenges even for modern technology), sophisticated astronomical knowledge in ancient cultures, the construction methods of the pyramids, and more. One particularly intriguing example appears in Egyptian depictions of Min, the god of fertility and sexuality. In carvings from 2500 B.C.E., what appears to be ejaculate contains a clear diagram resembling a sperm cell, complete with head and tail, shown in context and placement that suggests anatomical knowledge. According to conventional understanding of that period’s technology, Egyptians could not have possessed microscopes with the 100x magnification necessary to observe sperm cells. ↩︎

  8. Curtis, Gregory. “The Cave Painters: Probing the Mysteries of the World’s First Artists.” Knopf, 2006. See also French Ministry of Culture, Lascaux Cave Art resource on thematic repertoire showing human figures are limited compared with animals. https://archeologie.culture.gouv.fr/lascaux/en↩︎

  9. Sauvet, Georges, et al. “Signs, Symbols, and Myth: The Significance of Palaeolithic Graphic Imagery.” Cambridge Archaeological Journal, vol. 19, no. 3, 2009, pp. 307-325. Discusses how graphic signs and imagery function as supports for collective myths and social stability, representing categorical and archetypal rather than individual representations.↩︎

  10. McDermott, LeRoy. “Self-Representation in Upper Paleolithic Female Figurines.” Current Anthropology, vol. 37, no. 2, 1996, pp. 227-275. Proposes that facelessness and omission of individuating features in Venus figurines suggests function oriented toward type, role, or embodied concept rather than portraiture.↩︎

  11. Lewis-Williams, David, and Thomas Dowson. “The Signs of All Times: Entoptic Phenomena in Upper Palaeolithic Art.” Current Anthropology, vol. 29, no. 2, 1988, pp. 201-245. Discusses caves as liminal spaces where altered consciousness and ritual preparation facilitated the manifestation of archetypal imagery into material practice.↩︎

  12. Frazer, James George. “The Golden Bough: A Study in Magic and Religion.” Macmillan, 1890 (abridged edition 1922). This comprehensive comparative study documents magical practices across cultures, including sympathetic magic principles where conceptual representation (images, rituals) influences material outcomes, a universal pattern of understanding reality through symbolic manipulation.↩︎

  13. Jaynes, Julian (2000) [1976]. “The origin of consciousness in the breakdown of the bicameral mind” (PDF). Houghton Mifflin. p. 73. ISBN 978-0-618-05707-8.↩︎

  14. This quote is also interpreted in the D.C. Lau translation of the Tao Te Ching, Chapter 18, as “When the great way falls into disuse, there are benevolence and rectitude. When cleverness emerges, there is great hypocrisy.” Lao Tzu. Tao Te Ching. Translated by D.C. Lau, Penguin Books, 1963.↩︎

  15. Duell, Charles H. “Report of the Commissioner of Patents for the Year 1899.” U.S. Patent Office, 1899.↩︎

  16. Fast-moving air is at a lower pressure than slow-moving air, so the pressure above the wing is lower than the pressure below, creating the lift that powers the plane upward.↩︎

  17. Fiegna, F., & Velicer, G. J. (2003). “Competitive fates of bacterial social parasites: Persistence and self-induced extinction of Myxococcus xanthus cheaters”. Proceedings of the Royal Society of London. Series B: Biological Sciences, 270(1523), 1527-1534. doi:10.1098/rspb.2003.2387↩︎

  18. Muir and Howard, 1999 - Rankin, D. J., López-Sepulcre, A., Foster, K. R., & Kokko, H. (2007). “Species-level selection reduces selfishness through competitive exclusion.” Journal of Evolutionary Biology, 20(4), 1459-1468. doi:10.1111/j.1420-9101.2007.01337.x↩︎

  19. Rainey, Paul B. “Precarious Development: The Uncertain Social Life of Cellular Slime Molds.” Proceedings of the National Academy of Sciences, vol. 112, no. 9, 2015, pp. 2639-2640., doi:10.1073/pnas.1500708112. https://www.pnas.org/content/pnas/112/9/2639.full.pdf↩︎

  20. Roode, J. C., Pansini, R., Cheesman, S. J., Helinski, M. E., Huijben, S., Wargo, A. R., . . . Read, A. F. (2005). “Virulence and competitive ability in genetically diverse malaria infections”. Proceedings of the National Academy of Sciences, 102(21), 7624-7628. doi:10.1073/pnas.0500078102↩︎

  21. Gersani, M., Brown, J. S., O’brien, E. E., Maina, G. M., & Abramsky, Z. (2001). “Tragedy of the commons as a result of root competition”. Journal of Ecology, 89(4), 660-669. doi:10.1046/j.0022-0477.2001.00609.x↩︎

  22. Ahmed Ibrahim, “Invasive cancer as an empirical example of evolutionary suicide”, Network Biology, 4(2), June 2014.↩︎

  23. Levin, Samuel R., and Stuart A. West. “The Evolution of Cooperation in Simple Molecular Replicators.” Proceedings of the Royal Society B: Biological Sciences 284, no. 1864 (November 2017): 20171967. https://doi.org/10.1098/rspb.2017.1967.↩︎

  24. Buckminster Fuller, Jr. (1895-1983), http://mindprod.com/ethics/quote.html; Rene Dubos, as an adviser to the United Nations Conference on the Human Environment in 1972, http://capita.wustl.edu/ME567_Informatics/concepts/global.html; a slogan attributed to Yoko Ono and popularized with the help of her husband, John Lennon, http://www.everything2.com/index.pl?node_id=680227; “A well-known international bank coined the phrase” (states Louisa T. C. Kim, President of Korea TESOL).↩︎

  25. For a fascinating tour of cycles in general, browse through the archives of the Foundation for the Study of Cycles, which has over 100,000 documents related to cycles compiled, written, and curated by scholars and scientists. https://cycles.org/. Dewey’s book, “The Case for Cycles”, is available for download at https://cyclesresearchinstitute.org/pdf/cycles-general/case_for_cycles.pdf↩︎

  26. The Simulation Hypothesis Documentary. (2018, August 01). https://youtu.be/pznWo8f020I↩︎

  27. “Study Reveals Substantial Evidence of Holographic Universe.” University of Southampton. Accessed April 19, 2022. https://www.southampton.ac.uk/news/2017/01/holographic-universe.page.↩︎

  28. Klein, Christopher. “DNA Study Finds Aboriginal Australians World’s Oldest Civilization.” History.com, A&E Television Networks, 23 Sept. 2016, https://www.history.com/news/dna-study-finds-aboriginal-australians-worlds-oldest-civilization↩︎

  29. Parvinen, Kalle, and Ulf Dieckmann. “Environmental Dimensionality”. Journal of Theoretical Biology, 2018, doi:10.1016/j.jtbi.2018.03.008. https://www.ncbi.nlm.nih.gov/pubmed/29551543↩︎

  30. Ortega, Rodrigo Pérez, “Why Are Plants Green? To Reduce the Noise in Photosynthesis.” Quanta Magazine, September 4, 2020. https://www.quantamagazine.org/why-are-plants-green-to-reduce-the-noise-in-photosynthesis-20200730/.↩︎

  31. “Exploring Geographic and Geometric Relationships Along a Line of Ancient Sites Around the World” https://grahamhancock.com/geographic-geometric-relationships-alisonj↩︎

  32. Louis M. Pecora, Thomas L. Carroll, “Synchronization of chaotic systems”, 2015, Chaos: An Interdisciplinary Journal of Nonlinear Science, https://aip.scitation.org/doi/abs/10.1063/1.4917383↩︎

  33. Based on the fact that the brain burns approximately 13 calories an hour, so 13 × 2,080 hours a year = 27,040 calories, and a prime rib dinner with a beer and dessert is approximately 4,060 calories.↩︎

  34. Add protein to your diet, eat whole, single-ingredient foods, avoid processed foods, limit your intake of added sugar, drink water, drink (unsweetened) coffee, supplement with glucomannan, avoid liquid calories, fast intermittently, drink (unsweetened) green tea, eat more fruits and vegetables, count calories once in a while, use smaller plates, try a low-carb diet, eat more slowly, add eggs to your diet, spice up your meals, take probiotics, get enough sleep, eat more fiber, brush your teeth after meals, combat your food addiction, do some sort of cardio, add resistance exercises, use whey protein, practice mindful eating, focus on changing your lifestyle.↩︎

  35. BMJ 2018;360:j5855, https://www.bmj.com/content/360/bmj.j5855↩︎