Tholonia - 040-THE_LAWS
The Existential Mechanics of Awareness
Duncan Stroud
Published: January 15, 2020
Updated: Jan 1, 2026
Welkin Wall Publishing
ISBN-10:
ISBN-13: 978-1-6780-2532-8
Copyright ©2020 Duncan Stroud CC BY-NC-SA 4.0
This book is an open-source book. This means that anyone can
contribute changes or updates. Instructions and more information at https://tholonia.github.io/the-book (or contact the
author at duncan.stroud@gmail.com). This book and its on-line version
are distributed under the terms of the Creative Commons
Attribution-Noncommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
license, with the additional proviso that the right to publish it on
paper for sale or other for-profit use is reserved to Duncan Stroud and
authorized agents thereof. A reference copy of this license may be found
at https://creativecommons.org/licenses/by-nc-sa/4.0/. The
above terms include the following: Attribution - you must give
appropriate credit, provide a link to the license, and indicate if
changes were made. You may do so in any reasonable manner, but not in
any way that suggests the licensor endorses you or your use.
Noncommercial - You may not use the material for commercial purposes.
Share Alike - If you remix, transform, or build upon the material, you
must distribute your contributions under the same license as the
original. No additional restrictions - you may not apply legal terms or
technological measures that legally restrict others from doing anything
the license permits. Notices - You do not have to comply with the
license for elements of the material in the public domain or where your
use is permitted by an applicable exception or limitation. No warranties
are given. The license may not give you all of the permissions necessary
for your intended use. For example, other rights such as publicity,
privacy, or moral rights may limit how you use the material.
There are many laws that describe how energy works. From Archimedes' Principle in the 3rd century B.C. to the present-day laws of quantum mechanics, researchers have been compiling and updating a long list of such laws.
One fundamental principle states that energy will always follow the path of least resistance. This principle manifests through the concept of entropy, which explains why water runs downhill to form rivers, why electricity flows through conductors, why high pressure seeks low pressure, and why structures degrade over time.
In thermodynamics, entropy measures the amount of energy in a system that is unavailable for doing useful work. This concept can be confusing because entropy describes a relationship rather than a tangible object. Consider an analogy with currency. If a cashier gives you 30¢ change from $1 after a 70¢ purchase, we don’t say “here’s your not-70¢ change,” yet that’s essentially what entropy describes. It measures the portion of energy that cannot perform work. The entropy of something resembles using someone’s age as a measure of their remaining lifespan. More age means less remaining life. Why measure what energy cannot do rather than what it can do? For the same reason we cannot count down from some unknown maximum lifespan to zero. We don’t know the final energy state of any system, just as we don’t know anyone’s final age. Instead of measuring the usable energy directly, we measure how much has become unavailable, like tracking growing credit card debt.
A quick review of entropy
Classical entropy is a thermodynamic concept based on observations of how heat, or thermal energy, distributes itself within a system. At its core, entropy measures how many different microscopic arrangements (microstates) correspond to the same macroscopic state. While entropy itself is not a physical object, it is a real and measurable property of systems. Consider financial debt. You don’t possess negative ten dollars as a tangible thing, but you do have a real debt of $10. Entropy relates to energy similarly to how financial debt relates to net cash balance (though this analogy can extend to other concepts of “debt” and “value”).
Using the credit card metaphor, imagine how many ways you can spend money in the Dubai Mall, one of the largest malls in the world (1,124,000 m², or approximately 157 football fields). Now imagine how many ways you can spend money in Mexican Hat, Utah, a tiny town with minimal services.
The number of spending options differs dramatically between these locations. In this metaphor, entropy’s “how many ways something can occur” equates to how many ways you can spend money. The ability to do “work” depends on the scope of the “work” and the context of where it is applied. A $300 credit card can purchase relatively little in Dubai (perhaps a sandwich or a couple pairs of socks), but at a remote gas station, the same amount might be sufficient to obtain the resources needed to get home. Conversely, a $1,000,000 credit limit enables extensive “work” in Dubai when “work” means “obtaining resources to live luxuriously,” but would be largely useless in Mexican Hat where such services don’t exist. The point is clear. The ability to do work depends on both the available resources and the opportunities to deploy them. As black holes account for most of the total entropy in the Universe, you can think of the banks and governments that control and own most of this debt as financial black holes.
In a concrete scientific example, imagine two simple molecular systems named A and B, each consisting of particles connected by chemical bonds. In chemistry, energy is stored in the bonds between atoms. This energy can be quantified in discrete units called quanta. System A contains 6 quanta of bond energy, and system B contains 2 quanta, so A has higher thermal energy than B. These quanta can redistribute among the bonds in different arrangements, but the total energy remains constant (6 quanta in A and 2 quanta in B). The various configurations that these quanta can adopt are called microstates.
A microstate represents one possible microscopic arrangement. For example, when tossing 2 pennies, there are 4 possible microstates (HH, TH, HT, TT), each with a 25% probability. These 4 microstates correspond to 3 macrostates (2 heads, 2 tails, or 1 of each). The macrostate “1 of each” has 2 microstates (TH and HT), making it twice as likely as either “2 heads” or “2 tails.” Another example involves poker hands. In a deck of 52 cards, there are 52! (approximately 8.06×10⁶⁷) possible arrangements of the full deck. For 5-card hands, there are approximately 2.6 million possible distinct hands (microstates). Of those, only 40 are straight flushes, while over 1.3 million are “junk” hands (high-card hands containing no pair or better). Because any specific 5-card combination is equally likely to be dealt, but the junk category contains far more microstates than the straight flush category, you are approximately 32,500 times more likely to be dealt a junk hand than a straight flush. This is entropy at work.
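For readers who want to verify these counts, here is a minimal Python sketch (our illustration, not from the original text) that enumerates the penny microstates and reproduces the standard poker combinatorics quoted above.

```python
from itertools import product
from math import comb, factorial

# Two pennies: enumerate all microstates and group them into macrostates.
microstates = list(product("HT", repeat=2))
print(len(microstates))                      # 4 microstates
macrostates = {}
for m in microstates:
    macrostates[m.count("H")] = macrostates.get(m.count("H"), 0) + 1
print(macrostates)                           # {2: 1, 1: 2, 0: 1}

# Poker: orderings of a full deck and 5-card hands.
print(f"{factorial(52):.2e}")                # ~8.07e67 deck orderings
print(comb(52, 5))                           # 2,598,960 distinct hands
straight_flushes = 40                        # includes the 4 royal flushes
high_card_hands = 1_302_540                  # no pair or better ("junk")
print(high_card_hands / straight_flushes)    # ~32,563
```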
Returning to our molecular example, when systems A and B are combined to create system C, the total becomes 8 quanta distributed across 12 bonds, creating many more potential microstates. Which microstates are most likely to occur? The microstates that create the most balanced distribution of energy are the most likely because there are more ways to achieve a balanced distribution than an imbalanced one. This is analogous to releasing air from a balloon. The high-pressure air inside the balloon will exit into the lower-pressure environment outside because there are vastly more microstates corresponding to evenly distributed air molecules than to concentrated ones.
Energy naturally balances itself because more microstates exist when energy is evenly distributed. Consider tossing 10 coins. There is only 1 way to get all 10 heads (HHHHHHHHHH) and only 1 way to get all 10 tails (TTTTTTTTTT). However, there are 252 different ways to get 5 heads and 5 tails (the balanced outcome). This makes the balanced result 252 times more likely than either extreme. Even getting 6 heads and 4 tails has 210 possible arrangements, while 7 heads and 3 tails has 120 arrangements. The closer to balanced, the more ways it can occur. With just 2 coins, this disparity isn’t obvious (2 ways to get mixed results vs. 1 way each for all heads or all tails), but with more elements, the pattern becomes unmistakable. This is why when you flip 100 coins, you’ll almost never get all heads or all tails, but you’ll consistently get results near 50/50. Another analogy involves rain falling into containers. If you place 10 red buckets and 1 green bucket outside during a rainstorm, more total water will accumulate in the red buckets simply because there are more of them.
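A few lines of Python make the coin arithmetic concrete; the counts are plain binomial coefficients, and the 100-coin figure below follows from the same math.

```python
from math import comb

# Microstates per macrostate for 10 coins (k heads out of 10).
for k in (10, 7, 6, 5):
    print(k, "heads:", comb(10, k))   # 1, 120, 210, 252

# With 100 coins, results cluster tightly near 50/50:
total = 2 ** 100
near_balanced = sum(comb(100, k) for k in range(45, 56))
print(near_balanced / total)          # ~0.73 of all tosses land within 45-55 heads
```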
This example focuses on energy in chemical bonds, but energy can be distributed in other ways, such as the electromagnetic radiation filling the Universe, which we observe as cosmic microwave background radiation. This energy doesn’t disappear. It redistributes and persists because the Universe is a closed system with nothing external to receive the energy.
The mathematical formula for entropy looks deceptively simple, but like E=mc², its applications can become complex. The change in entropy (ΔS) equals the change in heat (ΔQ) divided by temperature (T), expressed as ΔS=ΔQ/T. The statistical entropy (S) equals the natural logarithm of the number of microstates (Ω), expressed as S=ln(Ω). In the 10-coin example where we get 5 heads and 5 tails, there are 252 microstates, so ln(252)=5.529. For more complex systems, such as our hypothetical molecular example with 15,876 possible microstates, ln(15,876)=9.672. This use of natural logarithms to describe probability distributions is not unique to entropy. As we saw in the previous chapter with Benford’s Law, natural logarithms also determine the probability of digits appearing first in naturally occurring numbers. In both cases, the logarithmic relationship reflects how nature distributes possibilities across different scales, making the natural logarithm a fundamental pattern in describing the distribution of energy, numbers, and probability itself.
Why do logarithms appear in both entropy and Benford’s Law? Because both describe systems that grow or combine multiplicatively rather than additively. When two thermodynamic systems combine, their microstates multiply (System A with Ω₁ states plus System B with Ω₂ states creates Ω₁ × Ω₂ total states). Yet entropy must be additive (total entropy equals the sum of individual entropies) for the mathematics to work. Logarithms solve this because ln(Ω₁ × Ω₂) = ln(Ω₁) + ln(Ω₂). Similarly, natural processes such as population growth, compound interest, and physical measurements grow multiplicatively across scales (1-10, 10-100, 100-1000), which logarithms convert into equal intervals. This reveals a deeper principle that configurational space, whether measuring how energy can arrange itself or how numbers distribute themselves in nature, follows the same mathematical pattern. Heat transfer represents energy spreading across available microstates, and the more microstates available, the higher the probability of that distribution occurring. The logarithmic pattern emerges because possibilities multiply as systems scale up, making the natural logarithm the fundamental language for describing how nature distributes energy, probability, and structure across different scales of organization. If nature operates logarithmically, then from nature’s intrinsic perspective, multiplication and addition are equivalent operations. What we perceive as exponential growth or multiplicative combination, nature experiences as linear, additive steps. This is why entropy is defined as S = ln(Ω) rather than simply S = Ω. The logarithm transforms our multiplicative mathematics into the additive form that reflects how nature actually organizes and distributes possibilities.
However, these dimensionless numbers become physically meaningful when related to actual energy scales. In thermodynamics, the statistical entropy is multiplied by the Boltzmann Constant (k or k_B), which equals approximately 1.38×10⁻²³ joules per kelvin. This constant bridges the microscopic world of individual particles with the macroscopic world of measurable temperature and energy. The Boltzmann constant would have different values or interpretations in other domains, such as economics or information theory. In conceptual, philosophical, metaphysical, or thought experiment realms, defining “real” measurable values becomes challenging, not because energy doesn’t exist in these domains, but because the forms it takes may be unknown to us, making measurement difficult or context-dependent.
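The entropy figures quoted above can be checked directly. The sketch below computes the dimensionless entropies, verifies the logarithmic additivity discussed earlier, and attaches physical units via the Boltzmann constant.

```python
from math import comb, log

# Dimensionless statistical entropy S = ln(Omega).
print(log(comb(10, 5)))    # ln(252)    ~ 5.529
print(log(15_876))         # ln(15,876) ~ 9.672

# Microstates multiply when systems combine, so entropies add:
w1, w2 = 252, 15_876
assert abs(log(w1 * w2) - (log(w1) + log(w2))) < 1e-12

# Thermodynamic entropy gains units through the Boltzmann constant.
k_B = 1.380649e-23               # joules per kelvin (exact in SI since 2019)
print(k_B * log(15_876))         # ~1.34e-22 J/K
```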
What system has the fewest microstates? A system with minimal thermal energy, approaching absolute zero temperature (0 Kelvin). Reaching absolute zero is thermodynamically impossible according to the Third Law of Thermodynamics, but researchers have achieved temperatures as low as 38 picokelvin (3.8×10⁻¹¹ K) in laboratory settings. At the opposite extreme, what system has the most microstates? The Universe itself, which appears to contain a finite (though enormous) amount of energy distributed across all possible configurations.
To summarize, entropy measures the degree to which energy has distributed itself within a system. Higher entropy corresponds to greater energy dispersion, more uniform distribution, and lower available pressure or potential for work. Lower entropy corresponds to less dispersion, greater concentration or imbalance, and higher pressure or potential for work. These relationships manifest differently across various contexts, environments, and conditions.
Continuing
The preceding section provided the standard thermodynamic explanation of entropy, which most people encounter in physics courses. Here we will use the word more broadly to reflect a more general conceptual framework that applies beyond thermodynamics. In this expanded view, entropy can be understood as measuring the number of available distribution pathways relative to the resources being distributed, expressed as the ratio (distribution pathways)/(resource units). An abundance of resources constrained to very few distribution pathways represents very low entropy. For example, 1 pathway per 1,000,000 units (1/1,000,000 = 0.000001) indicates highly constrained, low-entropy distribution. Conversely, many distribution pathways available for limited resources represents very high entropy. For example, 1,000,000 pathways per 1 unit (1,000,000/1) indicates highly dispersed, high-entropy distribution. This concept parallels how entropy appears in information theory (measuring uncertainty or message possibilities), image compression (measuring data randomness), and linguistic analysis (measuring predictability), where entropy always describes the degree of constraint versus freedom in how something can be arranged or distributed.
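As a toy illustration only (the function name and scaling here are our own, not a standard thermodynamic quantity), the ratio described above reads like this in Python:

```python
# Generalized "entropy" as (distribution pathways) / (resource units).
# An illustrative sketch of the ratio described above.
def pathway_ratio(pathways: float, units: float) -> float:
    return pathways / units

print(pathway_ratio(1, 1_000_000))   # 1e-06: constrained, low entropy
print(pathway_ratio(1_000_000, 1))   # 1e+06: dispersed, high entropy
```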
Within this broader conceptual framework, consider water flow at the macro level, focusing on available pathways rather than molecular behavior. The images to the right illustrate this concept perfectly. A large river with only one possible path, such as the Colorado River flowing through the Grand Canyon (left image), represents a low-entropy system because the water has limited distribution options. The canyon walls constrain the flow to a single channel. In stark contrast, a trickling stream that empties into a swamp or wetland (right image) represents a high-entropy system because the water can distribute through countless channels, spreading in multiple directions with many possible pathways. This generalized entropy concept captures what all entropy measures share in common: the relationship between what exists and the ways it can be distributed or arranged.
This reveals a fundamental principle about entropy and work potential. Low entropy systems possess significantly higher potential for doing work precisely because they are constrained. The concentrated, channeled river in the Grand Canyon can power turbines, carve rock, and transport massive amounts of sediment. Its constraint gives it power. The dispersed water in the swamp, despite potentially containing the same total volume, has minimal capacity for work because its energy has already distributed across countless pathways. When we describe something as having low entropy, we are implicitly stating that it possesses high potential for expansion, interaction, and transformation. The constraint itself represents stored potential. A compressed spring, a charged battery, water behind a dam, or concentrated wealth all share this quality. They are systems held in constrained states with limited distribution pathways, which means they retain the capacity to expand, interact with their environment, and perform work when those constraints are released or when pathways open. Conversely, high entropy systems have already exercised their potential for expansion and interaction. The energy or resources have distributed themselves across available pathways, leaving minimal potential for further work or transformation.
We all understand what “work” means colloquially when we say “That was a lot of work,” but if someone asked “How many joules did you expend?” not knowing the answer doesn’t mean we lack understanding of the concept. Moreover, even if we knew the answer, the numerical value would have no bearing on the subjective difficulty of that work. Listening to a relative’s tedious poetry reading could require far more “work” than building a stone wall around a garden if we have limited cognitive resources for enduring poor poetry but abundant physical resources for construction, even though the latter requires approximately 770% more joules per hour.
When we speak of systems, we refer to any process or entity that has identifiable boundaries. In thermodynamics, a system is simply any matter around which a boundary can be drawn, so a rock constitutes a system, as does a planet or a galaxy. Poker is a system because it has conceptual boundaries defined by the rules of the game and the cards used to play. Cars, living organisms, and computers are systems composed of smaller subsystems such as transmissions, organs, and hard drives. Even a drawing on paper can be considered a system because it has boundaries, and within those boundaries exists something that can be described in terms of energy.
The amount of entropy a system possesses is relative to that system, which we call local entropy. Because every system exists within a larger system, the larger system’s local entropy functions as universal entropy relative to the embedded system.
This relationship explains why the common argument that evolution breaks the 2nd law of thermodynamics is short-sighted.
As a reminder, the 2nd law of thermodynamics states that in any closed system, total entropy never decreases. It either increases or, in ideal cases, stays the same. This means that in any closed system, energy naturally spreads out and becomes less available to do useful work. Useful energy is constantly being degraded into diffuse, unusable energy, and this process is irreversible. Because energy always becomes less available for work and never spontaneously becomes more available, the universe has a built-in direction of time. The future is the direction of diminishing usable energy. In short, the 2nd law says that the universe is slowly running down. Energy remains conserved, but its ability to produce change is constantly being lost over time.
The argument claims that if entropy always leads to disorder, then life could not continually evolve into more ordered forms while entropy is always increasing. Therefore, either the theory of evolution is wrong, or the 2nd law of thermodynamics is wrong. The confusion arises because the 2nd law only applies to isolated systems. Obviously, if a system receives external energy input, its entropy can be lowered. Consider a poker deck analogy. If a poker deck were supplemented by 6 additional cards at every shuffle, then the 2nd law would no longer apply to the closed system of poker. In a normal closed deck, shuffling increases entropy until reaching maximum randomness (approximately 8×10⁶⁷ possible arrangements of 52 cards). However, adding 6 cards at each shuffle does two things. First, it increases the total configurational space (58 cards have far more possible arrangements than 52). Second, and more importantly, it imposes order by introducing redundancy. The deck becomes less diverse with each addition (7 Jacks of Hearts, then 13, then 19), representing externally imposed organization. The deck’s entropy locally decreases despite the shuffling because energy and structure are being added from outside the system, just as the Sun’s energy allows Earth to maintain and increase biological organization. We can easily demonstrate this principle with a simple capacitor that, when external energy is applied to it, collects positive ions on one side and negative ions on the other. This represents a virtual impossibility in an isolated system where the 2nd law applies. Likewise, the Earth is not an isolated system because it receives energy from the Sun at a rate of approximately 5×10²³ HP, which equals 3.72×10²⁶ watts (joules per second). That represents an enormous influx of energy from the Sun every second, bringing approximately 500,000 times more energy than is consumed by all human activity.
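One way to make the growing-deck example concrete is to count distinct orderings with multiset permutations (n! divided by the factorial of each repeated card’s count). The sketch below is our own illustration of the two effects described above: the total configurational space grows, while redundancy collapses a growing share of once-distinguishable arrangements.

```python
from math import factorial

# Distinct orderings of a deck containing one card repeated 'dup' times.
def distinct_orderings(n_cards: int, dup: int) -> float:
    return factorial(n_cards) / factorial(dup)

print(f"{factorial(52):.2e}")                # ~8.07e67, all 52 cards unique
print(f"{distinct_orderings(58, 7):.2e}")    # ~4.66e74, 7 identical Jacks
print(f"{distinct_orderings(64, 13):.2e}")   # ~2.04e79, 13 identical Jacks
# The space still grows, but each batch of duplicates collapses
# factorial(dup) once-distinct orderings into one.
```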
Therefore, we can say that the 2nd law applies to the local entropy of the closed system of the entire Universe, which represents the ultimate universal entropy. However, the Earth cannot be a closed system given that it is energetically open to the Sun (and potentially the rest of the Universe if one accepts the somewhat speculative but intriguing Electric Universe paradigm).
An analogy we will frequently use as a common example of entropy involves a battery. A charged battery has low entropy (high usable energy), and a dead battery has high entropy (no usable energy). A dead battery still contains all the energy it had before. However, that energy is no longer usable because the electrons have moved from the negative terminal to the positive terminal until achieving balance. No more electrons remain on the negative side with the electrochemical potential to move. These electrons are still in the battery and still behave as electrons, but they are “useless” from the perspective of the battery’s intended work function.
Returning to the poker hand example, a junk hand has high entropy because the cards are disordered, or the order is chaotic. Conversely, a straight flush has low entropy because there is order in both suit and value, and there is imbalance because all the cards are close in suit and value, which is very rare. It is not a coincidence that the higher the value of a poker hand, the lower the entropy of that hand.
The 2nd law of Thermodynamics is another way to say that all systems tend to move toward their most stable or lowest energy states because entropy, or the inability to do work, will always increase. Why will it not decrease? Because when left alone, energy, on average, will never spontaneously imbalance itself. The “on average” qualification is important because, technically speaking, there is an extremely small probability, approximately 1 in 10²⁴, that this law can be violated statistically (see generic probability chart on the right). You could drop an ice cube in hot water and there is an infinitesimally small chance the ice will get colder and the hot water will get hotter, though it may be the first and last time that will ever happen in the history of the Universe. The 2nd law is not actually a deterministic law but a statistical observation. There is nothing that says entropy could not be reversed other than the vanishingly small probability of such a reversal. Even so, there can be spontaneous local decreases of entropy,1 not to mention the hypothetical and highly speculative case proposing that the 2nd law exists in reverse somewhere in the Universe,2 thereby keeping the net entropy level of the Universe at zero. It may be easy to dismiss this idea until one considers it was proposed by William James Sidis, the man often considered the most intelligent human to have ever lived (See Appendix K, “William James Sidis”).
We can apply the concept of entropy to any two things that differ and interact with each other. For example, have you ever wondered why one drop of black paint in a can of white paint makes a noticeable difference, but one drop of white paint in a can of black paint makes minimal difference? We could say that white paint has high entropy and black paint has low entropy if we defined the concept of “work” not as kinetic energy but as how much light is reflected or absorbed by a color. If white paint has an entropy value of 1 (absorbs least) and black paint has an entropy value of 10 (absorbs most), we would discover that if we mix 1 part of each into 10 parts of the other, the white color changes by approximately 10%, but the black color changes by only 1%. This application of entropy may work mathematically, but it is also potentially confusing because we are using the same terms and concepts that equate the color black with dispersion, disorder, and balanced energy, while equating white with concentration, pattern, order, and imbalance. This serves as a good example of the importance of context. Entropy can be measured as energy radiation in one context but energy absorption in another.
The universality of the concept of entropy explains why it appears across many domains, from thermodynamics to culture to economics, and in anything else that can have microstates, whether chemical, electrical, physical, spiritual, emotional, or conceptual. For example:
We may be taking liberties with the concept of entropy, but even in the scientific world, the definitions of entropy are so diverse and context-specific that even scientists acknowledge confusion about the term:
“As a consequence of this diversity of uses and concepts [of entropy], we may ask whether the use of the term entropy has any meaning. Is there really something linking this diversity, or is the use of the same term with so many meanings just misleading?”3 ~Annick Lesne, author and researcher at the Institut des Hautes Etudes Scientifiques
For our purposes, we can accept that entropy is not merely a measure of chaos but a dimension of chaos. In this framework, chaos in general can be defined as the “lack of order in form or movement of energy due to the dispersion and balancing of energy.”
For this reason, we define two types of chaos: high-entropy chaos and low-entropy chaos. This parallels the same concept as the chaos of 0 and the chaos of ∞ mentioned in the first chapter, but here the context is material reality rather than pure mathematics.
The image above provides a general diagram of this concept. The three patterns in the center show examples of low-entropy (right), high-entropy (left), and a balance of low and high entropy (center). It is not a coincidence that the center image resembles a plant. As we show later, reality, and living things especially, represents a mixture of order and chaos, high and low entropy combined. This may result from the fact that the evolution of complex systems actually decreases local entropy while the process of growth itself increases entropy, making life a constant negotiation between these two opposite forces.
In summary:
- Everything moves from low to high entropy as a result of dynamic processes that dissipate energy, such as heat in thermodynamics.
- Everything exists on the spectrum of low→high entropy.
- Each effect results from a cause, which becomes a new cause for a new effect. This resembles the cause/effect chain of microstates within microstates.
Microstates within microstates within microstates. This is equivalent to the self-similar fractal pattern of the rainbow bush shown to the right, but unlike that static pattern, the entropic systems branching off one another are chaotic, dynamic systems. This creates not simply chaos, but an endless chain of chaotic systems within chaotic systems. Does every system with microstates qualify as chaotic? To some degree, yes, but where predictability approaches 100%, it does not appear chaotic. Perhaps we can say it is only minimally chaotic. However, even the most predictable event exists at the end of a long chain of previously unpredictable events. Consider all the events that had to occur from the first cause to make a predictable coin toss possible. Planets had to form, life had to evolve, consciousness and meta-consciousness had to emerge and be “discovered.” The coin toss itself may not be very chaotic, but the countless events leading up to it certainly were, which is why everything is part of and built upon the chaotic system that constitutes this reality.
If everything is chaotic, where is the order? Ironically, order is a subset of chaos, and everything is unpredictable to some degree, with that “degree” measured in probabilities.4 We live our lives as if the Sun rising tomorrow is 100% guaranteed, but it is not. There may be only a 1 in 10⁹⁹ chance it will not rise, but it remains a probabilistic value. The probability of sunrise is so high because the solar system has reached a state of equilibrium, a state of high entropy. Not its highest possible state of entropy (when that occurs, everything will be reduced to cosmic dust and black holes), but sufficient equilibrium to make it sustainable for now. There simply are not any more microstates that are more likely than the one we currently inhabit. Suppose this equilibrium is disturbed by a massive foreign object passing through our space, a change in the temperature of the Sun, or some other change large enough to create more available microstates than currently exist. In that case, the chaos of low-entropy/high-energy will ensue and rearrange everything. Order emerges when some degree of equilibrium of the parent system is reached. As the parent system slowly increases its entropy and loses energy, the more energetic, more chaotic, low-entropy child system becomes more stable until it also reaches equilibrium as it loses energy. Life on Earth could not begin until the chaos of Earth’s early days settled down. Earth could not form until the Sun finished forming in the center of a nebula that was formed by gravity’s effect on dust and gas, and so on, all the way back to the Big Bang. No single thing, system, or order exists that is not dependent on the equilibrium of the system it is built upon.
Does this mean that one day the probability of getting heads in a coin flip will not be 50%? In one sense, yes, because when all matter and energy is equally dispersed, there will be no coins, humans, or planets. While the archetype of that system might exist, the reality of it will not. This follows the traditional Heat Death theory of how the Universe ends. There are other theories, but in any case, the stable systems will destabilize, and no one will exist to toss coins that do not exist. The coin toss and the coin tossers both exist because the system of our current reality, the state of reality at this moment in the life cycle of the Universe, allows for them to exist. Imagine tossing a coin during the Big Bang or inside a black hole. The idea is absurd because all the laws that allow for tossing and tossers, while they still exist, are uninstantiated and exist only as archetypes.
In our current system, a coin toss is not a random event but one of deterministic chaos. The reason it is unpredictable is because too many micro variables affect the outcome: muscle movement, atmosphere, relative starting position, spin axis, initial velocity, imperfections in the coin, whether it is caught or allowed to bounce when it lands. A common coin flip, where the coin is not allowed to bounce when it lands, constitutes a 12-dimensional system5 in that it requires 12 dimensions to define, just as 3 dimensions are required to describe a cube or 4 dimensions to describe a cube in time. The archetype, however, is only 1-dimensional with 2 possible states, given that the result needs only 1 of 2 values to perfectly describe it. This means that while there are only 2 possible outcomes (heads or tails), with 12 dimensions, there are many millions of ways to arrive at those outcomes. If we could flip a perfectly vertically balanced coin (a coin flipped from a horizontal start has a 51% chance of landing in the same orientation it started in, because an even number of half-turns is slightly more likely than an odd number) in a vacuum with precise pressure on a precise location, and do it identically each time, the result could easily be predicted by Newton’s laws of motion and angular momentum, at least in theory. Creating such a system is probably physically impossible, thanks to chaos. Because the system of a coin toss is a subsystem of countless parent subsystems (not only those of time and space, but those of the tosser, the environment, the coin itself), all of which are descendants from the initial chaos of creation, any change to any of those parent systems will change the coin toss system.
The laws we live by are stable enough for now, but they may be changing as you read this. According to our current understanding, even the most fundamental laws, like Newton’s laws of Motion, will completely break down as the Universe approaches statistical equilibrium, which means there will be equal probability that energy will move in any direction in a reality where all matter, decomposed into atomic dust, just wiggles around like 10⁸⁶ lost atomic-sized particles in space as they slowly wait to get absorbed into black holes. The speed of light, Planck’s Constant, and every other constant may be changing due to the expansion of the Universe, quantum fluctuations, increasing dark matter, and countless other phenomena we have not yet discovered. Many scientists have theorized this, and there is some evidence to support the idea. The problem is, on a cosmic time scale (using the theory that the Universe will come to an end in approximately 4 billion years6), if the Universe lasted 100 years, we humans have only been measuring things for the equivalent of 43 millionths of a second (5,000 years), so we can only speculate about the other 99.9999995% of existence.
Entropy tells us how close a system is to equilibrium, where perfect equilibrium equals perfect disorder (because all the parts are spread out equally). Random dots on a page represent perfect equilibrium and therefore high entropy. If those dots create any sort of pattern, then this is not an equal distribution of dots and therefore has lower entropy, which means there must be more energy available for “work.” We will not delve into exactly what “work” means with regard to information here, but as proof of this principle, below are the results of an entropy analysis7 of 4 images, each with 64,000 black dots on a white background. You can easily see that more order equals less entropy.
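The footnoted analysis is not reproduced here, but a common way to estimate this kind of image entropy is Shannon entropy computed over small pixel blocks. The sketch below is our own construction with illustrative patterns: random dots score near the 4-bit maximum, while an ordered striped pattern scores far lower.

```python
import random
from collections import Counter
from math import log2

# Shannon entropy (bits) of the distribution of 2x2 pixel blocks.
def block_entropy(img):
    n = len(img)
    blocks = Counter(
        (img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
        for y in range(0, n - 1, 2)
        for x in range(0, n - 1, 2)
    )
    total = sum(blocks.values())
    return -sum(c / total * log2(c / total) for c in blocks.values())

n = 256
random.seed(1)
noise = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
stripes = [[1 if x % 8 < 4 else 0 for x in range(n)] for _ in range(n)]

print(block_entropy(noise))     # ~4.0 bits: disorder, high entropy
print(block_entropy(stripes))   # 1.0 bit: strong order, low entropy
```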
So, where does chaos fit into this?
Earlier, we defined two kinds of chaos: high-entropy chaos and low-entropy chaos. We also established that the most efficient and stable expressions of energy exist in the middle region between these poles. This naturally divides the spectrum into two parts: ascending order and descending order. The image on the right illustrates this concept using our previous model (with the X-axis flipped to match the traditional left → right format). Because this represents a spectrum of chaos, we can describe the ascending side that shows positive growth of order as positive chaos, and the descending side, which shows negative growth of order, as negative chaos.
It would seem more symmetrical and intuitive if the +chaos and -chaos moved in opposite directions. However, that would mean the -chaos would be going backward in time, which appears to violate the 2nd law of Thermodynamics. Yet according to William James Sidis, who spoke eight languages, wrote four academic books by age eight, and was reading the New York Times at 18 months, this apparent impossibility deserves consideration. His estimated IQ of 250-300 places him in extraordinary territory (though ironically, he considered intelligence testing “silly, pedantic, and grossly misleading”). To put this in perspective, in the entire world today, there is a statistical chance that one person on the planet has an IQ of 192. An IQ of 200 is a 1:76,000,000,000 rarity, making Sidis, statistically speaking, potentially the most intelligent human to have ever existed (See Appendix K, “William James Sidis”, for more).
According to Sidis’ highly speculative hypothesis, “The Animate and the Inanimate”8, published in 1920 when he was 20 years old, entropy could theoretically be reversed. More precisely, there might exist parts of the Universe that run opposite to ours, like a mirror image of this reality. These “parts” are interspersed with the “parts” that run “forward”, similar to a checkerboard, but we could never observe them or travel to them because they exist in a different space-time. This sounds like an early concept of the modern “many worlds” hypothesis of quantum mechanics and shares conceptual ground with the modern idea that before the Big Bang, the Universe was a reflection of what it is today9. In this reversed Universe, time does not run backward in the way we might initially think. Its anti-time or negative-time properties exist in a Universe of anti-matter, so everything appears the same to those living in anti-world as long as all matter, energy, and time were equally inverted. This is called charge, parity, and time (CPT) reversal symmetry and is well understood as The CPT Theorem. If we woke up tomorrow morning in the anti-world version of our current reality, we would not notice anything different, as long as the CPT was inverted. Applying Sidis’ ideas to our model might look something like the right image, which represents just two “squares” on the checkerboard of reversing realities. This image would also apply to the modern CPT ideas, where the 0 low-entropy (green) dot would represent the Big Bang.
While I cannot comment definitively on the validity of Sidis’ ideas, it is noteworthy that he predicted black holes, the expanding Universe, and the Big Bang using only the 2nd law of Thermodynamics, and years before the discovery of the expanding Universe and the Big Bang. His ideas warrant consideration. Regardless of their validity, what matters here are the patterns that these ideas imply, which are patterns fundamental to all of existence, from ancient oracles to electricity to DNA, as we will see.
This process of emergence and decay of order applies to any system. Therefore, it applies not just to the life of the Universe, but to the life of all systems created in the Universe, and the systems that those systems create, making reality a dynamic fractal of countless embedded systems following the pattern low-entropy chaos → order → high-entropy chaos.
In this manner, the emergence of an apple follows the same rules as the emergence of the Universe, as does the tree, weather, planet, and solar system that made the existence of that apple possible. The energy and the archetypes of these rules instantiate within the context and scope of each creation, which we recognize as the laws of creation for each context and scope. The example (above image) is a very simple model, but this simplicity does not diminish its validity.
“Only simple qualitative arguments can reveal the fundamental [laws].” ~Philippe Nozières, award-winning quantum physicist
“Truth is recognized as such by its simplicity and harmony.” ~Isaac Newton
Because this energy dispersion results from energy always seeking balance, we can say that entropy is a measure of the state of balance. Higher entropy equals more balance or equilibrium of energy.
If this is true, then it is also true that the following statement holds:
This principle applies to all forms of work, though it manifests in two distinct ways. Spontaneous work is work that creates balance. It occurs when systems naturally move from imbalance toward equilibrium. Water flows downhill, heat dissipates from hot to cold, batteries discharge, and compressed springs expand. In each case, the work being performed is the process of balancing the existing imbalance. The work continues until balance is achieved, at which point the capacity for further work is exhausted. Imposed work, in contrast, is work that deliberately creates imbalance. When humans build a dam, they perform work to lift water to a higher elevation, creating a gravitational imbalance where none existed. When charging a battery, work creates charge separation, establishing an electrical imbalance. When compressing a spring, work creates a mechanical imbalance. These newly created imbalances then possess the capacity to perform spontaneous work later. The distinction is clear: spontaneous work moves systems toward balance and increases entropy, while imposed work moves systems away from balance and decreases local entropy. However, imposed work always requires drawing energy from an existing imbalance elsewhere, which is why the total entropy of the complete system still increases, consistent with the 2nd law of thermodynamics discussed earlier.
“Entropy” is a noun, as it only describes a system’s state. What is the verb for the actions that led to that state? If entropy is a measure of balance and work is the act of balancing, then “work” is the verb form of entropy. More work equals more balance equals more entropy.
Because we will be discussing balance throughout this book, we should clarify what we mean when we use the word “balance.” “Entropy” and “work” are different words describing different yet interdependent concepts, but the single word “balance” is synonymous with both concepts because “balance” can be both a verb and a noun.
The word itself comes from the Latin bi+lanx, meaning “two pans”, as in the classic hanging scales. The scales show when something is balanced (noun) after the two sides were balanced (verb). When encountering the word in a sentence, it will be up to the reader to determine which definition is more contextually appropriate.
Before we leave the subject of entropy, there is another concept to keep in mind that will come up again later.
Information is entropic. This concept was demonstrated in a thought experiment by James Clerk Maxwell in 1867 that appeared to violate the 2nd law of Thermodynamics. The gist of this hypothetical was that if a magical being that generated no heat or friction could calculate the trajectories of every atom in two chambers (one filled with hot gas and one filled with cold gas), it could selectively allow certain atoms to pass through a door that it could open or close. This would make the hot side hotter and the cold side colder, representing a massive violation of the 2nd law. Lord Kelvin dubbed this imaginary being Maxwell’s Demon. The problem puzzled scientists for a century because the demon appeared to decrease entropy without any energy cost, seemingly breaking the fundamental law. One hundred years later, the solution was found10. The energy needed for this demon to collect and store all the information on all the atoms would generate enough entropy to compensate for the seemingly impossible result. Hence, information has entropy, or more correctly, stored information has entropy. That storage could be a hard drive, scribbled notes, the brain, or whatever matter the information is stored in or on. It was also shown that erasing that information raises the entropy even more (eliminating any attempt to trick entropy by storing only a minimum amount of new data and constantly erasing the old data).
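The modern quantification of this resolution is usually credited to Landauer’s principle: erasing one bit of stored information must dissipate at least kT·ln 2 of heat. A quick back-of-envelope check at room temperature:

```python
from math import log

# Landauer's bound: minimum heat dissipated per erased bit is k_B * T * ln(2).
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, kelvin
print(k_B * T * log(2))   # ~2.87e-21 joules per erased bit
```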
A concept central to the tholonic model builds upon Sidis’ notion that entropy might move in reverse in certain parts of reality. The tholonic model agrees with this principle but proposes a different answer to the question of “where” such reversal occurs.
We understand electromagnetic energy as the spectrum of non-material energy expressed through waves, frequencies, and photons. On the material scale, energy manifests as matter through electrons, protons, and neutrons, which combine to form atoms. This material spectrum is represented by the periodic table of elements. Matter constitutes a much shorter and denser spectrum than the electromagnetic. We describe it as “denser” because matter is energy, according to m=E/c².
Thermodynamic entropy is a material phenomenon fundamentally tied to systems that experience the flow of time.
From the reference frame of a photon traveling at light speed, proper time equals zero and spatial distance contracts to zero. Photons in transit exist in a timeless state outside the conventional framework of spacetime entropy. Entropy becomes meaningful only when radiation interacts with material systems that experience proper time, where energy exchanges occur and statistical mechanics applies. Therefore, from the perspective of the photon’s own reference frame in which time does not pass, radiation has no entropy. Light thus represents the purest form of energy known to science that exists at the boundary between the timeless and time-bound realms.
Does there exist yet another form of energy even purer than light, operating in a realm where entropy not merely remains constant but actively decreases? The tholonic model proposes there is. This energy is referred to as tholonic energy, which extends far beyond the electromagnetic spectrum and can be understood as coherent thought. Just as electromagnetic radiation represents a step down from this primordial energy into the realm of spacetime, and matter represents a further step down into dense, slow-moving forms (energy divided by c²), so too does tholonic energy represent the source from which these lower forms emerge. Heat and electricity, the familiar energies that drive material processes, are themselves stepped-down manifestations of this more fundamental energy.
This tholonic energy exists in another “part” of the Universe, not as a location but as another level of reality. This corresponds to the world that Plato described in his Theory of Ideas, where ideas possess their own existence independent of any material instantiation. In this realm, ideas are objective realities rather than subjective constructs. More provocatively, as will be explored in detail later, these archetypal concepts not only possess independent existence but also embody their own form of consciousness. They are not inert templates waiting to be discovered but active principles with their own awareness and agency.
What distinguishes this realm is not merely the absence of entropy but its reversal. Within the world of thoughts, concepts, and archetypes, entropy decreases. How can this be? In material systems, energy disperses and patterns degrade over time. In the realm of tholonic energy, the opposite occurs. Sustainable ideas that survive naturally evolve, growing more organized, more coherent, and more powerful over time. A mathematical theorem, once proven, becomes more elegant through successive refinements. A philosophical insight deepens through contemplation and discourse. A scientific theory becomes more unified and explanatory as it integrates more phenomena. These are not metaphors for organization but instances of actual entropy decrease. Information does not degrade but crystallizes. Patterns do not dissolve but strengthen. Order emerges spontaneously from the interaction of ideas, without requiring an external energy source to combat disorder. This represents a fundamentally different thermodynamic regime, one where the arrow of time points toward greater complexity and coherence rather than toward heat death and equilibrium.
Coherent thought, or the ability to form and refine ideas, represents an instance of this energy as expressed through the context of consciousness, making consciousness a conduit for this energy into material reality. While tholonic energy transcends our classical understanding of energy, it is integrated into our reality as fundamentally as heat and electricity. Indeed, it is the source from which these lower forms step down, and reality itself depends on it to exist.
The interconnectivity of things, whether through direct contact or through radiations such as heat and light, allows for the movement of energy. This movement creates order. Therefore, the principle that everything is connected is not an abstract philosophical concept but a physical necessity for order and, consequently, for life.
At first, the notion that perfect balance results in perfect chaos seems contradictory or at least counterintuitive. This perception likely arises because in our experience, perfect balance exists only in small systems, only temporarily, and only under carefully controlled conditions. When perfect balance does exist, we tend to ignore it because it produces no observable effects. There is no movement and no available energy. Outside of laboratory settings, something that is perfectly balanced serves no useful function. However, if we extend the idea of balance beyond the classical concepts of movement, we can better understand this relationship.
All of existence consists of systems within systems, from galaxies to plants to pebbles. This principle applies across all contexts, including economics. When we pay three dollars for a pound of carrots, we balance the difference between a particular instance of supply (that specific bag of carrots) and demand (for a particular use). Once that transaction is complete, perfect balance exists within the very small system of that transaction. That particular instance, being perfectly balanced, no longer has the imbalance that drove it. The potential for that specific exchange is exhausted. It becomes a completed system and returns to the undifferentiated (chaotic) state from which it emerged. That instance was one of many in larger systems that had to exist for it to instantiate in the first place, including markets, sales, distribution, supply chains, farming, and all the systems upon which they depend.
Ultimately, these systems naturally move toward balancing their constituent energies, whether those energies manifest as thermal gradients, electrical potentials, economic disparities, or informational asymmetries. The more pathways available within a system, the more opportunities exist for imbalances to resolve themselves. This means that balance can be achieved more efficiently when more pathways exist. Because systems naturally evolve toward more efficient configurations, conditions emerge that generate more pathways for interaction and exchange. How do these pathway-rich conditions emerge? Through order. Organized structures create the pathways through which balancing can occur. Consider the tremendous amount of order on countless levels that must be maintained for a consumer to purchase a three-dollar bag of carrots from a farm 3,000 miles away. Without the organized systems of markets, transportation, communication, and currency, such a balancing transaction between distant supply and demand would be impossible. Order facilitates balance by creating the structured pathways through which imbalances can resolve themselves, ultimately arriving at a perfectly balanced state of high-entropy chaos.
Economics, and everything else that humans create (whether conceptually or physically), represents as natural a form of evolution as the transformation of lifeless organic molecules into life, and for precisely the same reason.
Approximately 3.8 billion years ago, when carbon-based organic molecules existed in abundance but before life emerged, these molecular aggregates absorbed substantial energy from the Sun and from the Earth’s molten core through hydrothermal vents. According to the laws of thermodynamics, these aggregates needed to dissipate the same amount of energy they absorbed, but simple molecular aggregates possess no specialized energy-dissipation mechanisms. Energy moving through any structure will cause changes that allow for more efficient energy transfer. Over time, these aggregates self-organized to allow better dissipation of excess energy. Patterns of energy began to form within these structures, and these patterns became what we recognize as life. This may explain why the earliest forms of life are believed to be microorganisms that formed on hydrothermal vents on the ocean floor, which emit water at temperatures up to 500°C (930°F). That heat provided their energy source and delivered significantly more energy than the Sun (and continues to contribute substantially to ocean warming and methane production). The earliest direct evidence of life consists of 3.5-billion-year-old microfossils found on hydrothermal vents in the Pilbara region of Western Australia.
According to Jeremy England, an assistant professor at the Massachusetts Institute of Technology who derived a mathematical formula11 explaining how energy movement can transform carbon-based molecules into life forms, the process follows predictable thermodynamic principles.
“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.” ~Jeremy England
This principle applies not only to organic molecules but also to inorganic systems, such as self-assembling crystals and quasicrystals.
Wherever energy exists in the material realm, there is some degree of imbalance seeking equilibrium. We say that things are “balanced” when the net energy difference between any two states is zero. Before reaching zero difference, there is balancing, which is the movement of energy, and this movement creates order.
In the first moment the Universe came into being, the total energy of the Universe existed within an infinitesimally small volume (described by different cosmological models as either a soccer-ball-sized region or a zero-dimensional point of singularity) and was released into a void of nothingness. This maximum state of imbalance initiated the balancing process that constitutes reality. At the end of the Universe, there will be no further need for energy to balance itself because everything will have achieved balance. The Universe begins with a verb and ends with a noun.
If this trajectory holds true for the material realm, then by symmetry, the opposite must hold true for the tholonic realm. In the world of archetypes, in Plato’s realm of ideas, where entropy decreases rather than increases, the arrow points in the opposite direction. If the material universe begins with maximum concentration and ends with maximum dispersion, then the tholonic realm must begin with maximum dispersion and move toward maximum concentration. The endpoint of this process would be ultimate order, the ultimate pattern of understanding, essentially infinite knowledge perfectly unified. Just as the material universe moves inexorably toward a state where all energy is perfectly balanced and evenly distributed (heat death), the tholonic realm moves inexorably toward a state where all knowledge is perfectly organized and completely integrated. This represents the omega point of consciousness, where all possible truths converge into a single, coherent, all-encompassing understanding.
Newton’s 1st law of motion, the law of inertia, states that an object will remain at rest or move at a constant speed in a straight line unless acted upon by an unbalanced force. This fundamental principle explains why objects do not move spontaneously without cause. No object moves unless energy is applied as force. Inertia relates to movement as gravity relates to mass. Gravity and inertia are interrelated forces that affect matter. The gravitational field of an object can be calculated by determining how much energy is required to move it.
These represent just two of the many laws that govern how our reality operates, at least within the scope of reality we typically experience. On quantum and galactic scales, or in states of extremely high or low energy, different laws may apply or familiar laws may manifest differently.
Inertia and entropy together ensure that systems operate at their most efficient levels. What does “operate” mean in this context? It refers to the optimal movement of energy. What does “optimal” mean? It refers to the most efficient pathway to balance energy. Energy moves only when a difference exists between two states. The movement of energy serves a single function, which is minimizing that difference by creating balance between conditions that differ, whether those conditions involve the extremes of somethingness and nothingness or merely a few degrees of temperature. Once balance is achieved, movement ceases. A balanced battery is a battery at peace with itself… and it is also a dead battery.
The optimum condition for energy movement between two states is one where both states achieve maximum expression within the limitations and capacities inherent to each state.
A beautiful example of entropy and inertia can be easily demonstrated with a device called the harmonograph. This fascinating instrument creates an oscillation from an initial push (representing low-entropy chaos and high imbalance) and then traces its path as that initial energy slowly diminishes through inertia and the balancing process, until it stops (representing high-entropy chaos and balance). The path from low-entropy chaos to high-entropy chaos creates the incredibly beautiful and highly ordered pattern visible in the harmonograph's drawing. This perfectly illustrates how order emerges during the transition between the two extremes of chaos.12
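For readers who want to experiment, the harmonograph's motion is easy to simulate. The sketch below is a minimal model in Python (the frequencies, phases, amplitudes, and damping constants are arbitrary choices for illustration, not taken from any particular instrument): each axis is a sum of damped sine waves, the initial amplitudes are the "push", and the exponential decay is the balancing process that eventually brings the pen to rest.

import math

def harmonograph(t, freqs, phases, amps, damps):
    # Each axis is a sum of damped sinusoids: A*sin(2*pi*f*t + p)*exp(-d*t).
    return sum(a * math.sin(2 * math.pi * f * t + p) * math.exp(-d * t)
               for f, p, a, d in zip(freqs, phases, amps, damps))

# Arbitrary illustrative parameters: two pendulums per axis.
x_params = ([2.0, 3.0], [0.0, math.pi / 2], [1.0, 0.8], [0.02, 0.03])
y_params = ([3.0, 2.0], [math.pi / 4, 0.0], [1.0, 0.8], [0.03, 0.02])

# Trace the pen over 50 seconds of simulated time.
points = [(harmonograph(s / 100, *x_params), harmonograph(s / 100, *y_params))
          for s in range(5000)]

print(points[0], points[-1])  # the amplitude visibly decays toward stillness

Plotting the x values against the y values reveals the ordered figure emerging as the initial energy drains away.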
Patterns exist throughout nature, life, physics, and mathematics. It can be useful to think of patterns as rivers etched into the terrain of reality from the first moments of creation, representing paths of least resistance for the movement of energy, whether electrical, physical, conceptual, or emotional. Reality, regardless of scope or how one perceives it, is built upon layers of patterns that sit on top of the chaos from which they emerged. A recurring theme in this book involves recognizing patterns that repeat across various themes, contexts, and scopes. Some of these meta-patterns are clear, such as the Fibonacci sequence and exponential curves, while others remain more hidden and even challenge our understanding of what defines a pattern. An excellent book on this subject is “Patterns in Nature: Why the natural world looks the way it does”.13
The position put forth here is that any two phenomena sharing the same pattern represent at least two instances of one pattern. This differs from the scientific view that the pattern results from the cause. While this makes sense from a bottom-up perspective, from a top-down perspective, it also makes sense that the archetype of a pattern is the cause of the instance (form). Perhaps this is a matter of perspective, or perhaps it reflects the natural order of creation, where the archetypes, ideas, or awareness of something must exist before it can be instantiated.
Patterns can be qualitative as well as quantitative. Consider the example of a Ford Falcon in Chicago and a Ford Falcon in Argentina. These two instances embody the quantitative pattern of the concept "Ford Falcon". They have no direct connection to one another, but some of what is learned about one will apply to the other. Each of these instances carries a cultural pattern as well. The Ford Falcon, an ordinary economy family car popular in both North America and Argentina, carried vastly different cultural meanings in the two places. In the United States, it represented affordable, practical family transportation. In Argentina, that same unremarkable family car became a symbol of terror when used by the secret police during the Dirty War, precisely because its ordinariness made it invisible until it was too late. Here are two instances of the same quantitative pattern that have two very different qualitative patterns.
Nearly everything we see or know has some sort of subjective, cultural, idealized, or assumed quality imposed onto it. It is difficult, if not impossible, to isolate objective patterns from subjective patterns. This represents a real question in the quantum mechanical “many worlds” model where subjective observations determine which of the many possible realities the observers will find themselves in. Even more confounding, tests14 of the famous Wigner’s Friend Paradox15 suggest that objective facts may not exist in the way we assume. Different individual observers, all observing the same objective reality, can observe different objective facts. If this is the case, even science itself cannot fully separate objective patterns from subjective patterns. We think of superstition, ritual, beliefs, and the dogma that often accompanies them as having their roots in mixing the subjective with the objective, but it may turn out that this applies to all of existence as well.
If subjective/qualitative and objective/quantitative patterns are not independent of each other, does this mean science is superstitious and dogmatic? To the degree that human consciousness, which will never ultimately be free of assumptions, axioms, and beliefs, plays a role in the “objective facts” we observe, the answer is yes.
We have seen this historically in the sciences. The Pythagoreans held deeply mystical beliefs about numbers and reportedly treated the discovery of irrational numbers as a philosophical crisis. According to ancient legend, Hippasus, who is credited with discovering or revealing the irrationality of √2, was killed for this heresy, though whether by drowning or some other means remains a matter of myth rather than documented history. Regardless of the legend's veracity, the philosophical discomfort with irrational numbers persisted in Greek mathematics. While Greek mathematicians continued to work with these numbers geometrically, their conceptual framework remained constrained by philosophical assumptions about the nature of number and reality. In 1299, Florence banned the use of Hindu-Arabic numerals in official account books, ostensibly to prevent fraud but reflecting deeper resistance to unfamiliar symbolic systems. The Catholic Church's condemnation of heliocentrism, culminating in Galileo's house arrest in 1633, demonstrated how institutional belief systems could suppress scientific ideas that contradicted prevailing cosmology. In the early 20th century, the geological establishment initially rejected the theory of continental drift proposed by Alfred Wegener, partly because it challenged fundamental assumptions about the Earth's structure. Later in this book, modern examples of similar thinking in the sciences will be presented.
A law or pattern that can be seen in every part of the Universe is that everything oscillates in some manner. Chaotic movement of energy can occur, but oscillation allows for more sustainable energy movement. Why? Because sustainability results from a balance between internal and external fields of force, and energy must move and will always seek stability and balance. Oscillation is more balanced and stable than chaos; rotational oscillation in particular is stabilized by the conservation of angular momentum, familiar to anyone who has ridden a bicycle or spun a gyroscope.
Before we move on
We cannot directly observe energy while it travels between its source and destination. We can only measure it when it interacts with something, such as a detector. Once detected, we can determine properties like amplitude, frequency, and wavelength, but this reveals only its behavior during interaction. We infer that if energy leaves as X frequency and arrives as X frequency, it maintains that frequency during travel. While this inference is supported by consistent experimental results and theoretical models, it remains a model-dependent conclusion. The wave description of electromagnetic radiation predicts outcomes with remarkable accuracy, yet whether this model describes what is "actually" happening or merely provides a useful mathematical framework remains a philosophical question. Wave-particle duality in quantum mechanics reminds us that our descriptions of nature are observer-dependent and context-sensitive. Future theories may employ radically different frameworks that reconceptualize what we currently understand as waves, particles, and energy itself.
Nikola Tesla is often credited with the following observation.
“All things have a frequency and a vibration.”
The tholonic perspective extends this further. Because everything has energy, everything oscillates in some fashion and has some kind of frequency pattern. We phrase it this way to include both chaos and order.
We typically think of light waves and sound waves as classic examples of oscillations, but the heavenly bodies are also oscillating particles on a cosmic scale. When we examine the orbits of planets, stars, and galaxies, they are not simply spinning in relatively two-dimensional planes of orbits but spinning while moving in a direction through space.
People have been fascinated with this obvious commonality across all of creation for some time. Kepler himself was quite interested in the relationship between planetary frequencies and musical frequencies, but the study of planetary and musical relationships extends back to at least the 9th century with Eriugena, an Irish monk, theologian, and Neoplatonist philosopher most famous for his work "The Division of Nature". This work proposes that nature's first primary division was the division between that which is (being or somethingness) and that which is not (nonbeing or nothingness). His work was condemned centuries later as "swarming with worms of heretical perversity". The medieval Church was a tough crowd.
Energy oscillates, and because matter is energy, matter also oscillates. The electrons, protons, and nuclei that constitute all matter are themselves systems of oscillating energy fields. When these various systems combine to form sustainable patterns of oscillations, they create a new system, the atomic structure, which functions as a high-frequency oscillating energy grid. For example, the system of a nucleus oscillates at approximately 10²² Hz. The system of an atom (at 70°F) oscillates at approximately 10¹⁵ Hz. An entire molecule, which is a system composed of systems composed of systems, oscillates at approximately 10⁹ Hz, and so on.16 The key characteristic of what makes a system is the sustainability of the oscillations and the internal forces that hold it together. More significantly, these oscillations and forces determine how it will integrate and interact with other systems in its environment. A real-world example of this is how viruses and bacteria can be killed with specific inharmonious frequencies from oscillating electric fields.17 Another example is how the Earth maintains its distance from the Sun because of the speed at which it travels. If the Earth slowed down, the distance would increase, and if it sped up, the distance would decrease. A pattern that governs the frequencies of planets in our solar system is Kepler's 3rd law, T²=a³, where the orbital period in years squared equals the semi-major axis distance in astronomical units cubed. If a new planet were to enter our solar system, it would either find its (hopefully unoccupied) orbital frequency according to this pattern, be tossed out into space, or be sucked into the sun.
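Kepler's 3rd law is simple enough to check in a couple of lines. Here is a minimal sketch (using the law's natural units of years and astronomical units; the 1.524 AU figure for Mars is a standard value, not from the text):

import math

def orbital_period_years(semi_major_axis_au):
    # Kepler's 3rd law: T^2 = a^3, with T in years and a in AU.
    return math.sqrt(semi_major_axis_au ** 3)

print(orbital_period_years(1.0))    # Earth: 1.0 year
print(orbital_period_years(1.524))  # Mars: ~1.88 years

Any body that settles into a stable orbit around the Sun must land somewhere on this curve.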
(Image from Grant Sanderson's video "But what is a Fourier series? From heat flow to circle drawings".18)
More abstractly, everything we can see, hear, and touch can be described as a collection of oscillations. This was proven by the famous French mathematician Joseph Fourier (1768–1830) during his study of heat transfer. From his work came the famous and brilliant Fourier Series and Fourier Transformation. The Fourier Series describes the hierarchical order of frequencies needed to produce a specific output. For example, the image above shows a portrait of Joseph Fourier being drawn by a pen at the end of a long series of oscillating circles, shown by the rotating arrows, with each subsequent oscillation originating from the tip of the previous circle’s arrow. The Fourier Transform is the mathematical method that can reverse engineer any collection of oscillations, such as music or a painting, and discover the recipe of its various frequencies and their quantities. By extension, this can also apply to three or more dimensions.
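The reverse-engineering step is something anyone can try. The sketch below (a minimal example; the 50 Hz and 120 Hz ingredients are arbitrary choices) mixes two sine waves and then uses numpy's FFT to recover the recipe:

import numpy as np

rate = 1000                            # samples per second
t = np.arange(0, 1, 1 / rate)          # one second of signal
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))          # strength of each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / rate)  # the frequency of each bin

# The two strongest components are the original 50 Hz and 120 Hz ingredients.
print(freqs[np.argsort(spectrum)[-2:]])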
There are a lot of universal patterns and laws. For example, the Harmonic Series that describes music (1 + 1/2 + 1/3 + 1/4 + …), the Fibonacci Sequence of (0, 1), 1, 2, 3, 5, 8, 13, 21, 34…, how prime numbers can create π (for example, through Euler's product π²/6 = ∏ p²/(p²−1) taken over all primes), and ratios and constants like Euler's number (e, 2.71828), Phi (Φ, 1.618) and Pi (π, 3.14159). Many instances share these foundational patterns, and while these instances may not have a direct connection, they do share properties of these laws and patterns. We don't say a basketball is the same as a planet, but we do say they are both round, and anything we can say about roundness applies to both. This may sound childishly simple, but when this is applied to the 2nd law of motion, we see some fascinating patterns and relationships.
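As a brief aside, the prime-number route to π can be watched converging in a few lines. This sketch uses Euler's product for ζ(2), the identity given above; the 10,000 cutoff is an arbitrary choice:

import math

def primes_up_to(n):
    # A simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:: i] = [False] * len(sieve[i * i:: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

product = 1.0
for p in primes_up_to(10000):
    product *= p * p / (p * p - 1)  # Euler's product for zeta(2)

print(math.sqrt(6 * product))       # -> 3.14159..., built entirely from primes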
Newton's 2nd law of motion was the brilliantly simple and profound formula of force=mass × acceleration, or F=m×a. This law seems so intuitively obvious that it borders on silly. We all know that getting hit by a baseball thrown by a little league pitcher might leave a mark, but one thrown by Aroldis Chapman of the Cincinnati Reds might be fatal (Chapman holds the world record for the fastest pitch in history at 105.1 mph, in a game against the San Diego Padres on September 24, 2010). The genius of this formula was not just its simplicity but that it could be proven to be as universal as 6=2×3 and therefore applicable to baseballs as well as the planets, ushering in a revolutionary change in the world view of the time, a world view that others, like Giordano Bruno and Galileo, had been burned at the stake or placed under house arrest for merely suggesting.
Let’s take a big step back for a moment. Energy is what causes things to move. When energy moves, it affects things that it interacts with. We can see this when we drop a rock in the water. The energy of the falling rock interacts with the water to cause waves to spread out, and we can see how those waves interact with other things, such as the shore. With each interaction, the energy is transferred, spread out, shared, until all the energy in the wave has been dissipated and balanced within the system (the body of water) thanks to entropy.
This simple example can be abstracted into three basic concepts: the cause of movement, the medium of movement (e.g., mass, electricity, water, etc.), and the effect of movement (on that medium). Cause, medium, effect represent the properties of the archetypes whose relationships to one another can easily be expressed in the values 6, 2, and 3, respectively. The 2nd law of motion tells us that mass (2) × acceleration (3) = force (6), which implicitly tells us that medium (2) × effect (3) = cause (6). But there’s an obvious yet unspoken property that is fundamental to all three properties, which is time, or more correctly, space-time, as time and space are inextricably bound.
We use the numbers 2, 3, and 6 because 2×3=6 is the first instance where two distinct archetypal numbers (the first two primes, even and odd) interact to create a new wholeness that contains both qualities. This is a fundamental prerequisite for creation. In the world of nature, 2×3 represents transformation. Two and three are not merely numbers but archetypes of the most basic distinctions. Two represents the first differentiation from unity, the emergence of duality, the even principle. Three represents the second differentiation, the establishment of stability, the odd principle. When these two fundamental archetypes interact through multiplication, they create six, which embodies both the even nature of two and the transformative power of three. This is not mathematical coincidence but the pattern that underlies Newton’s 2nd law, where medium (2) times effect (3) equals cause (6).
When we diagram this relationship (illustrated below) in its simplest form, we can see the natural pattern formed by the relationships between the different instances of energy. We can also see how this pattern will self-assemble into a form based on the inherent patterns and relationships. In this case, the relationships are simple math functions (×, ÷). These functions are themselves patterns, or stable concepts, of the integration of parts forming a whole and the disintegration of a whole formed of parts. This is why the integration of multiplication (and addition) is bidirectional (i.e., 2×3=3×2), but the disintegration of division (and subtraction) is one-way (2/3≠3/2). This is 2nd grade level math, but it is also the pattern that describes how reality functions, how parts become wholes and wholes become parts. The ancient Greeks were exceptionally well versed in these patterns, but it wasn't until the 17th century that Newton applied them to the physical world, bringing us Newton's laws, and with them, the modern era of mechanics, physics, relativity, and quantum theory, all of which are based on Newton's 2nd law of motion, which is based on the pattern of 2×3=6.
This same pattern appears in multiple contexts, as shown in the image below. When seeing these properties in different contexts, less obvious relationships become clearer. For example, we see that cause results from combining effect and medium, while effect is the result of separating cause and medium. We also see how there is no rational way to explain a First Cause, given that a cause requires an existing effect and medium. This does not mean there was no First Cause, just that if there was, it defies reason.
This pattern also suggests that the First Cause could have emerged from an effect and medium outside our reality. This aligns with Sidis’ idea and with the theory that Big Bangs create new universes inside black holes. Since a black hole represents a maximum state of high-entropy chaos, it exists beyond the boundaries of the Universe that emerges from the Big Bang. Our self-assembled triple-circle pattern supports this view. It clearly shows that the cause in one space-time system is created when the medium from one system merges with the effect from a different system to create the cause of a child space-time system.
Here we can see how the model that applies to numbers, motion, and cause and effect could also apply to the creation of our particular space-time Universe. We stress the word “could” because this is not a theory, hypothesis, or even a glimmer of a faint idea in the world of science, as far as I know. However, it is a model that fits the larger patterns of creation.
Another way to look at this is that the First Cause is nothing. We see this in mathematics, where 1 is the effect of 0 because the creation of a concept of nothing demands the creation of a concept of something, or so we are told by philosophers, mathematicians, and physicists19. Numerically, this makes 0 the First Cause, but it seems contradictory that nothing can be the First Cause, at least in the material world. It might make sense if we consider the inside of a black hole as a form of 0 and the singularity that forms within that nothingness as a form of 1. It’s just a thought.
We have seen the 2-3-6 relationships and how they match the cause/medium/effect relationships, but there is a 4th property of space-time. How does that fit into the 2-3-6 pattern? To know this, we must first determine the value of this new space-time element. Fortunately, this is simple to determine because that value must be the smallest value that all other elements contribute to, and that value is 18. Being 6 × 3, it fits perfectly.
The product 6 × 3 equates to force × acceleration or cause × effect. And what does force × acceleration equal? Power. (Strictly speaking, mechanical power is force × velocity; in this archetypal mapping, acceleration plays the rate-of-change role that current plays in electricity.) Power in this context is defined as "the rate at which work is done, or energy is transferred in a unit of time". But wait, doesn't force also involve energy and time? Not quite the same way. Force causes change, but power measures how quickly that change occurs (i.e., the rate at which force does work over time).
Consider this example. If it takes me an hour to climb six flights of stairs, I’m applying force over time, but with relatively little power. If I ran up those same six flights in 15 seconds, I’m still applying force over time, but now with much more power. Same work done, vastly different power.
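In numbers, the stair-climbing comparison looks like this (a rough sketch assuming a 70 kg climber and six flights of about 3 m each; both figures are illustrative):

g = 9.8               # gravitational acceleration, m/s^2
mass_kg = 70          # assumed climber mass
height_m = 6 * 3.0    # six flights at ~3 m per flight (assumption)

work_joules = mass_kg * g * height_m       # same work either way
for seconds in (3600, 15):                 # one hour vs. fifteen seconds
    print(seconds, "s ->", round(work_joules / seconds, 1), "watts")

The work is identical (about 12,300 joules); only the rate, and therefore the power, changes.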
In the world of electricity, power is measured in watts. A 100-watt bulb has the same voltage (force) as a 1-watt bulb, but 100 times the current (rate of electron flow) due to lower resistance, resulting in 100 times more power. Another analogy involves money. A penny is money. A wheelbarrow of pennies is also money, but it has far more buying power.
In the physical world, this concentration of force is called pressure. What do we call it in the archetypal world where force × acceleration is expressed as cause × effect? We could still call it power or pressure because the definition remains the same. Cause × effect describes the transference of energy, but now explicitly including the property of space-time.
In practical terms, cause and effect alone, with no regard for the medium or context where things are happening, is the power of this reality. It is what makes everything happen in this reality. The medium will determine the instances of that power, but it is still the same power.
Another context where this law works very well is electricity. In electricity, the three states cause, medium, and effect are instances of volts (V), resistance (R) or ohms, and current (I) or amperage. In the electrical world, these relationships are called Ohm's law. Just as the 2nd law of motion, F = m × a, tells us that "acceleration is proportional to force and inversely proportional to mass", Ohm's law, V = R × I, tells us that "electric current is proportional to voltage and inversely proportional to resistance".
The world of matter and electricity share these same qualities in the following ways:
Force
equates to voltage (which is technically called
electromotive force). Both are of the archetype of
cause. Mass equates to
resistance or ohms. Both are of the
archetype of medium or resistance.
Acceleration equates to current or
amperage (movement of electrical charge). Both are of
the archetype of effect. Mechanical
power equates to electrical power
because both represent how much energy is transferred in a unit of time,
and both are of the archetype of time.
The commonality of this law explains why we can describe the concept of electricity flowing through a wire as water flowing through a pipe.
As we move from context to context, such as matter to electricity, these three properties are defined, measured, and interact in different ways, but the law does not change. With a little analysis, it can be shown how these different properties relate across many contexts.20
Here are just some of the common contexts where this law applies:
For the record, Ohm's law is an instance of what is called the lumped element model, which approximates a system's behavior without knowing the complexity of the underlying systems. In the case of electricity and Ohm's law, we don't need partial differential equations for the Lorentz force to know how many amps the refrigerator needs. We only need to calculate I = V/R (amps = volts/resistance), rather than solving the full field equations.
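The lumped-element shortcut, and the shared shape of the two laws, fits in a few lines (the 120 V and 30 Ω figures are hypothetical, chosen only to make the parallel visible):

def current(volts, ohms):
    # Ohm's law as a lumped-element model: I = V / R.
    return volts / ohms

def acceleration(force, mass):
    # Newton's 2nd law in exactly the same shape: a = F / m.
    return force / mass

print(current(120, 30))       # a hypothetical fridge circuit: 4.0 amps
print(acceleration(120, 30))  # the same relationship in mechanics: 4.0 m/s^2

Effect = cause / medium, whatever the context.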
We saw in the image above titled “The Pattern” that energy has three states through which it can interact with all other forms of energy. Those states are cause, effect, and medium. It does not matter if the energy is from a volcano or the flipping of a coin. These three states will always be the same. It’s similar in practice to how standardized electrical outlets have three inputs that work the same no matter where you are or what you plug into them, regardless of whether the energy comes from a coal plant or nuclear power. Think of a radio’s power button, volume control, and tuner. No matter their form, they work regardless of what’s inside the box, be it tubes, coiled wires, or integrated circuits. Newton’s laws are the “universal standard” for the mechanical functioning of this part of reality we exist in, but only “this part” of reality. By “this part”, we mean a reality that isn’t near a black hole or an enormous mass that distorts spacetime, isn’t too hot or too cold, and where typical “things” are not too big or too small. Newton’s laws are valid in the middle of the bell curve of scales and conditions where life as we know it exists, from microbes to elephants, where the symbiotic relationships of nature operate under predictable cause-and-effect patterns. We exist in that zone, and our perception of reality is naturally centered on that zone.
Below are three tables that compare Newton’s 2nd law of motion, the simple math it is based on, and Ohm’s law.
If we look closely at these formulas, there appear to be at least two that are missing. Following the numerical pattern, we can see 6 × 3 = 18, which relates cause × effect to power, but where is 6 × 2 = 12, which is cause × resistance = ? And where is 2 × 18 = 36, or medium × power = ? What happens if we add them in with the other formulas just to see what all the formulas would look like together? This is shown below with the new values creatively named x and y.
We’ll refer to the chart of 18 formulas divided into six sections as the hexagonal model, and the chart of 12 formulas divided into four quadrants as the cube model.
This looks quite different now. This hexagonal model also shows quite a bit more symmetry and pattern than the classic cube or circle model; it is even self-similar, or fractal, and has a few other interesting qualities.
The reader may be thinking, "Sure, that's what happens naturally with numbers. There's nothing special about any of this," and that would be correct. Not only that, but the new x and y values do not tell us anything new from a mathematical perspective. When you do the math, you discover that x is always the same as mass (i.e., medium or resistance), and y is always force² (i.e., cause²). OK, so then why are we even bothering with this? Because we are more interested in the pattern than the practicality of these extra formulas, and here we show that the complete pattern of the model that defines a fundamental law of reality is hexagonal in nature. This will prove to be an extremely significant detail.
Family Values
Are there only 6 laws in this pattern? Yes, but not because six possesses some mystical limiting property. Rather, when we start with the first two primes (2 and 3) and apply a specific generational rule, we arrive at exactly six values. The rule is this: only accept products that remain divisible by both original parent factors. For example, if we start with the factors 2 and 3, we can create the products 4 (2²), 6 (2 × 3), and 9 (3²), but 4 and 9 are tossed because only 6 is divisible by both factors, 2 and 3; 4 and 9 are therefore not part of the 'family' of 2 and 3 (the bastards). Think of 6 as the child of parents 2 and 3, the 1st generation. Now we have a new factor of 6 that can create three more products: 12 (6 × 2, math is very incestuous), 18 (6 × 3), and 36 (6²). This is the 2nd generation. The total number of values, including factors and products, is now 6 in number (2, 3, 6, 12, 18, 36). In short, when we limit ourselves to two generations, the total members of this 'family' will always be the 2 parents plus 1 child for the 1st generation plus 3 grandchildren for the 2nd generation, giving us 2 + 1 + 3 = 6. We could continue to a 3rd generation, which would yield 19 total family members, but stopping at the 2nd generation produces a remarkably complete and self-contained pattern. The reference to children, parents, and family for numbers is not an anthropomorphic metaphor I just came up with but was adopted from Pythagoras, Plato, and other ancient pioneers of math who took these associations very seriously. Pythagoras created an entire religion around math on the idea that math is the one true source of knowledge21, and many beliefs and customs were based on math. This seems odd, considering their gods were homicidal inbreeding maniacs, but it makes sense if the myths told the same stories of numbers and math but with a bit more drama.
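The generational rule above is easy to state as a procedure, which makes it simple to verify. A minimal sketch (it stops at the 2nd generation; the 3rd-generation count depends on exactly which pairings the rule permits, so that claim is left untested here):

def family(parent_a, parent_b, generations=2):
    # Keep only products that are divisible by both original parents.
    members = {parent_a, parent_b}
    for _ in range(generations):
        candidates = {x * y for x in members for y in members}
        members |= {c for c in candidates
                    if c % parent_a == 0 and c % parent_b == 0}
    return sorted(members)

print(family(2, 3))  # -> [2, 3, 6, 12, 18, 36]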
A word on electricity
We are using Ohm’s law, the laws of electricity, as an example for another reason as well. We are all familiar with lightning. Watching the bolts of light shoot instantly across the sky and into (or from) the earth is awe-inspiring and exciting. Contrary to the statement “Lightning never strikes the same place twice”, lightning often strikes the same place many times, and for good reason. Ben Franklin suspected why, which gave birth to the lightning rod, saving millions of buildings from burning to the ground.
We know that energy tends to favor paths of least resistance, distributing across all available paths inversely proportional to their resistance. As the opposing charges of the earth and the sky increase, two electrical fields grow. The positive field of the earth is pulled toward the negative field of the sky, and vice versa. As these fields strengthen, they ionize the air between them, creating preferential pathways through the atmosphere. When the fields finally connect, a channel opens, and electrons from the cloud rush downward through this ionized path toward the earth. The brilliant flash we perceive as lightning is primarily the return stroke, a surge of energy that propagates back up through this same channel at nearly the speed of light. The fields determine where this channel forms before the visible discharge occurs, much like how we can predict where rainwater will flow by identifying the paths of least resistance that already exist in the landscape. By the time we see the lightning, the fields have already balanced themselves. The engine of this interaction is the electrical fields themselves. Lightning is the byproduct of their interaction.
This branching pattern that emerges from energy flowing across fields of potential appears in remarkably similar forms across radically different systems: rivers carving through landscapes, roots seeking nutrients and water, neurons transmitting signals, dendrites forming connections, trees reaching for sunlight. Is it reasonable to suggest that these patterns, despite arising from different mechanisms, all represent optimal pathways for distributing resources through their respective systems? Each pattern emerges as the byproduct of some form of energy or resource seeking equilibrium through available pathways. The striking visual similarity across such diverse contexts suggests an archetypal pattern at work. What exactly is the energy balancing in each case? It seems the main difference between lightning and these other examples is not the underlying principle but the timescale and the nature of the energy involved; some balance in milliseconds, others over years, centuries, or even the lifespan of the Universe.
These bifurcated expanding patterns are ancient, primitive, and foundational in nature. As nature evolved, so did its patterns and the patterns created by its own creations. Perhaps our collective human intelligence is at the crystallizing stage of evolution, mirroring the moment when chaotic atomic structures first organized into stable crystalline forms.
Newtons and Joules
Before we continue, it would be helpful to clarify how energy is measured, as this will come up a number of times.
The standard measure of energy is the joule. For example, it takes about 1 joule to lift a 100-gram object, such as a stick of butter or a small apple, one meter.
A joule measures the energy transferred when a force does work. What is a force? It’s any push or pull that can change how something moves. Newton’s 1st law of motion, the law of inertia, states that something at rest will stay at rest until some external force is applied to it.
Newtons are the units used to describe force, specifically the force required to accelerate 1 kg at one meter per second per second (1 m/s²). Imagine a pineapple (which weighs about 1 kg) floating stationary in deep space. If it were pushed with a constant force of 1 newton, it would accelerate at 1 m/s², reaching one meter per second after one second of continuous push, similar to the way things fall at a rate of 9.8 m/s² here on planet Earth, because gravity exerts a force of 9.8 newtons on each kilogram of mass. Something weighing 100 grams sitting on a table, like a stick of butter, applies a force of about 1 newton to the table. If you wanted to lift that stick of butter 1 meter above the table, you would need to apply a force of about 1 newton over that meter. This is where joules come in, as 1 joule equals 1 newton × 1 meter, the amount of energy needed to exert a force of 1 newton on an object to move it 1 meter. In the butter example, the work done is 1 joule. Whether it takes 1 second or 1 hour, the work remains 1 joule, but the power differs: 1 watt if done in 1 second versus 0.00028 watts if done over 1 hour. Perhaps the closest commonplace concept to this energy/time/space relationship is the old-fashioned measure of horsepower. HP was created to compare the work of a steam engine to that of a horse. If a steam engine could lift a ton of water 1 foot in 1 minute, and it took 4 horses to do the same, then the engine had 4 HP. This is the same concept behind joules/second, as 1 HP equals 745.7 joules/second. The Pontiac GTO, a classic muscle car from the 1960s, boasted a 300 HP engine, which is equivalent to 223,710 joules/second.
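The butter arithmetic in code (a small sketch; the 100 g mass, 9.8 m/s² gravity, and 745.7 watts-per-HP conversion are the figures used above):

g = 9.8                    # gravitational acceleration, m/s^2
butter_kg = 0.1            # the 100-gram stick of butter

force_newtons = butter_kg * g      # ~0.98 N pressing on the table
work_joules = force_newtons * 1.0  # ~1 J of work to lift it 1 meter

print(round(force_newtons, 2))       # -> 0.98, about 1 newton
print(round(work_joules / 3600, 5))  # -> ~0.00027 watts if spread over an hour
print(300 * 745.7)                   # Pontiac GTO: 223710.0 joules/second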
To sum up the relationship between force, energy, and power: force (newtons) is what pushes or pulls on a mass; energy, or work (joules), is force applied over a distance (1 J = 1 N × 1 m); and power (watts) is the rate at which that work is done (1 W = 1 J/s).
There is one more pattern comparison to look at. In the formulas above, we can see how the pattern of 18=2×3² (P=R×I²) looks exactly like another popular formula, E=m×c². Using the rational and analogous associations, we can equate the following: E with power (18), m with resistance or medium (2), and c with current or effect (3).
We can successfully recreate all 12 formulas from this one equation, making E=mc² yet another context for this universal pattern. However, a new value that equates to volts or force has appeared. We will call these zvolts for now as they correlate to electrical volts using Ohm's law formulas.
An interesting observation is that in this context, the variable for speed or current is c, the speed of light, which must always remain constant. It appears that c is Relativity's version of current or amperage, the maximum supported current rating for this universe. Think of it like the 40-amp fuse we use to ensure we do not melt our wires and burn out our devices. Does this suggest that if we break the speed of light, we would "blow a cosmic fuse" and "melt" our reality? Maybe we will find out one day.
But what, if anything, are these zvolts? In electricity, voltage is electric pressure resulting from the difference between high potential energy (like a storm cloud or mountain top) and low potential energy (like a lightning rod or valley). In the world of matter, force creates pressure that causes change. Can we then say that these missing cosmic volts represent pressure or force that creates the movement of energy, making them a relativistic version of electrical volts?
Let’s calculate the value of a zvolt. If z=c×m, then m acts as a multiplier of c. We know that c×1 represents the speed limit of this reality, which would suggest the maximum value for m is 1. Yet we also know that mass values can exceed 1 in practice. How do we resolve this apparent contradiction? The answer lies in recognizing that the “1” in c×1 does not represent a specific weight like 1 kilogram. Instead, it represents the unity or totality of mass and energy. When mass travels at the speed of light c, it embodies this total unity, where m=E=1 in normalized units.
In this case, z can never be greater than 1, yet z=m=E, and m can be any value. To resolve this contradiction, we assign c a value of 1. When c=1, then E, m, and z are always the same value, though they represent three different properties. Since c has a dimension of time as part of its definition, we need to consider that z must also have a dimension of time. We can interpret z=1 to mean the limits of E and m within 1 z-unit of time. In this framework, z can equal 2, but this would represent c=1 in each z-unit of time, and there are 2 z-units of time.
This is a far stretch from current science, and no physicist would consider this a valid scientific idea. However, it works both conceptually and mathematically, fits the model, and opens the door to a new perspective. The zvolt concept suggests a representation of the potential for energy and mass within a discrete unit of time. This makes it not only the causative force of creation but also the framework that holds the limitations and potentials for instantiation.
This might seem like an odd place to switch to the subject of alchemy, but it is not, as you will see.
If we claim that these laws and patterns have been visible throughout our journey of discovery over the past millennia, we should be able to see them in the early forms of reasoning that evolved into modern science.
Alchemy is the birthplace of modern science. As with the charlatans of science today, many alchemists of the olden days profited by promoting "elixirs of life" and promises of discovering the "philosopher's stone". The true goal of alchemy, however, was to discover the secrets of nature, and to alchemists, this was as much a spiritual journey as it was a technical one. Modern science has done away with the spiritual or mystical aspects of knowledge and doubled down on the technical aspects, which we will discover is an unsustainable position.
We can see early forms of these modern concepts, specifically in the concepts of the elements of earth, water, air, and fire. These elements do not refer to the material instances but to their archetypes, of which the material forms are limited instances. The first form of matter that came into existence, the equivalent of modern science's soccer-ball-sized region of everything that expanded to fill the Universe, was considered to be formed by these four archetypes. This was an idea held by the ancient Greeks, Islamic philosophers, scientists, and learned Asians and Europeans of their day.
As archetypes, they instantiated not only as matter but also as qualities. For example, the elements were used to describe health as far back as Hippocrates (400 B.C.) and as recently as Carl Jung’s theory of personality, which drew heavily on Hippocrates. In Jung’s theory, there were 4 types of personalities. These were feeling (fire, choleric), thinking (water, phlegmatic), intuition (air, sanguine), and sensation (earth, melancholic). He then added the attributes of introversion/extroversion to come up with 8 basic personality archetypes.
This is relevant here because the 4 alchemical elements are a very early version of the 4 qualities of matter as expressed by Newton’s 2nd law. It might seem odd or even ridiculous to compare perhaps the most significant laws of technical thinking to the hocus-pocus of alchemy. Still, Isaac Newton was himself an alchemist22 who not only attempted to turn lead into gold but believed he could discover the Elixir of Life. In fact, Newton was feared by the English Crown because if he did discover the Philosopher’s Stone, that magical element that could turn lead into gold, he would ruin the British economy. Newton also feared the Government as they imposed very severe penalties on anyone trying to turn lead into gold, so none of his alchemical works were published.
Newton’s alchemical work was only discovered in 1936. When his manuscripts were auctioned by Sotheby’s, it was discovered that one-third of his work was alchemical, and even more surprising, was as meticulously researched as his technical work. This does not include his religious writings, such as the 323-page “Observations upon the Prophecies of Daniel, and the Apocalypse of St. John”, or the volume of work that predicted that the apocalypse and return of Jesus would take place in 2060.
Newton was a secret member of the heretical Christian sect called Arianism (after the 4th century Alexandrian Arius) that did not believe in the biblical doctrine of the Trinity of God, but rather the Oneness of God (which is ironic as his laws are brilliant examples of a trinity). This belief was punishable by death according to the Blasphemy Act of 1697 enacted under William III, so we can safely speculate that a lot of Newton’s research was kept hidden and will probably never be seen.
Considering that 20 years of his work was destroyed in a fire started by his dog, one-third might be a very conservative number. These works were labeled "not fit to be printed" after his death, perhaps for fear someone would pick up where he left off. Today, we have hundreds of years of science to base our thinking on, but before Newton and his laws, there was only alchemy, and this is what Newton studied, along with occultism and hermeticism. Modern science is loath to admit that Newton was as much a 'magician' as he was a scientist, but there is no doubt his esoteric studies had an impact on his theory of forces and gravity.
Historical note: Newton may also have been a bit mad. Modern analysis of his hair showed elevated mercury levels, and in 1692-1694 he wrote letters that certainly sounded mad, accusing John Locke of endeavoring to "embroil me with women" and dropping out of contact. He wrote, "I am extremely troubled at the embroilment I am in, and have neither ate nor slept well this twelve month, nor have my former consistency of mind. I must withdraw from your acquaintance, and see neither you nor the rest of my friends any more." But this madness didn't stop him from publishing his masterpiece of experimental physics, "Opticks", in 1704.
What we understand as resistance, current, volts, and power (or mass, acceleration, force, and power) today, the alchemists would describe as qualities that have the properties of earth, fire, air, and water archetypes, respectively. This is not to suggest that just as F=m×a, so too does air=earth × fire, but we could easily draw parallels between both sets of concepts, such as movement, energy, force, and resistance. These different models are examples of how the same patterns and archetypes keep appearing across many contexts and scopes, such as technology, science, mysticism, social order, biology, and many more. In Newton's case, they formed his understanding of reality that was then applied to his laws. It seems reasonable to assume that Newton knew of the hexagonal pattern of his laws, but he had no reason to speak of any but the most technically useful. It may also have been that he chose to keep certain information and discoveries away from the watchful eyes of the nervous king and a pope who saw him as a potential heretic. This, of course, is pure speculation.
Related to concepts of reasoning is how we look at numbers. Typically, a number is a quantitative value. We have 6 apples, $1,000, etc., yet having 6 apples says nothing about the apples themselves. This lack of qualitative meaning in numbers is at the core of the ongoing debate in the world of statistical analysis. Imagine the differences in approach and perspective between a quantitative understanding of overpopulation versus a qualitative understanding.
For clarity, here is how we understand the concepts of quantitative and qualitative: quantitative refers to how much or how many of something there is, what can be counted or measured; qualitative refers to the character or properties of a thing, what kind of thing it is.
Lay persons tend not to think of numbers as having qualitative properties. How would you describe the number 1? Most people would not say, "It's that number which, when multiplied by anything, has no effect", or "It's its own square root". When you see the number 7, how often do you think, "I wonder why its inverse is an infinite recurring pattern of all the digits not divisible by 3, which nevertheless add up to 3×3×3 (or 3³)?" (1/7=0.142857 142857 142857 142857 and so on).
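That curiosity about 1/7 takes seconds to verify (a quick sketch; the repeating block is written out by hand):

digits = [1, 4, 2, 8, 5, 7]   # the repeating block of 1/7 = 0.142857...

print(any(d % 3 == 0 for d in digits))  # -> False: no digit divisible by 3
print(sum(digits), 3 ** 3)              # -> 27 27: the digits sum to 3^3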
In ancient times, numbers were far more qualitative. Greeks considered 3 the number of man, 2 of woman, and among the Pythagoreans, 9 was too sacred a number even to be uttered. Of course, most qualitative values of numbers will be cultural, which doesn’t mean they are invalid but are only valid within that context, such as how Plato and Socrates, both eugenicists, believed in the qualitative values of numbers when it came to breeding. Plato even went so far as to suggest that people should only be allowed to listen to certain music based on its harmonics.
However, numbers have a universal or objective, qualitative value regardless of culture or beliefs. The Oxford English Dictionary defines the word “unity” as “The abstract quantity representing the singularity of any single entity, regarded as the basis of all whole numbers; the number one.”
Take the number 2, for example. It is the first pair, and it can be said that “2 represents the result of two separate things joining to create a new separate thing, the first union” (we saw an example of this in “Family Values”). It also “allows us to define an area, which requires 2 dimensions”. We can say 3 represents the most stable shape, as the simplest shape that can be created requires 3 sides, or the most stable form is a tripod. From these qualitative properties, we can speculate on the qualitative properties they contribute to the values they can create, such as 4, 5, 6, 7, etc.
Further on, when we explore relationships that can be expressed in numbers or geometry, we will often consider their qualitative significance because to ignore it would be to ignore an entire dimension or perspective of understanding.
The oscillation constant also gives us a glimpse into another fundamental yet profound property of creation and reality which is self-similar redundancy. Self-similar redundancy (a term that is itself redundant) is how one property or law manifests itself across different orders of scale in the most effective manner given its context, state, and scope.
One of the more obvious examples of this type of inter-scope self-similarity might be the commonality between a solar system’s structure and an atom’s structure.
Of course, universal laws, such as the laws that describe the conservation of energy and mass, apply to all systems. At the atomic level, we also have local laws such as electron energy levels, the nuclear weak force, etc. In much larger systems, like a solar system, there are local laws of planetary motion.
Some scientists perceive no relationship between atoms and solar systems and claim that this analogy depends on an old and outdated concept of the atom, writing it off as humanity’s tendency to oversimplify the complex and over-relate the unrelatable.
No doubt this is true to some degree, but more importantly, there are some similarities worth investigating that would give us an idea of the laws that both systems deploy in the most efficient way they can be expressed, given their scope. A perfect example of this can be seen in the field of molecular dynamics where atoms are treated as tiny balls, and the bonds between atoms are treated as mechanical springs. We know atoms and bonds are not balls and springs, but applying Newton’s laws to these pretend objects results in accurate predictions.
Naysayers notwithstanding, this idea of self-similarity across scopes is legitimate enough to be studied and named. A report in the International Journal of Theoretical Physics23 concluded the following:
The simplicity of [the Self-Similar Cosmological Model (SSCM)] and its ability to quantitatively relate atomic, stellar, and galactic scale phenomena suggest that a new property of nature has been identified, which is discrete cosmological self-similarity. Although the SSCM is still in the early heuristic stage of development, it may be the initial step toward a truly remarkable unification of our considerable but fragmented physical knowledge.
A more organic example of this inter-scope self-similarity24 is to compare the structure of the Universe to a brain cell, the birth of a cell to the death of a star, or the human eye to a nebula, among countless other examples.
There are many matching patterns between cells and the Universe, and it is a subject far too broad to get into here. One recently published paper25 shows the similarity in structure of a neutron star and a human cell. Other comparisons based on scientific and rational observations have also been noted.
One could say that if you look long and hard enough, you can find relationships and patterns between any two things. That may be true, but if certain patterns keep popping up, it might be more than just an overactive imagination.
It might even cause some incurably curious researchers to wonder if there was a bigger picture that they have been ignoring and inspire them to do some investigation that might open new doors of understanding. Someone like the esteemed Stanley N. Salthe, Professor Emeritus, Brooklyn College of the City University of New York, who said the following:
It is an interesting possibility that the ‘power laws’ followed by so many different kinds of systems might be the result of downward constraints exerted by encompassing supersystems. ~Stanley N. Salthe, Entropy 2004, 6, 335
Here is what Hans van Leunen, a physicist from the Eindhoven University of Technology, Dept. of Applied Physics, and founder of The Hilbert Book Model project, which applies mathematical test models to investigate the foundation of physical reality, has to say about this as well:
Obviously, physical reality possesses structure, and this structure founds on one or more foundations. These foundations are rather simple and easily comprehensible. The major foundation evolves like a seed into more complicated levels of the structure, such that after a series of steps a structure results that appears like the structure of the physical reality that humans can partly observe26. ~Hans van Leunen, The Structure of Physical Reality
He then goes on to say the following:
The [paper ‘The Structure of Physical Reality’] applies the name physical reality to comprise the Universe with everything that exists and moves therein. It does not matter whether the aspects of this reality are observable. It is even plausible that a large part of this reality is not in any way perceptible. The part that is observable shows at the same time an enormous complexity, and yet it demonstrates a peculiarly large coherence.
The conclusion is that physical reality clearly has a structure. Moreover, this structure has a hierarchy. Higher layers are becoming more complicated. That means immediately that a dive into the deeper layers reveals an increasingly simpler structure. Eventually, we come to the foundation, and that structure must be easily understandable. The way back to higher structure layers delivers an interesting prospect. The foundation must force the development of reality in a predetermined direction. The document postulates that the evolution of reality resembles the evolution of a seed from which only a specific type of plant can grow. The growth process provides stringent restrictions so that only this type of plant can develop. This similarity, therefore, means that the fundamentals of physical reality can only develop the reality that we know.
In other words, he is saying that there are self-similar and redundant orders in the hierarchy and layers (in his words) of creation, and these orders abide by specific laws which are limited (predetermined) by their component parts (seeds) which are fundamentally simple. Likewise, the restrictions of the growth process will be similar at every level, and consequently, the laws at play will be similar.
You can read his paper27, but unless you know your way around multidimensional Hilbert space lattices, it will be challenging.
For purposes of this book, I am going to define a scope or order of creation (or level, as Hans van Leunen would say) as that creative cycle in which an apparent order emerges out of a state of apparent disorder, defined by the limits of the duality it emerged from. I say "apparent" because I don't want to suggest that there is disorder in a seed and order in the resulting flower. Obviously, there is order in both, but the explicit order of a flower in bloom, at the peak of its expression, when it is ready to drop its seeds, is far more apparent than the implicit order of a seed. The flower is explicit when in bloom and implicit in the seed, while the seed is implicit in the flower. This also suggests that within the scope of the life cycle of a flower, which begins with a seed and ends with compost, the flowering stage represents the optimum expression of energy, or the most effective form that instance can realize.
Xing Xiu-San, "Spontaneous entropy decrease and its statistical formula", Department of Physics, Beijing Institute of Technology, Beijing, China; https://arxiv.org/pdf/0710.4624.pdf
Sidis, W.J. (2011). The Animate and the Inanimate. Originally published in 1920. https://www.sidis.net/animate.pdf
Lesne, A. (2014). "Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics". Mathematical Structures in Computer Science, 24(3). doi:10.1017/s0960129512000783
Strevens, Michael. Bigger than Chaos: Understanding Complexity through Probability. London: Harvard University Press, 2006.
The 12-dimensional system refers to the coin's phase space, a mathematical construct where each possible state of the system is represented by a unique point. To fully describe the coin's state at any moment requires 12 coordinates: 3 for spatial position (x, y, z), 3 for linear velocity (vx, vy, vz), 3 for angular orientation (typically Euler angles or similar rotational coordinates), and 3 for angular velocity (ωx, ωy, ωz). Together, these 12 dimensions completely specify the coin's motion through space and time, making the dynamics deterministic in principle but unpredictable in practice due to extreme sensitivity to initial conditions.
Bousso, Raphael, Ben Freivogel, Stefan Leichenauer, and Vladimir Rosenhaus. "Eternal Inflation Predicts That Time Will End." Physical Review D 83, no. 2 (2011). https://doi.org/10.1103/physrevd.83.023525. https://arxiv.org/pdf/1009.4698v1.pdf
Software used: Arch Linux, DiE (detect-it-easy) v3.03, Nov 14, 2021
Sidis, W.J. (2011). The Animate and the Inanimate. Originally published in 1920. https://www.sidis.net/animate.pdf
Boyle, Latham, Kieran Finn, and Neil Turok. "The Big Bang, CPT, and Neutrino Dark Matter." Annals of Physics 438 (2022): 168767. https://doi.org/10.1016/j.aop.2022.168767, https://arxiv.org/pdf/1803.08930.pdf
Jeremy L. England, "Statistical physics of self-replication", J. Chem. Phys. 139, 121923 (2013). https://doi.org/10.1063/1.4818538
Here are a couple of sites showing a harmonograph. They are fascinating to watch. http://andygiger.com/science/harmonograph/index.html, https://www.youtube.com/watch?v=HJYvc-ISrf8
Ball, P. (2016). Patterns in Nature: Why the natural world looks the way it does. Chicago: The University of Chicago Press. ISBN-10: 022633242X, ISBN-13: 978-0226332420
Proietti, Massimiliano, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi. "Experimental Test of Local Observer Independence." Science Advances 5, no. 9 (2019). https://doi.org/10.1126/sciadv.aaw9832
https://en.wikipedia.org/wiki/Wigner%27s_friend
Source of frequency values: Bentov, Itzhak. "Stalking the Wild Pendulum: On the Mechanics of Consciousness". New York: Bantam Books, 1979. Print.
Meessen, A. (2020). "Virus Destruction by Resonance". Journal of Modern Physics, 11, 2011-2052. doi:10.4236/jmp.2020.1112128
At https://www.3blue1brown.com/
Carroll, Sean M. "Why is there something, rather than nothing?" The Routledge Companion to Philosophy of Physics. Routledge, 2021. 691-706. https://arxiv.org/pdf/1802.02231.pdf
Yee, Jeff. (2019). The Relation of Ohm's law to Newton's 2nd law. 10.13140/RG.2.2.15576.75523. https://www.researchgate.net/publication/330639107_The_Relation_of_Ohm%27s_Law_to_Newton%27s_2nd_Law
Mankiewicz, Richard. The Story of Mathematics. Princeton: Princeton University Press, 2004. pg. 24-26
Transcriptions of Newton's alchemical works are available at https://www.newtonproject.ox.ac.uk/texts/newtons-works/alchemical
Oldershaw, R. L. (1989). Self-Similar Cosmological model: Introduction and empirical tests. International Journal of Theoretical Physics, 28(6), 669-694. doi:10.1007/bf00669984. https://www.academia.edu/26520933/Self-Similar_Cosmological_model_Introduction_and_empirical_tests
You can find many examples of this self-similarity at http://www.bordalierinstitute.com
https://www.sciencealert.com/scientists-have-found-a-structural-similarity-between-human-cells-and-neutron-stars
van Leunen, Hans. (2018). The structure of physical reality. https://www.researchgate.net/publication/327273285_The_structure_of_physical_reality
Leunen, J. J. (2018). Structure of physical reality. http://vixra.org/pdf/1806.0087v3.pdf. Here is the entire report: http://www3.amherst.edu/~rloldershaw/OBS.HTM