Note: this is a sequel to https://peakd.com/galactic-milieu/@knotwork/gravitational-wave-mechanics-and-circuitry-is-means-so-can-do
Note2: The first post of this "series" of posts is https://peakd.com/galactic-milieu/@knotwork/musings-arising-from-g-spencer-browns-laws-of-form
Lately, while bringing up many, many Wikipedia pages across whole swathes of quantum physics and the mathematics it uses, it has been seeming to me that I should look more closely into something I read many years ago: the purported fact that any system of dynamical equations can be simulated, or approximated to any arbitrary degree of accuracy, by cellular automata.
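As a toy illustration of that purported fact (a sketch, not a proof, with every number chosen purely for convenience), here is roughly what such a simulation looks like in about the simplest case I can think of: a one-dimensional grid whose purely local update rule approximates the diffusion equation.

```python
import numpy as np

# A 1-D grid whose update rule only looks at each cell's two neighbours,
# which in the limit of small steps approximates the diffusion equation
# du/dt = D * d2u/dx2.  All constants here are arbitrary illustrative choices.
D, dx, dt = 1.0, 1.0, 0.2            # chosen so that D*dt/dx**2 <= 0.5 (stable)
u = np.zeros(101)
u[50] = 1.0                           # a single spike of "stuff" in the middle

def step(u):
    left, right = np.roll(u, 1), np.roll(u, -1)   # periodic (wrap-around) edges
    return u + D * dt / dx**2 * (left - 2 * u + right)

for _ in range(200):
    u = step(u)
print(u.sum())    # the total amount of "stuff" is conserved by this local rule
```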
If what quantum mechanics is revealing to us is indeed the ontological fact that "spooky action at a distance" is real, then that ought to break the claim that cellular automata (at least ones whose rules are purely local) can achieve the same outcomes to arbitrary accuracy.
So I am starting to think that maybe it would be useful to prove or disprove the "they can both get arbitrarily close to the same results" hypothesis...
https://www.google.com/search?q=can+cellular+automata+simulate+dynamical+equations%253F
https://www.google.com/search?q=can+cellular+automata+simulate+psi+wavefunctions%253F
Aha, that last link yields "Discretization: The conversion of continuous quantum processes into discrete steps in space and time", which sounds to me a lot like quantum time and quantum space, both of which sound like things one ought to at least be considering if thinking of quantising gravity while also having a pre-conception that gravity involves something characterised as spacetime...
If it is the psi wavefunction itself that provides the spookiness, though, and a cellular automaton can simulate that to arbitrary fidelity, there would presumably be no spookiness, just an action mediated by a wavefunction that itself presumably propagates at or below the speed of light...
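To make that "discretization" idea concrete, here is a minimal sketch (with hbar = m = 1, a free particle, and every constant chosen just for illustration) of stepping a psi wavefunction on a grid in discrete space and time, splitting psi into real and imaginary parts and updating them alternately, which is one standard explicit scheme:

```python
import numpy as np

# A discretized free 1-D Schrodinger equation: psi = re + i*im stepped on a
# grid, with the real and imaginary parts updated alternately (a standard
# staggered explicit scheme).  Units hbar = m = 1; all numbers illustrative.
dx, dt, n = 1.0, 0.1, 200
x = np.arange(n) * dx
psi = np.exp(-(x - 100) ** 2 / 20.0) * np.exp(1j * 0.5 * x)   # a moving wave packet
re, im = psi.real.copy(), psi.imag.copy()

def laplacian(f):
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2   # periodic edges

for _ in range(500):
    re += dt * (-0.5 * laplacian(im))    # d(Re psi)/dt = +H (Im psi)
    im -= dt * (-0.5 * laplacian(re))    # d(Im psi)/dt = -H (Re psi)

prob = re**2 + im**2
print(prob.sum() * dx)   # stays close to its starting value: norm approximately conserved
```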
At this point I find myself wondering whether "normalising" probabilities to always add up to unity (one) is really necessary.
If a 150% probability were allowed to simply mean a high likelihood of one plus a chance of a second one, might that not be useful if what is supposedly being simulated or represented includes some kind of "something out of nothing" potential? Or even a "something out of background energy" potential?
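One hedged way to make sense of that, sketched below, is to read such a value as an expected count of occurrences rather than as a normalised probability, so that 1.5 means "certainly one, plus a fifty-percent chance of a second one":

```python
import random

# Read a "150% probability" as an expected COUNT rather than a probability:
# 1.5 means one guaranteed occurrence plus a 50% chance of a second one.
def sample_count(expected):
    whole, frac = divmod(expected, 1.0)
    return int(whole) + (1 if random.random() < frac else 0)

samples = [sample_count(1.5) for _ in range(100_000)]
print(sum(samples) / len(samples))   # averages out to roughly 1.5
```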
To what extent might it be feasible to have just one "fundamental" value at each cell of a cellular automaton, something like "energy" or "frequency", or, for now, let us call it "might", and use just that to determine, for each cell, how much of that value it will induce or induct or transmit to each of its neighbor-cells?
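A minimal sketch of the simplest such rule I can imagine, assuming (purely for illustration) a cubic grid, six axis-direction neighbours for the moment (the neighbour question is taken up next), and each cell giving away a fixed fraction of its "might" split equally among them:

```python
import numpy as np

# Each cell holds one scalar "might" value; every turn it passes a fixed
# fraction of it, split equally, to its neighbours (here the six axis
# neighbours of a cubic grid, with wrap-around edges).  Purely illustrative.
SHARE = 0.1                           # fraction of "might" given away per turn
might = np.zeros((20, 20, 20))
might[10, 10, 10] = 1000.0            # seed all the "might" in one cell

def turn(m):
    outgoing = m * SHARE
    new = m - outgoing
    per_neighbour = outgoing / 6.0
    for axis in range(3):
        for shift in (1, -1):
            new += np.roll(per_neighbour, shift, axis=axis)
    return new

for _ in range(50):
    might = turn(might)
print(might.sum())                    # total "might" is conserved by this rule
```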
Speaking of neighbors, we know from Bucky ("Synergetics" etc.) that closest-packed spheres give twelve neighbors when ideally packed around a "nuclear" sphere. We can also see, by placing three layers of tracing-paper hex-maps on top of one another, that there are preferred directions if one does that, so that if one used three Cartesian co-ordinates to index one's cells there would be a choice between, for example, having north and south as "sides" or having east and west as "sides", so to speak.
Implementing cells as a three-dimensional matrix with integer indices, as is typical in computer-programming languages, gives each cell eight neighbors on a surface and twenty-six neighbors in a volume, so to simulate or emulate hex-maps in such languages you have to choose an orientation by which to pick only six neighbors on each plane for each cell, and you end up with preferred directions.
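One common way (not the only one) to carve twelve neighbours per cell out of an ordinary 3-D integer matrix is to keep just the twelve of the twenty-six cube neighbours whose offsets have exactly two non-zero coordinates; these are the contact directions of cubic closest packing, sliced along cube faces rather than along the hexagonal layers described above, but each cell still ends up with the same twelve neighbours:

```python
# The twelve of the twenty-six cube-neighbour offsets that have exactly two
# non-zero coordinates: the contact directions of closest-packed spheres.
FCC_NEIGHBOURS = [
    (dx, dy, dz)
    for dx in (-1, 0, 1)
    for dy in (-1, 0, 1)
    for dz in (-1, 0, 1)
    if sum(abs(d) for d in (dx, dy, dz)) == 2
]
assert len(FCC_NEIGHBOURS) == 12

def neighbours(i, j, k, size):
    # wrap around the edges so every cell always has exactly twelve neighbours
    return [((i + di) % size, (j + dj) % size, (k + dk) % size)
            for di, dj, dk in FCC_NEIGHBOURS]
```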
The usual motivation for using hex-maps rather than square-maps is that the preferred directions on a hex-map distort distances less than those on a square-map do. So maybe, while we are implementing a six-neighbors-per-plane, twelve-neighbors-per-cell structure within a traditional 3-D matrix in a computer-programming language, we should take the moment to provide some "averaging out" of the distortion of distances that is an artifact of our choice of tessellation of the volume and its planes, by either randomising or deterministically changing which matrix-positions to use as neighbors on each "turn" of the "game", that is, on each "one step for each cell" iteration through the cells.
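A hedged sketch of that "averaging out", for a single plane: a hex-map emulated on a square grid has to pick six of the eight in-plane neighbours, which makes the map "lean" toward one diagonal, so one can keep both possible leanings and switch between them each turn, either cycling deterministically or at random:

```python
import random

# Two ways of choosing six of the eight in-plane neighbours, "leaning"
# toward opposite diagonals; switching between them spreads the distance
# distortion instead of baking one preferred diagonal in permanently.
HEX_LEAN_NE = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
HEX_LEAN_NW = [(1, 0), (-1, 0), (0, 1), (0, -1), (-1, 1), (1, -1)]

def in_plane_neighbours(turn_number, randomise=False):
    options = (HEX_LEAN_NE, HEX_LEAN_NW)
    if randomise:
        return random.choice(options)
    return options[turn_number % 2]   # deterministic alternation turn by turn
```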
At this point I also begin to speculate that if time is to be alterable, there may end up being mechanisms whereby the "symmetry" of having each cell take one "turn" in each iteration through an execution-loop will need to be broken. That could go so far as assigning each cell an "urgency" value specifying how much it "needs" to be the next cell processed; possibly even having that value act as a probability rather than running a "highest urgency first, with some other value breaking ties, some further value breaking any ties that leaves, and yet another breaking any remaining ties, and so on: tiebreakers all the way down" kind of algorithm.
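A minimal sketch of the urgency-as-probability version of that idea, assuming only that each cell carries some non-negative "urgency" number: rather than a strict highest-first ordering with stacked tiebreakers, each cell's chance of being processed next is simply proportional to its urgency.

```python
import random

# Pick the next cell to process with probability proportional to its urgency,
# instead of "highest urgency first" plus tiebreakers all the way down.
def pick_next_cell(urgencies):
    """urgencies: dict mapping cell-id -> non-negative urgency value."""
    cells = list(urgencies)
    weights = [urgencies[c] for c in cells]
    return random.choices(cells, weights=weights, k=1)[0]

# Toy usage: cell "b" gets processed roughly twice as often as cell "a".
counts = {"a": 0, "b": 0}
for _ in range(10_000):
    counts[pick_next_cell({"a": 1.0, "b": 2.0})] += 1
print(counts)
```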
One thing Bucky pointed out is that not all spheres in closest-packed spheres are "nuclear" spheres with the others packed neatly around them; that is, in a sense, a kind of "crystal" structure from which he tries to get insights into nuclear physics. For example, he notes that a few layers of closest-packed spheres packed around a "nuclear" (centre, central) sphere brings up the number 92, which also happens to be the number of layers at which some spheres that could become "nuclear" spheres themselves arise. Bucky considered 92 to be the number of "regenerative" elements in the periodic table of the elements, element 92 supposedly being a kind of cutoff-point or limit beyond which elements are getting "too big for their britches" or some such idea.
Thinking more on what to do with this "might" value in or of each cell, it occurs to me to wonder whether simply being able to take square roots, cube roots, fourth roots and so on would be enough all by itself, or whether it would be useful to allow more values, one for each level of "root", as it were?
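Just to pin down the two options being weighed here (nothing more than that): either every "level of root" is derived on demand from the one stored might value, or one value per level is stored explicitly; the first option is a one-liner.

```python
# Option 1: store only "might" and compute each level of root when needed.
def roots_from_might(might, levels=4):
    return [might ** (1.0 / n) for n in range(2, 2 + levels)]   # square, cube, 4th, 5th

print(roots_from_might(16.0))   # e.g. [4.0, 2.519..., 2.0, 1.741...]
```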
I am imagining a swathe of experiments that could be useful in exploring this cellular-automata idea, in order to get some clues as to how best to set them up.
For example, what if one looked up the energies of known particles and tried directly mapping "might" to those values, so that for a given value of might (possibly aka energy) you could have a lookup-table of particles that that value of "might" might happen to "be" at that moment?
Is there, for at least some known particles, some amount of energy-per-something at which that particle not only "might" happen but indeed "would" happen, barring some other particle, possibly a higher-energy or higher-"might" one, happening instead?
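Purely as an illustration of the lookup-table shape such an experiment might take (the rest-mass energies below are the familiar textbook values in MeV, but mapping "might" straight onto rest-mass energy is just an assumption for the sketch):

```python
# Illustrative lookup-table: rest-mass energies (MeV) of a few familiar
# particles, used here as stand-in "might" thresholds.
PARTICLE_TABLE = [
    (0.511,  "electron"),
    (105.66, "muon"),
    (139.57, "charged pion"),
    (938.27, "proton"),
    (939.57, "neutron"),
]

def particles_that_might_happen(might_mev):
    """Every particle whose rest energy fits within this much 'might'."""
    return [name for energy, name in PARTICLE_TABLE if energy <= might_mev]

print(particles_that_might_happen(150.0))   # ['electron', 'muon', 'charged pion']
```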
This might be a reasonable moment at which to post to see if there are any responses... :)
Meanwhile, back in Western occultism:
The so-called Sephiroth of the Tree of Life might of course mainly be pointing at neural-network nodes, with the whole body of "correspondences" one is urged to come up with being part of how one trains one's brain to assemble some such or similar nodes; they would then arise not from billions of years of Darwinian evolution "learning" their fitness but merely from having been trained into Western occultists' brains.
But if evolution has "learned" anything about any "ontological reality" outside ourselves, one ought perhaps to wonder whether quantum mechanics seeming somehow "intuitively" not quite right is necessarily because evolution never experienced a quantum reality in practical day-to-day or billenium-to-billenium survival.
Even if the many-worlds interpretation is ontologically correct, organisms could have learned not to bother complicating their decisions by continuing to expect worlds already branched away from, if only to sharpen awareness of those that might still be immanent.
Although so far I have been looking at Laws of Form as if proceeding from the top of the Tree of Life, it might be useful to keep in mind that in that Tree's own model the form, the symbol, would presumably be associated with WOULD, the sphere, as it were, of symbols and indications, while the paper and pencil one writes it with when doing it on paper would be within the sphere of THAT.
The Tree diagram itself is of course just such a sign or symbol, so in its material or even imagined form it is itself an artifact of a Sephiroth numbered higher than the whole it sets out to depict.
WOULD being my label for Sephiroth number eight and THAT my label for Sephiroth number ten.
In the spirit of Western occultism, the benefit of the Tree and its associations and correspondences probably ought not to be thought to lie in whether or not it correctly depicts some ontological reality but, rather, in the very exercise of "trying it for yourself", to experience for yourself the degree, if any, to which such exercises benefit you.
Like "going to the gym" exercises, a lot of the point lies in actually doing it not just sitting back considering whether one opines, exercise untried, that it will or that it will not benefit one to try it.
You could, for example, try to come up with better labels than I have; working out which labels might seem better, and exploring the space of which words in a given language might seem more likely to point to key nodes in one's neural net (such as words having multiple meanings or interpretations), can be interesting and possibly even instructive in some sense.
-MarkM- aka Knotwork as in https://MakeMoney.Knotwork.com/