I am happy to announce that the English translation was added and is accessible here.
We use concepts such as matter, energy, force, time and speed to describe the processes occurring around us, and these terms seem to describe and explain everything perfectly. But do they really? Do these familiar terms answer the question of how our world works? Not in my view. The problems started long ago, with the discovery of elementary particles, when it became clear that everything around us was made of uniform micro-pieces of something. What are these pieces? Why are they so weird? Why do they not fit within our usual linear mathematics, where each position of matter can be assigned specific values? Values of time, length, height and so on are, to us, linear and continuous.
But with the discovery of these particles it suddenly became clear that this was a completely different world, uncertain and conditional. A gap opened between concepts. We suddenly had to live in two completely different worlds with different physics and mathematics. The differences are so significant that, to this day, physicists and chemists have been unable to reconcile these principal distinctions. Attempts at reconciliation happen all the time, but the reality is that we live in a world of two physics – micro physics and medium-size physics.
Note that I am talking about “medium-size” physics, the physics of Newton. Why? Because in the middle of the 20th century we suddenly learned a great deal about the world at the scale of galaxies and the universe. And this enormous-sized world also gave us plenty of surprises when scientists again tried to fit their knowledge of mid-size physics onto the real physics of large masses. Black holes, dark matter, galaxies, gravity, time and distance became new problems. The hunt for new physics and mathematics started again.
Scientists keep striving, constantly coming up with new ways of reconciling the differences, but the differences keep piling up, producing ever more inexplicable knowledge and data. They invented the Big Bang theory – a brilliant idea, but what about the dark matter that distorts the results of calculations based on that theory?
In my opinion, we have already accumulated enough new information to stop trying to pull our past material knowledge onto something that, in fact, is not material, and that requires a rethinking of all accepted notions of the world order. The concept of matter can describe only a rather small intermediate layer of sizes among the objects present in the universe.
Let’s turn our attention to one very interesting point related to everything existing in this visible world. Here, everything consists of particles combined into known matter – matter we can only call so at the level of “mid-size” physics. The micro and macro worlds obey their own laws, and these laws we will try to explain in the following sections, abandoning the concept of matter and all other fundamental definitions.
Non-material particles that make up our material world somehow change the laws of our “mid-size” world. Why do micro particles, grouped into large masses, follow the laws of matter? And why do these same micro particles, grouped into enormous masses, stop following the laws of matter?
Alas, it is not possible to answer these fundamental questions using the modern concepts of physics. We need to completely change our point of view on everything that happens around us. Let us try to apply the new knowledge that humanity gained only in the 20th century.
What happened then that was so important for explaining the nature of the universe? A science called Cybernetics emerged, and with it the concept of information. We began to describe virtually everything, including ourselves, as “information”. The concept of information is so universal that it can be used to explain literally everything. So what’s the problem? Why do we still not use information to describe our world? Unfortunately, there is one discrepancy that does not allow us to associate the material world with information directly, without changing those very fundamental concepts. It turned out that we can describe anything using information, but we cannot “materialise” information. We do not yet have such tools, alas. After failing to link information to energy, information itself was no longer considered a fundamental universal quantity. So we still have energy, mass, force, speed and so on, as we are used to. Information now lives in computers, and we continue to live in three worlds, wondering at their properties. Like pagans in science, we worship different gods: the god of physics, the god of chemistry and so on. Yet in reality God is one.
Unfortunately for modern science, new experimental data became so inconsistent with the accepted theories that scientists began to come up with utterly improbable mathematical models. In reality, these models explained only minor things while greatly complicating and confusing the essence, the nature of things. So why, despite the rejection of the idea of a world of information, have I decided to consider this idea yet again? Because what else can describe and explain the processes in the universe so well? And not only there – but more about that later on.
First, let’s turn to the contemporary problems of physics that simply do not fit into the existing theories.
I think that the invisibility of the Dark matter is due to its properties. Our Universe has its limitations, or restricts interactions with certain objects in the Wider Universe (I introduce a new term because the concept of a single Universe is not enough). Just as with Black holes, the Universe, because of some restrictions, cannot (or does not want to) interact with the Dark matter. That phenomenon is possibly not the last object in the Wider Universe that our Universe cannot directly detect.
What, then, could the Dark matter be in reality, and what is its purpose (function, use) in the Wider Universe? Assuming, on the basis of our previous reasoning, that matter is information, the Universe is, in fact, a kind of melting pot for it. In what device known to us is information processed in a similar way? “Isn’t it a computer?”, you may ask. I think yes. It is a computer!
What is there in a computer that could function as a Universe? Let’s try to build up an analogy. Everything that happens in the Universe resembles the process of computation and data handling. Data is kept in the computer’s memory, from where it is extracted, processed and saved again. Some data becomes “litter” which will never be used again but still remains in the computer’s memory. This informational “litter” is kept there only because clearing it out would take considerable time and resources.
Computation is carried out by a processor – the very entity which, in binary code, adds and subtracts blocks of data. The data comes from the computer’s memory and returns there after calculations. The processor “takes” its algorithms from specific information blocks called programs. Sequences of commands for the processor, and the memory addresses of the information blocks needed for computations, are all stored in the programs.
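To make this picture concrete, here is a toy sketch in Python – my own illustration, of course, not a claim about how any real processor (let alone the Computer) is built. A “program” is just a block of data listing operations and memory addresses; the “processor” fetches operands from memory, computes, and saves the results back.

```python
# Toy fetch-and-execute loop: the processor reads algorithms from a program
# block and data from memory, then writes results back into memory.

memory = {"a": 6, "b": 7, "result": 0}

# A program is itself just stored data: (operation, source addresses, target address)
program = [
    ("add", ("a", "b"), "result"),       # result = a + b
    ("sub", ("result", "a"), "result"),  # result = result - a
]

def run(program, memory):
    for op, sources, target in program:
        x, y = (memory[s] for s in sources)      # fetch operands from memory
        memory[target] = x + y if op == "add" else x - y  # compute
    return memory                                 # results saved back

run(program, memory)
print(memory["result"])  # (6 + 7) - 6 = 7
```

The point of the sketch is only that commands and their memory addresses live in the program block itself, exactly as described above.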
Rather conditionally, the programs can be put into three groups:
– programs of the 1st level – the core ones, with full access to the Processor and the whole memory;
– programs of the 2nd level – service ones, with broad but not complete access to resources;
– programs of the 3rd level – ordinary ones, with access to the open areas of memory and very limited use of the Processor.
Having sorted the programs into categories, we can suppose that our Universal Computer uses these programs in different ways. Because of these specific limitations, some programs cannot access certain algorithms and calculations. It is likely that we belong to the programs of the third level, which have access to most open areas of the computer’s memory but very limited use of the processor and the Computer’s resources. It follows that our vision of the Universe-Computer is much distorted, and the reality is different. When observing outer space, we can only see the processes of calculation “permitted” to us by the Computer. And here is the key to solving the puzzle of the world order, and the way to reconcile the contradictions in our knowledge of the Universe’s structure.
We perceive the Universe like the blind or the deaf. Some of our senses work, while others are deliberately blocked: it is none of our business. But we, humankind, will fight for our right to break into the 2nd-level programs! Of course, I am dreaming. But who knows…
Now let’s move on. The Computer is more complicated than I have described, so the search for analogies goes on. Our access to the Universe’s memory is also very restricted, so the Dark matter could be kept nearby, on the Computer’s hard disk – but that section is simply closed to 3rd-level programs. Now imagine that the 1st- or 2nd-level programs are trying to optimise, from the Computer’s point of view, the way we 3rd-level ones are being run. Would we notice? Definitely not directly. In the case of the Dark matter, we have still noticed the anomalies indirectly – and only because we managed to create an inner model (algorithm) of development whose results were distorted by the higher-level programs.
Let’s carry on studying the Computer’s memory. Is there anything else of interest, something that influences us? Memory is characterised by size, speed and metrics. We’ll skip the latter two properties for now and take a close look at size. Could the Computer’s memory be unlimited? Unlikely. Such a limitation could restrict the Computer’s efficiency and productivity, and that is no good; the Computer should have a mechanism for dealing with it. What do humans do to solve this problem in man-made computers? Observing and copying nature, they compress, or optimise, data. The files in our man-made computers are “squeezed” in size by special archiving programs. Using certain algorithms, one can save a great amount of space on a hard disk or in another type of memory.

Supposedly, the Computer uses the same approach. A special program for optimising data storage is running there, and we, the humble 3rd-level ones, are pressed big time. Now let’s imagine how that pressure might affect our poor selves. What force are we constantly fighting on Earth? The force of gravity – gravitation? I think this is likely. Eh? This is a really interesting discovery! Gravitation is simply the result of a higher-level program running to compress data. It all seems logical: the work of the program itself is imperceptible to us, but we do feel the result – a constant pressing down. Curiouser and curiouser. We are files “packed” into the Earthly archive, and, together with other files, we crowd together on this Earth. What is interesting is the structure of this Earthly archive: simple files of a similar kind are packed deeper inside, closer to the centre.
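The observation that “simple files of a similar kind” pack down much further can be seen in any ordinary archiver. A small sketch (again only my illustration of the analogy, using Python’s standard zlib): uniform, repetitive data compresses to almost nothing, while varied data barely compresses at all – so, in the essay’s picture, the simple data would sink deeper into the archive.

```python
# Compare how well repetitive vs. varied data compresses with a standard
# archiving algorithm (DEFLATE, via the zlib module).
import random
import zlib

random.seed(0)
simple = b"AB" * 5000                                      # 10,000 bytes, highly repetitive
varied = bytes(random.randrange(256) for _ in range(10_000))  # 10,000 pseudo-random bytes

packed_simple = zlib.compress(simple)
packed_varied = zlib.compress(varied)

# The repetitive "file" shrinks to a few dozen bytes; the varied one hardly shrinks.
print(len(simple), "->", len(packed_simple))
print(len(varied), "->", len(packed_varied))
```

Nothing here proves anything about gravitation, naturally; it only shows the property of archiving programs that the analogy leans on.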
It follows that at the centre of gravitation of the Universe’s archives lie the files, or data blocks, that would need more time for extraction by the processor. And this is the second interesting conclusion from comparing Earthly computers with the Universal one. That property of archives could help us understand “distance” and determine the meaning of that dimension. Distance, then, is linked to the time the Processor needs to switch from processing one data block (archive) to another. Because the processor’s work is controlled by programs, it cannot immediately stop one task and start another without somehow saving the results. So distance, as we perceive it, is just our estimate of the Processor’s real ability to handle one block of data a certain time after completing the previous task. The existence of that time delay does not mean that the Computer processes data blocks sequentially. It simply means that if it wanted to do so, it would need a certain amount of time. There are plenty of nested and inter-connected data blocks (arrays) in the Universe, and that is why the assessment of distance is so complicated and so distorted by their inter-connections.
While further developing the concept of “distance”, one can make another observation about the computer analogy. Our assessment of distance is based on some information that the Computer transmits freely. What is that information, and what is its equivalent in a man-made computer? How does our computer let running programs know the order of data processing and when they should act?
So, from examining distance we move smoothly on to understanding the forces of nature.
In a man-made computer, the information about when, and which, programs and devices should start working is transmitted via special control lines (a bus). Certain types of programs and devices have access to this information. Not all of them can actively use these lines by transmitting their own control commands – usually only the programs of the 1st and 2nd levels can, while the 3rd-level ones not always can. All types of programs report their status to the processor via these command lines, though not necessarily using all of them.
If we now apply this reasoning to the Universal Computer, it appears that it is through these channels that we receive the information about how other programs are positioned in relation to ourselves. This is where the concept of distance emerges. Using the continuous stream of control data, we determine not our conventional material distance but the time required for the Processor to process certain data. It is not even time that we are counting, but the quantity of processor clock cycles separating the computing tasks. Enormous arrays of data are slowly “digested” by the Computer, creating the illusion of a huge space around us. The quantity of these arrays – processed by the Computer in parallel, or conditionally in parallel – is itself colossal. This complicated simultaneous computing gives us the perception of three-dimensional space. We ourselves are part of a multi-level nested array that has its own position among the current computing tasks. This very uniqueness allows us to “see” the rest of the arrays in relation to ourselves, creating three-dimensional space.
While continuing to examine the Universal control channels, we can also notice a difference from a man-made computer. Of course, the computer we have created is much more basic; one or two control lines is all it has. The Universal Computer is by far more complicated. Let us look at the universal forces:
– the force of gravity;
– the strong and weak nuclear forces;
– the electric force;
– the magnetic force;
– light (electromagnetic waves).
We shall exclude the force of gravity from the list of potential candidates for control channels, as its nature, IMHO, is linked to the necessity of compacting data, not to controlling the Computer. The same goes for the nuclear forces; we will discuss their properties later.
Three forces remain as candidates for independent control lines. Here I should note that I have broken the familiar electromagnetic force into two forces, for one simple reason: not all programs are capable of interacting with both. That is, it may well be that a magnetic field has no influence on an object (a program) while an electric one does. The selective interaction of these three forces with programs supports the existence of three separate control lines.
The control lines are not directly involved in the Processor’s work (computing data); they only transmit executive commands. This indirect connection with the Processor explains the absence of mass in the forces of nature. The data arrays themselves do have mass, since a certain amount of processor time is required to process the information embedded in them. That time differs with the internal structure of arrays of the same size: some arrays are “dense”, with highly compressed mass; others are “light”, with low compressed mass. These differences in structure influence mass. The forces of nature (the control lines), by contrast, could have a slowing-down effect on just a few information bits – because the Computer’s speed is limited, and the continuous processing of those bits would suffer if the control commands were delayed. Currently it is not possible to prove or refute this assumption; new knowledge is needed.
Now, why do we distinguish the forces of nature? The division happens according to their interaction with different types of data and programs. Some programs have access to certain control lines, others do not. In my understanding, the differences are caused by two major factors: the data structure, and the program restrictions laid down by programs of the 1st and 2nd levels. Possibly it happens this way: one specific program structure is set, by default, to be controlled by light. But because of the work of a higher-level third-party program, that program’s access to the control line is limited, and it does not “see” all the commands transmitted through the line – the higher program filters the line’s signals. In the same way, we are only able to see within a certain spectrum of light (colour). That is, we do have access to the line, but only limited access. In order to “see” within a wider spectrum, we need to use other programs – devices that do not have such limitations. These devices convert the invisible commands into a form of control accessible to us. And this is how we take over the World (the Computer), conquering other programs. Alternatively, a program’s internal structure may in principle limit its access to some control lines – because, for example, certain computing tasks run faster that way. Why use the full capacity of the Computer when a task is simple and very specific? The Computer must be rational; otherwise everything would go very slowly.
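The filtering idea above can be sketched in a few lines. Everything in this snippet – the wavelengths, the command names, the band limits – is my own hypothetical illustration: a control line carries commands over a wide “spectrum”, but a 3rd-level program is only permitted to see a narrow band of it, just as our eyes see only visible light.

```python
# A control line carrying commands at different "wavelengths" (hypothetical
# values, in nanometres), and a filter imposed by a higher-level program.

line = [
    (120, "command-0"),   # ultraviolet: invisible to us
    (380, "command-1"),   # edge of the visible band
    (550, "command-2"),   # green: visible
    (700, "command-3"),   # edge of the visible band
    (950, "command-4"),   # infrared: invisible
    (1500, "command-5"),  # far infrared: invisible
]

VISIBLE = (380, 700)  # the narrow band our program is allowed to "see"

def visible_commands(line, band):
    low, high = band
    # The higher-level filter: only commands inside the permitted band pass.
    return [cmd for wavelength, cmd in line if low <= wavelength <= high]

print(visible_commands(line, VISIBLE))  # only commands 1, 2 and 3 get through
```

A “device without such limitations” would simply call the same function with a wider band – converting invisible commands into a form accessible to us.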
Having dealt with the Computer’s control, we can now try to describe the properties of the data arrays stored in its memory.
What happens to data actively handled by the processor? And what happens to data that is simply stored – or even abandoned? Probably the local activity of the processor should somehow be communicated to other programs for them to work properly. The programs must know whether the processor is idle, partially engaged, or very busy. Why bother your boss when he is busy with something? Not so smart… That is why some sort of control mechanism is needed to constantly inform the programs of the processor’s state. How this mechanism works is not yet clear to me. Is there an extra control line on which this information is transmitted, or do the programs calculate the activity from information taken from the already known control lines? Currently the second option seems more likely to me. It could also be that a special higher-ranking program provides the information via the standard control lines.
Why is it important to know about the processor’s activity? At first glance, there is no particular need. But in fact this information is very important, as it helps programs avoid being stopped, damaged or deleted. High processor activity while handling the compressed file where a program is located can lead to that program being damaged or deleted. So what to do? Run – if you can. On the other hand, the processor’s activity can be greatly reduced; the archive could even be neglected in memory for a long time, or abandoned altogether. What should the program in such an archive do? Run away as well; otherwise a virtual death from lack of activity awaits. You can scream your heart out, but the processor won’t hear you – it has other tasks to do.
So it seems that a program needs a certain level of processor activity. A bit less activity – and you have lost contact with the processor (you are ignored); a bit more – and your program code is damaged (destruction). Clearly, most third-level programs are forced to humbly await their fate, or their ability to escape is very limited. Here, perhaps, lies the borderline between live and dead matter as we understand it. If a program is in principle unable to adapt to the processor’s activity, it is dead matter. But something that is “kicking” can probably be referred to as live matter.
And now let’s turn to temperature. What do you think I have just described? Temperature, of course – the one that tells us: “don’t warm your hand in the oven, you idiot”, or “put the mittens on, otherwise you’ll get frostbite”. Temperature is information about the processor’s activity linked to a specific data array (file). It is still not quite clear to me how the information about local temperature is distributed. Most likely, the programs themselves calculate the temperature value from the Processor’s activity transmitted through the control lines. The forces of nature (the control lines) allow other programs to predict their future condition based on the local activity of the Processor.
As mentioned earlier, not all programs are able to read in full the information transmitted through the control lines. So the data about local temperature in the Universe is usually limited for most third-level programs.
Perhaps this is the most complicated part of my treatise. It concerns the way the Processor makes its calculations in the Universe.
The huge data arrays “digested” by the Processor force the Programmer (maybe that is God) to be very economical with the Processor’s resources. Just as we humans combine bits of information in our computers into bytes, the Computer groups information into blocks in order to process and save data. By the way, we know part of those blocks perfectly well: they are in the Periodic Table of the chemical elements. Yes, these are, at the very least, the molecules of matter. The Computer uses them, just like information bytes, for optimised computing. It is much faster to take a molecule as an information byte and process it without thinking about its contents.
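The gain from grouping is easy to show in miniature – a sketch of my own, not a claim about the real Computer. Eight bits handled one by one force the processor to “think about the contents”; the same bits packed once into a byte can then be moved around as a single opaque unit.

```python
# The same information handled bit by bit, versus packed into one byte.

bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Bit by bit: the processor must walk through every bit individually.
value_slow = 0
for b in bits:
    value_slow = (value_slow << 1) | b  # shift in one bit at a time

# Grouped: pack the bits once, then treat the byte as an opaque unit.
value_fast = int("".join(map(str, bits)), 2)

print(value_slow, value_fast)  # both 178 -- same information, one unit
```

Once packed, the byte is copied, stored and compared in a single operation; nothing inspects its internal bits – which is exactly the essay’s point about molecules.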
Molecules are the most researched information bytes of the Computer. They are made of atoms, which in their turn are built of other, smaller elementary particles. That is a very complicated data structure, and we do not yet know what the Computer’s real bit is. Good luck to our nuclear physicists in their search for the indivisible particle of our world!
Of course, this bit would be a bit only for our Computer. For the Computer adjoining us at the lower level, it would be a Black hole ready to explode and disintegrate the data of that universe.
It is this grouping of bits into bytes that creates the strong nuclear forces. This force holds atoms together to form a molecule, for example. While the Computer “thinks” it is dealing with an information byte, it is very difficult to break that byte apart: the Computer has to be “persuaded” that it is not a byte but a group of bits, and a special effort, or energy, is needed for that.
Like a man-made computer, the global Computer has its limitations. It cannot complete all the necessary tasks instantaneously; this happens step by step, with a certain delay that determines timing in the Computer. The time delay really exists, since all running Computer programs detect it and use it in completing their tasks and making forecasts.
We determined long ago that nothing in the Universe can move faster than 300,000 km/sec. This is the speed of light, or the speed of one of the control lines. Electromagnetic waves also spread at this speed, but they are more affected by the density of the data arrays they control. It turns out that the fastest operating speed of our Computer is determined by the shortest time needed to change the state of the control lines.
Of course, measuring the Computer’s speed in km/sec is not appropriate, because these are all conditional concepts invented by us to describe our perception of the world. Objectively, the processing speed should be measured by the number of information bits processed by the Computer in one step of computation. In one such step the Computer can process a huge number of bits. How many, we do not yet know, because we do not know the Computer’s structure. We will discuss this topic later.
And here is one more interesting point related to processing speed. The speed of commands in the control lines can be affected by the density of data arrays: the denser they are, the slower the commands are in reaching their destinations. Eventually, at a certain density, a control command cannot get into the array at all and interacts only with its outer area. I do not know exactly why this happens, but I assume it is due to the work of the global backup program (gravity). It is likely that a compressed program cannot be run at all: to start it, the Computer decompresses it and only then allows it to interact with the control lines. The denser and bigger a file is, the longer the delay and, therefore, the stronger the apparent slowing of the control signals – the Processor is simply waiting for the uncompressed program to respond to a control command. At a certain density of compressed data arrays, the Processor simply ignores their most densely packed parts and interacts only with data from the outer area. The response from this area comes faster, so we can assume that certain maximum waiting times are set in the control lines. If the response from a program is not received within the specified interval, the next command is sent to a different program, and so on. That is why too densely compressed files are not accessible to the control commands.
By the way, a similar effect occurs if the size of the bytes (molecules) or small data blocks of files does not match the properties, or operating mode, of the control channel. I mean here the frequency of the light or electromagnetic wave; this frequency is, in effect, the operating mode of the control lines. A high-frequency mode is designed to work with micro arrays, and a low-frequency mode with large arrays. What is the difference? It is simply the minimum waiting time for a program to respond. Large arrays cannot, in principle, be uncompressed quickly; we should not expect a prompt response from them, so other tasks can be processed while waiting. Micro arrays, on the other hand, can be unpacked very quickly, so the waiting time can be very short, and large files can be ignored altogether.
It follows that the Processor, or a program, by setting short or long waiting limits for a reply from a respondent, can filter out only those data arrays that match a certain “weight category”. This approach saves the Processor’s time during computations.
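A toy simulation of this filtering (all the numbers and names are my own assumptions): each archive needs a certain unpacking time before it can answer a control command, and the line only waits within a set window, so only archives in the right “weight category” ever respond.

```python
# Archives with different unpacking delays (arbitrary units), filtered by
# the waiting window of a control command.

archives = {"micro": 1, "medium": 40, "giant": 5000}  # name -> unpacking time

def respondents(archives, min_wait, max_wait):
    # Only archives whose unpacking time fits the waiting window can reply
    # before the line moves on to the next command.
    return [name for name, unpack in archives.items()
            if min_wait <= unpack <= max_wait]

print(respondents(archives, 0, 10))    # "high-frequency" mode: only micro arrays reply
print(respondents(archives, 10, 100))  # "low-frequency" mode: only medium arrays reply
# the giant archive never fits any of these windows -- too densely packed
```

The two calls correspond to the high- and low-frequency operating modes described above; the densest archive is simply never heard from.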
In addition, I want to highlight that the control lines operate in a rather busy mode: there is a continuous stream of commands that does not stop to wait for any response. This is why we can see combined white light and not only changing coloured light. The control lines are packed with commands so densely in order, on the one hand, not to lose the responses and, on the other, not to wait too long. It is likely that each command in a control line carries information about an addressee, the waiting time for a response, and the command itself. As a result, only the addressee responds to the request, and the addressee knows exactly how, and to whom, to reply.
Requests could actually be multicast or unaddressed, with the addressees simply having to meet certain parameters. This is like a torch lighting up: something is clearly visible (response received), and something is not seen at all (no response).
In all my work so far I have omitted such an important quantity as energy. I did so deliberately, as it would not have been possible to explain its meaning without the preceding comments. Hopefully, I will make up for it now.
Let’s imagine the various possible operating modes of the Computer. One extreme option: the Processor is busy with just one single task, without being distracted by anything else. What does that mean in practice? It means that the speed of completing that operation would be maximal, i.e. the computation would run close to the speed of light, while the time used for completing the operation would be minimal.
The other extreme option is when the Processor, in effect, ignores a task, completing at best a couple of operations a year, or even fewer.
As a result, we can talk about the specific engagement of the Processor with a specific task. In reality, the Processor conducts parallel computations while constantly shifting between various tasks, so the share of time spent servicing any one task is much less than 100%. I suppose that this share of the Processor’s engagement in one specific task is that very energy we have been speaking of. It is for the Processor’s attention that all the Computer’s programs compete. The more attention the Processor pays to you, the more advantage you have over other programs. You just have to be able to – and to want to.
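The “energy as attention share” idea maps directly onto weighted scheduling, familiar from man-made operating systems. A small sketch (the task names and weights are invented for illustration): the Processor cycles over tasks with different weights, and each task’s “energy” is simply the fraction of cycles it receives.

```python
# Weighted round-robin scheduling: a task's "energy" is its share of
# processor cycles.
from collections import Counter

tasks = {"star": 6, "planet": 3, "dust": 1}  # hypothetical attention weights

# One scheduling round serves each task in proportion to its weight;
# repeat the round many times to simulate a long stretch of computing.
schedule = [name for name, weight in tasks.items() for _ in range(weight)]
cycles = Counter(schedule * 100)  # 1000 simulated processor cycles

total = sum(cycles.values())
for name in tasks:
    print(name, cycles[name] / total)  # star 0.6, planet 0.3, dust 0.1
```

The “star” gets 60% of the Processor’s attention and so, in the essay’s terms, carries the most energy and the highest temperature; the “dust” is nearly ignored.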
It is the Processor’s local specific activity, transmitted via the control lines, that characterises computing energy. Where the Processor “loves” something a lot, there will be high temperature and energy.
At first glance, our dealings with energy could end here. But only at first glance. In reality, we forget that all data is optimised and kept in archives. What does that mean in practice? It means that the delay in unpacking arrays leads to part of a bulky packed program ceasing to respond to control commands even while the Processor’s activity is very high. In order to accelerate its work, the Processor forcibly destroys and simplifies packed data. It is also important that the Computer is, in principle, not capable of working with overly complicated, colossal archives. When the size of an archive starts to exceed a certain figure (a dwarf star), the Processor starts to simplify the packed contents. Probably the Processor simply has no option – it lacks the capacity – and some upper-level program joins in the destruction of everything that is too complicated. The data inside is broken down to the level necessary just to sustain the addressing of that archive in memory. The archive’s temperature becomes very high due to the excessive activity of the Processor.
It is my understanding that overly large archives cannot, in principle, participate in valid Computer computations: for that to happen, the Processor would have to concentrate on too limited a number of tasks, and that is prohibited. And this is where an astonishing thing happens: colossal archives actually become waste that has to be turned over and over again so as not to be completely lost. We see two competing forces – a program trying to make calculations and a program limiting (directing) the Processor’s activity. It is like a balanced scale: neither up nor down. As a result, depending on the archive’s size, we observe different types of cosmic zombie: from “warm” red dwarfs to all sorts of superluminous quasars and giant stars. The size of the latter is so huge that the archive in fact consists of bytes in the process of destruction (molecules, atoms, etc.). It is the program limiting the Processor’s activity that destroys these super archives. The Processor obviously cannot manage such an archive, and whole pieces burst out of it in the form of explosions or elementary particles (bytes and bits).
And here I would add one more process, which we have temporarily forgotten about: gravitation, or the compulsory archiving of linked (nearby) data. Unfortunately for supermassive archives, the compression utility does not "sleep"; it keeps adding new data from nearby archives that happen to stray into the danger zone. The super-archive keeps growing until the simplifying program starts breaking its contents down into information bits. And here it turns out that the binary code 1111111111111111111 means, in fact, just 1. The problem is that the Processor has to ignore that 1. Under the applied limitations there is nothing left for it to compute, and the super-1 becomes a Black hole.
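The image of a long string of identical 1s collapsing into "just 1" resembles run-length encoding, the simplest lossless compression scheme. A minimal sketch (the function name `rle_compress` is my own illustration, not anything from the text):

```python
def rle_compress(bits: str) -> list[tuple[str, int]]:
    """Collapse runs of identical symbols into (symbol, count) pairs."""
    runs: list[tuple[str, int]] = []
    for b in bits:
        if runs and runs[-1][0] == b:
            # Same symbol as the previous one: extend the current run.
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            # New symbol: start a new run.
            runs.append((b, 1))
    return runs

# A long, uniform "archive" collapses to a single run with no
# internal structure left -- effectively "just 1".
print(rle_compress("1" * 19))  # [('1', 19)]
```

In this picture, an archive that becomes perfectly uniform compresses to almost nothing, which is one way to read the essay's claim that there is nothing left for the Processor to compute.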
The fourth limitation that could influence the appearance of Black holes is, of course, the maximal size of a file (archive) that the Computer can work with. I described those limitations earlier, when discussing gravitation. It is hard to say what particular influence that complex activity has on super-archives of data. Black holes appear under the influence of all the factors described above; perhaps in each single case the decisive one is whichever factor takes an archive out of the Computer's area of access.
In the next Chapter, I will examine possible explanations of where Black holes and "broken" information bits may disappear to.
All the previous discussion leads me to the conclusion that our Universal Computer is only a part of a grander computing structure that solves even bigger tasks.
The presence of "invisible" Black holes and breakable bytes of information (molecules, atoms, etc.) is evidence of that. All of these disappear from our Universe to places unknown. Obviously, macro- and micro-bits participate in some further computing process we know nothing at all about – not a scintilla! Our Computer stops following that process, but it obviously continues in these invisible parallel worlds.
The Computer is akin to a calculator that can compute, say, numbers of up to ten digits. We have both a limit on calculating capacity and a limit on the size of the data arrays that can be processed. But imagine that there are several such calculators working together, so that when the first one reaches the biggest number it can calculate, it passes part of its functions over to the next calculator. The second calculator uses these partial calculations in its own tasks. When the second calculator reaches its limits, the third calculator joins in. And the process can go back and forth: the second calculator can arrive at too small a number and, rather than rounding it down to zero, re-engage the first one so that the first one completes the computation.
Such a chain can be very long – we can only guess how long. Our Computer is probably a part of such a chain, positioned somewhere in between. Being Computer number N, it works within the chain on ever larger arrays of data, receiving portions of 0s and 1s (bits) from Computer number N-1. And when we run out of calculating capacity, a Black hole (macro-unit) is formed. Here Computer number N+1 joins in, and so on.
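The chained-calculator idea described above is essentially cascading overflow: each unit has a fixed capacity and hands any excess to the next unit up. A minimal sketch of that mechanism (the `Calculator` class and its `limit` are my own illustration of the essay's analogy, not a claim about real physics):

```python
class Calculator:
    """A fixed-capacity calculator: when a value would exceed its limit,
    the overflow is handed to the next calculator in the chain."""

    def __init__(self, limit: int, next_calc: "Calculator | None" = None):
        self.limit = limit
        self.next_calc = next_calc
        self.value = 0

    def add(self, amount: int) -> None:
        total = self.value + amount
        if total > self.limit and self.next_calc is not None:
            # Overflow: pass the excess up the chain,
            # like Computer N handing work to Computer N+1.
            self.next_calc.add(total - self.limit)
            self.value = self.limit
        else:
            self.value = total

# A chain of three calculators, each with capacity 10:
c3 = Calculator(10)
c2 = Calculator(10, c3)
c1 = Calculator(10, c2)
c1.add(25)
print(c1.value, c2.value, c3.value)  # 10 10 5
```

Adding 25 to the first calculator fills it to its limit of 10, spills 15 into the second, which in turn fills up and spills 5 into the third.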
The only real proof of the Super Computer's existence is the excessive size of the data arrays. We clearly see only the tip of an iceberg, not the whole. The whole is visible to someone else – someone who controls all these "minor" Computers. I don't know what is happening there or how, but I think it is on that very level that the compressing program (gravitation) runs. Other optimising programs might also run at that level, but that is far from certain. The combined effect of those programs working at different levels could lead to a macro-unit (a Black hole) and a micro-bit (micro-particle) being formed under different conditions. It is not at all necessary for one Black hole's size to equal another's. They can differ; what matters is that they are not "lost" at the Super Computer level. Increasing the density of data in the compressed files of our Computer does not at all lead to loss of data in the global data array. At a certain point in the growth of our data file, the excess complexity becomes unnecessary, and the file is simplified as it approaches the Black hole state.
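The claim that denser packing need not lose any data at the global level is the defining property of lossless compression. A minimal sketch using Python's standard `zlib` module (the repetitive byte string standing in for an "archive" is my own example):

```python
import zlib

# A highly repetitive "archive" of data.
archive = b"0101" * 10_000  # 40,000 bytes

# Compression shrinks the representation dramatically...
packed = zlib.compress(archive)
print(len(archive), len(packed))

# ...but decompression recovers every byte: nothing is lost globally.
assert zlib.decompress(packed) == archive
```

The representation gets denser, yet the full content remains recoverable, which is the sense in which compression on one level need not lose anything at a higher level.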
The same happens on the micro-bit level. That is, a bit disappears from our Universe not necessarily under the same conditions, but under different ones. Everything depends on the work of the higher-level programs. It is they that decide which part of a bit can be sent to the level below. What matters is not to lose task continuity. As it turns out, nuclear physicists have "divided" their way into such a particle "zoo" that nobody has been able to arrange it into a complete system. I suppose the problem is that what they have discovered are the debris of micro-bits, and not the bits themselves. Micro-bits should, in principle, be infinitely stable within the working area of our Computer, while the debris should disappear and appear only for a very short time. We should look for our micro-bit (the "indivisible" atom) according to these specifications.
Why, then, do physicists "see" the smaller debris of a micro-bit? Very simply: our slow Computer is to blame. It just does not immediately "stitch up" the holes left after a bit's disappearance. In one single computer clock cycle, it is impossible to register the disappearance of information. The control lines also start to slow down, and on top of that the higher Computer's optimising program intervenes. So it turns out that hundreds, if not thousands, of Computer clock cycles are required for the "annihilation" of a micro-bit. This is the visibility of the invisible.
That time-consuming process creates a tunnel effect: some elementary particles (debris) easily penetrate dense data arrays. In reality, of course, they do not move at all; micro-bits merely appear and disappear. That creates an illusion of movement, since the files change even within that short time slot. A good example of such debris is the "elusive" neutrino, which is really a rare fragment of a micro-bit. Catching that particle was extremely difficult.