New Digital World Theory. Part 3

Melting pot of matter (information).

What is there in a computer that could function as a Universe? Let’s try to build up an analogy. Everything that happens in the Universe resembles a process of computation and data handling. Data is kept in the computer’s memory, from where it is extracted, processed and saved back again. Some data becomes “litter” that will never be used again, yet it still remains in the computer’s memory: this informational litter is kept there only because clearing it out would take considerable time and resources.
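
As a hedged aside: in a real man-made computer this “litter” is what programmers call garbage, and its cleanup is indeed postponed because cleanup costs time. A minimal Python sketch (the class and names are invented for illustration):

```python
import gc

gc.disable()  # postpone cleanup: clearing "litter" out takes time and resources

class Block:
    """A toy data block kept in memory."""
    def __init__(self, name):
        self.name = name
        self.partner = None

# Two blocks referencing each other: once we drop our own handles to
# them, they become unreachable "litter", yet they still sit in memory.
a, b = Block("a"), Block("b")
a.partner, b.partner = b, a
del a, b

# Only an explicit cleanup pass reclaims them; until then they linger.
print(gc.collect(), "unreachable objects were still occupying memory")
gc.enable()
```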

The computation itself is carried out by a processor – the very entity which, in binary code, adds and subtracts blocks of data. The data comes from the computer’s memory and returns there after the calculations. The processor “takes” its algorithms from specific information blocks called programs. Sequences of commands for the processor, and the addresses of the information blocks in memory needed for the computations – all are stored in the programs.
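
A minimal sketch of that fetch-and-execute cycle, in Python (the command names and memory layout are invented for illustration):

```python
# A toy processor: it fetches its commands from a "program" block and
# operates on data blocks held in a flat memory, writing results back.
memory = {"x": 7, "y": 5, "result": 0}

# The program stores the sequence of commands and the memory addresses
# of the data blocks needed for the computation.
program = [
    ("ADD", "x", "y", "result"),   # result = x + y
    ("SUB", "result", "y", "x"),   # x = result - y
]

for op, src1, src2, dst in program:
    a, b = memory[src1], memory[src2]
    memory[dst] = a + b if op == "ADD" else a - b

print(memory)  # {'x': 7, 'y': 5, 'result': 12}
```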

Rather loosely, the programs can be divided into three groups (a toy model of these levels is sketched right after the list):

  1. First: computer management programs. They contain the management algorithms and attributes of one particular computer. These programs are usually well protected – both from other programs and from physical faults in the computer’s operation. They are kept in specially allocated memory, and only the manufacturer, a physical user, or another, higher-ranked computer can change that data.
  2. Second: programs at the operating-system level. These are mostly algorithms that make the user’s work with the computer more comfortable – like a language interpreter that frees the conversing parties (a computer and a user) from having to know each other’s language. These programs are kept in general memory but are protected by special programs of the operating system itself. Programs at this level can be damaged or deleted by other programs running under the operating system’s management.
  3. Third: user-level programs. All remaining programs belong to this group. Their data can easily be deleted and is hard to protect from corruption.
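
Here is the promised toy model of those three levels, in Python. Everything in it – the class, the level numbers, the rule that a program may only write memory at its own level or below – is an assumption made for illustration, not a description of any real machine:

```python
class Memory:
    """Toy memory whose regions carry a protection level: 1 is the most
    protected (management programs), 3 the least (user programs)."""

    def __init__(self):
        self.regions = {}  # name -> (protection_level, data)

    def store(self, name, data, region_level, caller_level):
        # A program may only write regions at its own level or below.
        if caller_level > region_level:
            raise PermissionError(
                f"a level-{caller_level} program cannot write "
                f"level-{region_level} memory")
        self.regions[name] = (region_level, data)

mem = Memory()
mem.store("management", b"\x01", region_level=1, caller_level=1)  # allowed
mem.store("user_data", b"abc", region_level=3, caller_level=3)    # allowed
try:
    mem.store("management", b"\x00", region_level=1, caller_level=3)
except PermissionError as err:
    print(err)  # the third-level program is rebuffed
```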

Having sorted the programs into categories, we can suppose that our Universal Computer treats them in correspondingly different ways: because of these restrictions, some programs cannot reach certain algorithms and calculations. It is likely that we belong to the third-level programs, with access only to the most open areas of the Computer’s memory and very limited use of the processor and other resources. It follows that our vision of the Universe-Computer is heavily distorted, and the reality is different. When observing outer space, we can only see the processes of calculation that the Computer “permits” us to see. And here is the key to solving the puzzle of the world order, and a way to reconcile the contradictions in our knowledge of the Universe’s structure.

We perceive the Universe like the blind or the deaf: some of our senses are working, while others are deliberately blocked. It is simply none of our business. But we, humankind, will fight for our right to break into the second-level programs! Of course, I am dreaming. But who knows…

Now let’s move on. The Computer is more complicated than I have described, so the search for analogues goes on. Our access to the Universe’s memory is also very restricted, so Dark Matter could well be kept nearby, on the Computer’s hard disk – it is simply that this section is closed to third-level programs. Now imagine that the first- or second-level programs are trying to optimise, from the Computer’s point of view, the way we third-level ones are being run. Would we notice it somehow? Definitely not directly. Yet in the case of Dark Matter we have noticed the anomalies indirectly – and only because we managed to build an inner model of development (an algorithm) whose predictions are distorted by the higher-level programs.
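
A hedged sketch of that last idea: a third-level program cannot see the optimiser at work, but it can compare its own inner model against what it observes, and the growing residual betrays the hidden hand. All numbers below are invented for illustration:

```python
def inner_model(t):
    """What our own algorithm predicts should happen."""
    return 2.0 * t

def observed(t):
    """What actually happens once a higher-level program has
    'optimised' us (the distortion term is purely illustrative)."""
    return 2.0 * t + 0.3 * t * t

for t in (1, 2, 3):
    anomaly = observed(t) - inner_model(t)
    print(f"t={t}: anomaly = {anomaly:.1f}")
# The growing anomaly is the indirect trace of a program we cannot see.
```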

Let’s carry on studying the Computer’s memory. Is there anything else of interest there, something that influences us? Memory is characterised by size, speed and metrics. We will skip the latter two properties for now and take a close look at size. Could the Computer’s memory be unlimited? Unlikely. A memory limit would restrict the Computer’s efficiency and productivity, and that is no good, so the Computer should have a mechanism for dealing with it. What do humans do to solve this problem in man-made computers? Observing and copying nature, they compress, or optimise, data. The files in our man-made computers are “squeezed” in size by special archiving programs; using certain algorithms, one can save a great amount of space on a hard disk or in another type of memory.

Supposedly, the Computer uses the same approach. A special program for optimising data storage runs there, and we, the humble third-level ones, are pressed big time. Now imagine how that pressure might affect our poor selves. What force are we constantly fighting here on Earth? The force of gravity – gravitation. Likely, eh? This is a really interesting discovery! Gravitation is simply the result of a higher-level program running to compress data. It all seems logical: the work of the program itself is imperceptible to us, but we do feel its result – a constant pressing down. Curiouser and curiouser. We are files “packed” into the Earthly archive, and, together with other files, we crowd together on this Earth. What is interesting is the structure of that archive: simple files of a similar kind are packed deeper inside, closer to the centre.
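
The “squeezing” itself is easy to demonstrate with a real archiving algorithm. A short Python sketch using the standard zlib module (the payload is invented; note how data of a similar kind packs especially tightly):

```python
import zlib

# Ten thousand similar "files" crowded into one archive.
payload = b"simple matter block " * 10_000
packed = zlib.compress(payload, level=9)

print(f"{len(payload):,} bytes squeezed into {len(packed):,} bytes "
      f"({len(payload) // len(packed)}x smaller)")
```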

It follows that the centres of gravitation of the archives in the Universe are the files, or data blocks, that would need the most time for the processor to extract. This is the second interesting conclusion from comparing Earthly computers with the Universal one. That property of archives could help us to understand “distance” and determine the meaning of that dimension. The distance, then, is linked to the time the Processor needs to switch from processing one data block (archive) to another. Because the processor’s work is controlled by programs, it cannot instantly stop one task and start another without somehow saving the results. So the distance, as we perceive it, is just our estimate of the Processor’s real ability to take up one block of data a certain time after completing the previous task. The existence of that delay does not mean that the Computer processes data blocks sequentially; it simply means that if it wanted to do so, it would need a certain amount of time. There are plenty of nested and inter-connected data blocks (arrays) in the Universe, which is why the assessment of distance is so complicated and distorted by those inter-connections.
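
A toy model of that switch-over cost, in Python (everything here – the use of pickle as “saving the results”, the block sizes – is an assumption for illustration only):

```python
import pickle
import time

def switch_cost(current_state, next_state):
    """Time the Processor spends saving one block's results and
    bringing the next block in: our toy measure of 'distance'."""
    t0 = time.perf_counter()
    pickle.dumps(current_state)                 # save the finished task
    pickle.loads(pickle.dumps(next_state))      # load the next block
    return time.perf_counter() - t0

near_block = list(range(1_000))
far_block = list(range(1_000_000))

print(f"'distance' to a small block: {switch_cost(near_block, near_block):.6f} s")
print(f"'distance' to a bulky block: {switch_cost(near_block, far_block):.6f} s")
# The bulkier the blocks involved, the longer the switch --
# the "farther apart" they are.
```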

Developing the “distance” concept further, one can make another observation from the computer analogy. Our assessment of distance is based on some information which the Computer transmits freely. What is that information, and what is its equivalent in a man-made computer? How does our computer let the running programs know the order of data processing, and when they should act?
