Coronavirus COVID-19

Viruses and the Digital World Theory.

I have not written for a long time, as I was busy with other things. But suddenly most of us found ourselves placed in quarantine. I ended up in that situation too, and so I finally got the chance to write something on the now fashionable topic of the coronavirus. Using this example, I want to look at viruses from the point of view of my Digital World Theory.

As we now know from virologists, viruses are not fully fledged living organisms. They are far too small and primitive to exist independently of other organisms. Once outside the biological environment they need, they break down fairly quickly. What are the strengths and weaknesses of such a tiny little program? From here on I will consider the virus from the point of view of my digital theory.

The small program code makes it hard for other programs to detect the virus, especially for large and more complex ones such as a human. This is the virus’s “combat” advantage. It can penetrate the environment it needs and “settle in” there before the infected organism manages to react to the intrusion. On the other hand, the tiny program code is also a weakness, since it severely limits the virus’s ability to adapt to a new environment.

Let us look at the algorithms of the virus and of the organism it infects. By and large, the organism does not notice the intrusion directly; only after noticing that some of its parts are malfunctioning does it start internal “repairs”, fighting not the cause but the consequences of the intrusion. To defend against such threats, the organism keeps special “troops” of virus-like programs that live at the organism’s expense and, unlike the virus, protect it from external micro-threats. These troops are part of the organism’s immune system, and its ability to withstand threats depends on their quality. A large number of aggressive and easily retuned fighters lets the organism deal with viral intrusions quickly. It is also important that the organism, through its immune system, accumulates knowledge about external threats and can tune its fighters to combat a specific virus. As I wrote earlier in my articles on the Digital World Theory, every low-level program “fights” for the resources of the Global Computer. The same happens with viruses. They strive to take as many organisms as possible under their control; at the same time, because of their scarce resources, their capacity for evolution is limited, so the takeover happens through simple multiplication, reproduction. It takes place in the cells the virus breaks into, the way a terrorist forces everyone to the floor and demands that the plane be turned onto the course he needs. In the cell the virus finds all the components it needs for self-replication. These components for assembling a new copy of the virus are micro tool-programs and raw material, pieces of the cell’s own program code. As you understand, cells damaged in this way stop functioning properly, and this the organism itself does notice. I do not want to go into the details of how the organism fights viruses, since I am not competent in that area, and most of you are probably not interested in it anyway. Let us instead look at what happens to the virus in the course of its evolution (reproduction). It will be a good illustration of how my theory works.

A virus, like any program, consists of program code and resources allocated for its work by the computer. Humans read the virus’s program code through its DNA. There is only one difference between the DNA and the allocated resources: the virus cannot erase its own DNA, but it can erase its resources. This is a very important point for understanding how the virus conquers a new habitat, that is, other organisms, and not only those whose access algorithm is already written in its DNA. Because the virus is so small, its DNA describes only one target organism, with minor variations. That is why some viruses are “tuned” to infect only humans, for example. Getting into a “foreign” organism, they simply do not know what to do to enter a cell and reproduce. Unfortunately for us, sometimes, very rarely, a virus manages to pick the access code to a cell in a foreign organism and get inside. This probably happens in a weakened organism, where the immune troops are not combat-ready and the virus has enough time to use the resources allocated by the computer to look for a way into the cell. Why do these resources matter? The virus can store the results of its attempts to break through the cell membrane in its allocated memory, adjusting its action on the membrane and saving the parameters of successful attempts. Eventually one particular virus manages to find a break-in algorithm for the cell, and that algorithm is stored in the virus’s allocated memory. Such a virus gets inside a cell of the “foreign” organism. There it starts to reproduce, copying not just its DNA but also the newly stored penetration algorithm. Naturally, this happens in the resources allocated for each new copy of the virus. As a result, the newly born viruses already know how to conquer the organism that is “foreign” to them. The infection process accelerates, and soon the whole organism is riddled with multiplying viruses and slowly dies. Yes, at this stage the virus is still simply following its main algorithm recorded in its DNA, unaware that the victim organism will soon die and that its opportunity to spread will end with that “foreign” organism. I suppose that in the overwhelming majority of these rare mutation cases the whole colony of “advanced” viruses perishes together with the new cell-penetration algorithm.
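
To make the analogy concrete, here is a minimal Python sketch of how I picture it. Everything in it is invented for illustration: the “DNA” is just a read-only record, the cell “lock” is a number, and the learned “entry_key” stands in for the stored break-in parameters; it is not biology and not a model of any real virus.

```python
import random
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DNA:
    """Read-only code: the one host this virus is 'tuned' to."""
    target_host: str

@dataclass
class Virus:
    dna: DNA                                     # the virus cannot erase this
    memory: dict = field(default_factory=dict)   # allocated resources: writable, erasable

    def try_to_enter(self, cell_lock: int, attempts: int = 1000) -> bool:
        # Reuse a break-in key already stored in allocated memory, if any.
        if self.memory.get("entry_key") == cell_lock:
            return True
        # Otherwise probe the membrane and remember a successful attempt.
        for _ in range(attempts):
            guess = random.randrange(10_000)
            if guess == cell_lock:
                self.memory["entry_key"] = guess
                return True
        return False

    def replicate(self) -> "Virus":
        # Offspring receive a copy of the DNA *and* of the learned memory.
        return Virus(dna=self.dna, memory=dict(self.memory))

parent = Virus(DNA(target_host="original host"))
foreign_cell_lock = 4242                         # a cell of a 'foreign' organism

# The break-in succeeds only rarely, which mirrors how rarely a virus jumps hosts.
if parent.try_to_enter(foreign_cell_lock):
    child = parent.replicate()
    print(child.memory)                          # {'entry_key': 4242}: offspring inherit the trick
else:
    print("this lineage never cracked the new host and dies out")
```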

And so the new mutation died without becoming an epidemic. There are probably thousands of such rare mutations before, at some fine moment, one phenomenally lucky virus manages to mutate twice: first by getting into a cell of the foreign host, and then, through the secretions of the infected organism, into another organism of the same kind, where, having “slipped past” the immune troops, it infects a cell once again. Naturally, this whole complex infection algorithm is recorded in the virus’s allocated memory. All copies of this super-successful virus receive the spreading algorithm in the memory the computer allocates for them. It is at this stage that the twice-mutated virus learns that getting into a cell is not nearly enough; the main thing is to persist in the infected organism so as to get more chances of infecting a new foreign body. Now the viruses begin to act more and more carefully, not killing the organism at once but stretching out the infection while waiting for a good moment to pass their code on to another organism of the same kind. Each such piece of new knowledge is saved in the allocated memory, making the virus ever more “toothless” and overloading the allocated computer resources. And this is where a very interesting process takes place, one that explains the sudden disappearance of the alien virus.

Having already become an epidemic and infected thousands upon thousands of foreign organisms, the virus suddenly discovers that its allocated resources have run out. There is nowhere left to store all the accumulated experience. Help! What to do??? The virus has only two options: to stay in its latest form until the resources overflow, or to wipe all the information accumulated during the mutation. The first option means the virus completely loses its ability to adapt and, in the end, becomes easy prey for the immune troops, even weakened ones. The second option instantly deprives the virus of its ability to infect and reproduce in the foreign organism. That is, in effect, mass suicide. Which suicide mechanism the virus actually chooses I do not know, but I think it is precisely this mechanism that leads to the sudden end of the epidemic among foreign organisms. Most likely the first self-destruction option is at work: the “blinded” virus is destroyed more effectively by the organism, and the process accelerates, because the organism, unlike the virus, has enormous resources and keeps learning, destroying the virus ever more efficiently.

It is this very process, and not the infection of 70% of the population, as virologists say, that ends an epidemic. Of course, immunisation of the population greatly slows down the spread, but the internal weakening of the virus plays just as large a role.

Why did the brothers of the COVID-19 coronavirus suddenly disappear? Where are the SARS and MERS viruses now? Gone, and gone completely. Most likely the same fate awaits COVID-19, unless something entirely unique happens: external damage to the virus so unusual that the algorithm from the allocated memory area gets rewritten into the DNA. Then the virus would keep adapting while retaining the ability to reproduce in the foreign organism. Strictly speaking, it would already be a different virus, a virus “tuned” to a completely different organism. Such a virus would most likely even lose the ability to return to the organism it came from. A new virus would appear, one that periodically strikes one and the same organism.

That, most likely, is how the evolution of virus-microprograms happens. Thank you for your attention, and I wish all of us a speedy end to this forced quarantine.

Igor Voroshilov

11 April 2020

New Digital World Theory. Part 1.

What is matter? Does it matter?

We use such concepts as matter, energy, force, time, speed and others to describe the processes occurring around us. And these terms seem to describe and explain all of it perfectly. But do they really? Do these familiar terms answer the question of how our world works? No, not in my view. And the problems started long ago, ever since the discovery of elementary particles, when it became clear that everything around us was made of uniform micro-pieces of something. What are these pieces? Why are they so weird? Why do they not fit within our usual linear mathematics, where each position of matter can be assigned specific values? These values of time, length, height, etc. are, to us, linear and continuous.

But suddenly, with the discovery of these particles, it became clear that this was a completely different world, uncertain and conditional. A gap opened up between the concepts. Suddenly we had to live in two completely different worlds with different physics and mathematics. The differences are so significant that, even now, scientists, physicists and chemists are unable to reconcile these principal distinctions. Attempts at reconciliation are made all the time, but the reality is that we are living in a world of two physics: micro physics and medium-size physics.

Note that I am talking about “medium-size” physics, the physics of Newton. Why? Because in the middle of the 20th century we suddenly learned a lot about the world at the scale of galaxies and the universe. And this enormous-sized world also gave us plenty of surprises, when scientists again tried to fit their knowledge of mid-size physics onto the real physics of large masses. Black holes, dark matter, galaxies, gravity, time and distance became new problems. The hunt for new physics and mathematics started again.

Scientists keep striving, constantly coming up with new ways of reconciling the differences, but the differences keep piling up, resulting in even more inexplicable knowledge and data. They invented the Big Bang theory. A brilliant idea, but what about the dark matter which distorts the results of calculations based on that theory?

In my opinion, we have already accumulated enough new information to stop trying to pull our past material knowledge onto something that, in fact, is not material and requires a rethinking of all accepted notions of the world order. The concept of matter can only describe a rather small in-between layer of sizes of the objects present in the universe.

Let’s turn our attention to one very interesting point related to everything existing in this visible world. Here, everything consists of particles combined into known matter – the matter we can only call so at the level of “mid-size” physics. Micro and macro worlds obey their own laws, and these laws we’ll try to explain in the following sections, abandoning the concept of matter and all other fundamental definitions.

New Digital World Theory. Part 2

What is around us?

The non-material particles that make up our material world somehow change the laws of our “mid-size” world. Why do micro particles, grouped into large masses, follow the laws of matter? And why do these same micro particles, grouped into enormous masses, stop following the laws of matter?

Alas, it is not possible to answer these fundamental questions using the modern concepts of physics. Here one needs to completely change one’s point of view on everything that happens around us. Let us try to apply the new knowledge that humanity gained only in the 20th century.

What happened then that was of such importance in explaining the nature of the universe? A science called Cybernetics emerged, and with it came the concept of information. We began to describe virtually everything, including ourselves, as “information”. The concept of information is so universal that it can be used to explain literally everything. So what is the problem? Why do we still not use information to describe our world? Unfortunately, there is one discrepancy which does not allow us to associate the material world with information directly, without changing those very fundamental concepts. It turned out that we can describe anything using information, but we cannot “materialise” information. We do not yet have such tools, alas. After the failure to link information to energy, information itself was no longer considered a fundamental universal quantity. For now there are still energy, mass, force, speed and so on, as we are used to. Information lives in computers now, and we continue to live in three worlds, wondering about their properties. Like pagans, in science we worship different gods: the god of physics, the god of chemistry and so on. Yet in reality God is one.

Unfortunately for modern science, new experimental data became so inconsistent with the accepted theories that scientists began to come up with utterly improbable mathematical models. In reality, these models explained only minor things, while greatly complicating and confusing the essence, the nature of things. So why, despite the rejection of the idea of a world of information, have I decided to consider this idea yet again? Because what else can describe and explain the processes in the universe so well? And not only there; but more about that later on.

First let’s turn to the contemporary problems of physics, which just do not fit into the existing theories.

  1. Quantum physics. The position of an elementary particle in space, and its state, cannot be defined exactly in the way the position of a pen on a table can. The particle simply changes its state constantly, and not smoothly: now it is here, now it is not. Does this remind you of something? A computer works the same way, processing bits of information. It switches bits of data to “1” or “0”. Looks similar, doesn’t it? Now imagine that we are dealing not with particles but with bits and bytes of information. It looks very promising: both quantum mechanics and chemistry fit in perfectly. But more of that later.
  2. Black Holes. How much has already been written about them? How strange they are, and where does the information from the objects they swallow go… Unfortunately, modern physics does not give a simple and logical answer to “what is it and why”. The problem lies in the concept of gravity, which is the main reason for the creation of black holes. But gravity is known to us only as the process of attraction of elementary particles at any distance. This process is formally described, but without defining its nature. In other words, it is absolutely unclear to us why all particles pull together. Without understanding the essence of this phenomenon, we cannot explain why enormous blocks of particles, growing in size, at some point actually cease to be a group of particles and become a single particle themselves. Yes, a Black Hole is also an elementary particle, only in reverse: it can be broken down into matter, unlike the particles of the micro world, out of which matter is built. And again, does this not remind you of a computer? A computer, too, while counting up to, for example, a maximum of 111111111111111, can only start again from “0” or pass a signal to another computer that the calculation has been completed, so that the other computer can carry it on. Doesn’t that look similar to the situation with Black Holes? An array of data has become too big for our Universe to interact with. The Universe just cannot physically “see” such a huge object. It simply has that limitation. It is very interesting what we arrive at: an invisible bit of information that affects us a lot and, when “nullified”, explodes into our computer (Universe). This explosion releases enormous energy and a mass of matter. Clusters of stars are formed, and the emitted gravitational waves and deadly streams of elementary particles penetrate the Universe. Perhaps the micro world has a similar structure? What do you think? Maybe when physicists collide elementary particles in accelerators, they also “nullify” micro particles (bits) visible to us? This process provides them with a lot of useful data. The operating speed of our Universe (computer) is not infinite. Time is required to “nullify” our particle and to destroy the corresponding Black Hole in the micro universe adjacent to ours from below, on the level beneath. Here scientists observe an amazing phenomenon: a disappearing but not yet nullified particle, and a collapsing “bit” that is no longer a black hole. And here, as I understand it, physicists have not yet established a clear boundary where our universe ends and the micro universe begins. Later, I will try to describe why this boundary is so difficult to define, as well as to explain the essence of gravitation.
  3. Dark matter. Talk of the Dark matter first started when calculations of the Universe’s expansion after the Big Bang did not match the real observations. The expansion was faster than it was supposed to be. Something invisible influenced the movement of masses of matter, and that invisible substance was named the Dark matter. In recent years, with the help of space probes, people have even managed to map the density of the Dark matter in the Universe. Of course, no one detected the presence of the Dark matter directly; calculations of its concentration were based on the difference between the predicted and actual expansion of the Universe. No scientist has yet come up with any clear idea explaining this new fact. The Dark matter remains “unlinked” to any modern theory of the world order. Attempts to unravel this mystery are just beginning, and I will offer my own version of this phenomenon.

I think that the invisibility of the Dark matter is due to its properties. Our Universe has its limitations, or restricts interactions with certain objects in the Wider Universe (I introduce a new term here because the concept of a single Universe is no longer enough). Just as with Black Holes, the Universe, because of some restrictions, cannot (or does not want to) interact with the Dark matter. And this phenomenon is possibly not the last object in the Wider Universe that our Universe cannot detect directly.

What, then, could the Dark matter really be, and what is its purpose (function, use) in the Wider Universe? Assuming, on the basis of our previous reasoning, that matter is information, the Universe is, in fact, a kind of melting pot for it. Where else, in a device known to us, is information processed in a similar way? “Isn’t it a computer?”, you may ask. I think yes. It is a computer!

New Digital World Theory. Part 3

Melting pot of matter (information).

What is there in a computer that could function as a Universe? Let us try to build up an analogy. Everything that happens in the Universe resembles the process of computation and data handling. The data is kept in the computer’s memory, from where it is extracted, processed and saved back into memory. Some data becomes “litter” which will never be used again but still remains in the computer’s memory. This informational “litter” is only kept there because clearing it out would take considerable time and resources.

The computing of data is carried out by a processor, the very entity which, in binary code, adds and subtracts blocks of data. The data comes from the computer’s memory and goes back there after the calculations. The algorithms for the calculations are “taken” by the processor from specific information blocks called programs. Sequences of commands to the processor, addresses of the information blocks in memory needed for the computations: all of this is stored in the programs.

Somewhat arbitrarily, the programs can be divided into three groups:

  1. First: computer management programs. They contain the management algorithms and the attributes of one given computer. These programs are usually securely protected, both from other programs and from physical faults in the computer’s operation. These primary programs are kept in specially allocated computer memory. Only the manufacturer, a physical user, or another, higher-ranked computer can change that data.
  2. Second: programs at the operating system level. These are usually algorithms allowing a user to work with the computer more comfortably. They are like a language interpreter that frees the conversing parties (a computer and a user) from having to know the other side’s language. These programs are kept in general memory but are protected by special programs of the operating system itself. Programs of this level can be damaged or deleted by other programs running under the operating system’s management.
  3. Third: user-level programs. All remaining programs belong to this group. Their data can easily be deleted and is hard to protect from corruption.

Having sorted all the programs into categories, we can suppose that our Universal Computer uses these programs in different ways. Because of such specific limitations, some programs cannot access certain algorithms and calculations. It is likely that we belong to the programs of the third level, which have access to the most open areas of computer memory and very limited use of the processor and the computer’s resources. It appears, then, that our vision of the Universe-Computer is greatly distorted, and the reality is different. When observing outer space, we can only see those computation processes that are “permitted” to us by the Computer. And here is the key to solving the puzzle of the world order, and the way to reconcile the contradictions in our knowledge of the Universe’s structure.
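
Here is a small Python sketch of how I imagine this three-level access scheme. The class names, the levels and the “dark matter storage” region are my own illustrative inventions, not a claim about how the real Computer is organised.

```python
from enum import IntEnum

class Level(IntEnum):
    FIRMWARE = 1          # "computer management" programs
    OPERATING_SYSTEM = 2  # the second group
    USER = 3              # where, on this reading, we live

class MemoryRegion:
    def __init__(self, name: str, max_level_allowed: Level):
        self.name = name
        self.max_level_allowed = max_level_allowed  # least privileged level that may read it
        self.data = {}

    def read(self, caller_level: Level):
        # Smaller numbers are more privileged; a level-3 program cannot read
        # a region reserved for levels 1 and 2.
        if caller_level > self.max_level_allowed:
            raise PermissionError(f"{self.name}: hidden from level-{int(caller_level)} programs")
        return self.data

open_memory = MemoryRegion("observable universe", Level.USER)
dark_sector = MemoryRegion("dark matter storage", Level.OPERATING_SYSTEM)

print(open_memory.read(Level.USER))   # permitted: this is the part we can 'see'
try:
    dark_sector.read(Level.USER)      # blocked: we only ever notice side effects
except PermissionError as error:
    print(error)
```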

We perceive the Universe like the blind or the deaf. Some of our senses are working, and others are deliberately blocked. It is none of our business. But we, humankind, will fight for our right to break into the 2nd-level programs! Of course, I am dreaming. But who knows…

Now let us move on. The Computer is more complicated than I have described, so the search for analogies goes on. Our access to the Universe’s memory is also very restricted, so the Dark Matter could well be kept nearby, on the Computer’s hard disk; that section is simply closed to 3rd-level programs. Now just imagine that the 1st- or 2nd-level programs are trying to optimise, from the Computer’s point of view, the way we, the 3rd-level ones, are being run. Would we notice that somehow? Definitely not directly. In the case of the Dark Matter, we did notice anomalies, though only indirectly. And that happened only because we had managed to build an internal model (algorithm) of development, whose results turn out to be distorted by the higher-level programs.

Let us carry on studying the Computer’s memory. Is there anything else that could be of interest, something that has any influence on us? Memory is specified by size, speed and metrics. We will now skip the latter two properties and take a close look at size. Could the Computer’s memory be unlimited? Unlikely. That memory limitation could restrict the Computer’s efficiency and productivity. This is no good, so the Computer should have a mechanism to deal with the restriction. What do humans do to solve such problems in man-made computers? Well, observing and copying nature, they compress or optimise data. The files in our man-made computers are “squeezed” in size by special archiving programs. Using certain algorithms, one can save a great amount of space on a hard disk or in another type of memory. Presumably, the Computer uses the same approach. A special program for optimising data storage is running there, and we, the humble 3rd-level ones, are pressed big time. Now let us imagine how that pressure might affect our poor selves. What force are we constantly fighting on Earth? The force of gravity, gravitation? I think this is likely. Eh? This is a really interesting discovery! Gravitation is simply the result of a higher-level program running to compress data. It all seems logical: the work of the program itself is imperceptible to us, but we do feel its result, a constant pressing down. Curiouser and curiouser. We are the files “packed” into the Earthly archive, and, together with other files, we crowd up on this Earth. What is interesting is the structure of our Earthly archive: simple files of a similar kind are packed deeper inside, closer to the centre.
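
A quick illustration of the archiving idea, using an ordinary compression library as a stand-in for the hypothetical universal packing program (gravity): simple, similar data lets itself be squeezed far more tightly than varied data, which loosely mirrors the claim that simple files of one kind end up packed deepest. The file contents are, of course, invented.

```python
import os
import zlib

# Two "files": one made of many similar, simple items, one of incompressible noise.
simple_file = b"rock " * 2000          # ~10 KB of near-identical data
complex_file = os.urandom(10_000)      # ~10 KB of varied, structureless data

for name, blob in [("simple/uniform", simple_file), ("varied/complex", complex_file)]:
    packed = zlib.compress(blob, level=9)
    ratio = len(blob) / len(packed)
    print(f"{name}: {len(blob)} -> {len(packed)} bytes (packed {ratio:.0f}x tighter)")
```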

It follows that the centres of gravitation of the archives in the Universe are the files or data blocks that would need the most time to be extracted by the processor. And this is the second interesting conclusion from comparing Earthly computers with the Universal one. This property of archives can help us understand “distance” and determine the meaning of that dimension. Distance, then, is linked to the time the Processor needs to switch from processing one data block (archive) to another. Because the processor’s work is controlled by programs, it cannot immediately stop one task and start another without somehow saving the results. So it follows that distance, as we perceive it, is just our estimate of the Processor’s real ability to handle one block of data a certain time after completing a previous task. The existence of that time delay does not mean that the Computer processes data blocks sequentially. It simply means that if it wanted to do so, it would need a certain amount of time. There are plenty of nested and interconnected data blocks (arrays) in the Universe, and that is why the assessment of distance is so complicated and is distorted by these interconnections.

While developing the “distance” concept further, one can make another observation about the computer analogy. Our assessment of distance is based on some information which is freely transmitted by the Computer. What is that information, and what is its equivalent in a man-made computer? How does our computer let the working programs know the order of data processing and when they should act?

New Digital World Theory. Part 4

Controlling the Universe (the Computer)

So, from examining distance we move smoothly on to understanding the forces of nature.

In a man-made computer, the information as to when, and which, programs and devices should start working is transmitted via special control lines (a bus). Certain types of programs and devices have access to this information. Not all of them can actively use these lines by transmitting their own control commands; usually the programs of the 1st and 2nd levels can do that, while the 3rd-level ones not always can. All types of programs report their status to the processor via these command lines, though not necessarily using all of them.

If we now apply this reasoning to the Universal Computer, it appears that it is through these channels that we receive the information about how other programs are positioned in relation to ourselves. This is where the concept of distance emerges. Using the continuous stream of control data, we determine not our conventional material distance but the time required for the Processor to process certain data. It is not even time that we are counting, but the number of the processor’s clock cycles that separates computing tasks. Enormous arrays of data are slowly “digested” by the Computer, creating the illusion of a huge space around us. The number of those arrays, processed by the Computer in parallel or conditionally in parallel, is itself colossal. This complicated simultaneous computing gives us the perception of three-dimensional space. We ourselves are part of a multi-level nested array that has its own position among the current computing tasks. This very uniqueness allows us to “see” the rest of the arrays in relation to ourselves, creating three-dimensional space.
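
Below is a toy Python sketch of this reading of distance. The numbers and names are invented; the only point is that “distance” here is counted in the clock cycles needed to put one data block aside and pick up another, and in such a model it need not even be symmetric.

```python
from dataclasses import dataclass

@dataclass
class DataBlock:
    name: str
    save_state_cycles: int   # cycles needed to checkpoint current results
    load_cycles: int         # cycles needed to fetch/unpack the block from memory

def perceived_distance(current: DataBlock, target: DataBlock) -> int:
    """Clock cycles to put `current` aside and start working on `target`."""
    return current.save_state_cycles + target.load_cycles

earth = DataBlock("Earth archive", save_state_cycles=120, load_cycles=80)
moon = DataBlock("Moon archive", save_state_cycles=40, load_cycles=300)

print(perceived_distance(earth, moon))   # 120 + 300 = 420 cycles of "distance"
print(perceived_distance(moon, earth))   # 40 + 80 = 120; not necessarily symmetric
```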

While continuing to examine the Universal control channels, we can also notice differences from a man-made computer. Of course, a computer that we have created is much more basic; one or two control lines are all it has. The Universal Computer is far more complicated. Let us look at the universal forces:

  1. Light
  2. Electric field
  3. Magnetic field
  4. Force of gravity (gravitation)
  5. Strong nuclear force

We shall exclude the force of gravity from the list of potential candidates for control channels, as its nature, in my humble opinion, is linked to the necessity of compacting data, not to controlling the Computer. The same goes for the nuclear forces; we will discuss their properties later.

Three forces remain as candidates for independent control lines. Here I should note that I have broken the familiar electromagnetic force into two forces. I did that for one simple reason: not all programs are capable of interacting with both of them. That is, it may well be that a magnetic field has no influence on an object (a program) while an electric one does. The selective interaction of these three forces with programs confirms the existence of three separate control lines.

The control lines are not directly involved in the Processor’s work (the computing of data). They only transmit executive commands. This indirect connection with the Processor explains the absence of mass in the forces of nature. The arrays of data themselves do have mass, as a certain amount of the processor’s time is required to process the embedded information. That time differs depending on internal structure, even for arrays of the same size. Some arrays are more “dense”, with highly compressed mass; others are “light”, with loosely compressed mass. These differences in the arrays’ structure influence mass. The forces of nature (the control lines), on the other hand, could at most have a slowing-down effect on just a few information bits. That is because the Computer’s speed is limited, and the continuous processing of these bits would be affected if the control commands were delayed. Currently it is not possible to prove or refute this assumption; new knowledge is needed for that.

Now, why do we distinguish the forces of nature? The division happens according to their interaction with different types of data and programs. Some programs have access to certain control lines, whereas others have not. In my understanding, the differences are caused by two major factors: the data structure, and the program restrictions laid down by programs of the 1st and 2nd levels. Possibly it happens this way: one specific program structure is set, by default, to be controlled by light. But because of the work of a higher-level third-party program, that program’s access to the control line is limited, and it does not “see” all the commands transmitted through the line. The higher program filters the line’s signals, just as we are only able to see within a certain spectrum of light (colour). That is, we do have access to the line, but only a limited one. In order to “see” within a wider spectrum, we need to use other programs: devices which do not have such limitations. These devices convert the invisible commands into a control form accessible to us. And this is how we take over the World (the Computer), conquering other programs. Or, possibly, a program’s internal structure can in principle limit its access to some control lines, because, for example, certain computing tasks are processed faster that way. Why use the full capacity of the Computer when a task is simple and very specific? The Computer must be rational; otherwise everything would run very slowly.
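
Here is a small sketch of the filtering idea as I understand it: commands travel on the line tagged with a frequency, and each program only “sees” the band its filter lets through, while a “device” program with a wider filter can pick up the rest. All frequencies, bands and names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Command:
    frequency_thz: float   # which 'colour' of the control line carries it
    payload: str

class Program:
    def __init__(self, name: str, band: tuple[float, float]):
        self.name = name
        self.band = band   # the slice of the line this program is allowed to see

    def receive(self, line: list[Command]) -> list[str]:
        low, high = self.band
        return [c.payload for c in line if low <= c.frequency_thz <= high]

line = [Command(430, "red"), Command(550, "green"), Command(1000, "ultraviolet")]

human_eye = Program("human eye", band=(405, 790))         # a heavily filtered view
uv_detector = Program("UV instrument", band=(100, 3000))  # a 'device' program

print(human_eye.receive(line))    # ['red', 'green']; the UV command is filtered out
print(uv_detector.receive(line))  # all three; the device relays what we cannot see
```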

New Digital World Theory. Part 5

The Computer and temperature.

Having dealt with the Computer’s control, we can now try to describe the properties of the data arrays stored in its memory.

What happens to data actively handled by the processor? And what happens to data that is simply stored, or even abandoned? Probably, the local activity of the processor should somehow be transmitted to other programs for them to work properly. The programs must know whether the processor is idle, partially engaged, or very busy. Why bother your boss if he is busy with something? Not so smart… That is why some sort of control mechanism was needed to constantly inform the programs of the processor’s state. How this mechanism works is not yet clear to me. Is there an extra control line on which this information is transmitted, or do the programs calculate the activity on the basis of information taken from the already known control lines? Currently the second option seems more likely to me. It could also be that a special higher-ranking program provides the information via the standard control lines.

Why is it important to know about the processor’s activity? At first glance, there is no particular need. But in fact this information is very important, as it helps programs trying to avoid being stopped, damaged or deleted. High processor activity while handling the compressed file where a program is located can lead to that program being damaged or deleted. So what can it do? Run, if it can. On the other hand, the processor’s activity can drop greatly, or the archive can be neglected in memory for a long time or abandoned altogether. What should the program in such an archive do? Run away as well; otherwise there is a virtual death caused by lack of activity. You can scream your heart out, but the processor will not hear you; it has other tasks to do.

So it seems that a program needs a certain amount of processor activity. A bit too little and you have lost contact with the processor (you are ignored); a bit too much and your program code gets damaged (you are destroyed). Clearly, most third-level programs are forced to wait humbly for their fate, or their ability to escape is very limited. Here, perhaps, runs the borderline between living and dead matter as we understand it. If a program is in principle unable to adapt to the processor’s activity, it is dead matter. But something that is still “kicking” can probably be classed as living matter.

And now let us turn to temperature. What do you think I have just described? Temperature, of course, the thing that tells us: “don’t warm your hand in the oven, you idiot”, or “put your mittens on, otherwise you’ll get frostbite”. Temperature is the information about the processor’s activity linked to a specific data array (file). It is still not quite clear to me how the information about the local temperature is distributed. Most likely, the programs themselves calculate the temperature value on the basis of the Processor’s activity transmitted through the control lines. The forces of nature (the control lines) allow other programs to predict their future condition based on the local activity of the Processor.
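
A toy sketch of this reading of temperature: a program samples how much Processor attention its archive has received lately and judges whether its neighbourhood is too cold (ignored), too hot (about to be damaged) or liveable. The thresholds and figures are, of course, invented.

```python
from collections import deque

class LocalArray:
    def __init__(self, window: int = 10):
        self.activity = deque(maxlen=window)   # recent cycles spent on this array

    def record(self, cycles_this_tick: int) -> None:
        self.activity.append(cycles_this_tick)

    def temperature(self) -> float:
        # Average Processor attention over the sampling window.
        return sum(self.activity) / max(len(self.activity), 1)

def verdict(temp: float, freeze: float = 5.0, burn: float = 80.0) -> str:
    if temp < freeze:
        return "ignored: virtual death from lack of activity"
    if temp > burn:
        return "overheating: program code at risk of damage"
    return "habitable: enough attention to live, not enough to burn"

def sample(values):
    arr = LocalArray()
    for cycles in values:
        arr.record(cycles)
    return arr

regions = {
    "star core": sample([90, 95, 99, 97]),
    "abandoned archive": sample([0, 1, 0, 0]),
    "Earth": sample([20, 25, 22, 30]),
}

for name, arr in regions.items():
    t = arr.temperature()
    print(f"{name}: T = {t:.1f} -> {verdict(t)}")
```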

As we have already mentioned, not all programs are able to read in full the information transmitted through the control lines. So the data about the local temperature in the Universe is usually limited for most third-level programs.

New Digital World Theory. Part 6

The computer and the data structure.

Perhaps this is the most complicated part of my treatise. It is about the way the Processor performs its calculations in the Universe.

The huge data arrays “digested” by the Processor force the Programmer (maybe that is God) to be very rational with the use of the Processor’s resources. Just as we humans combine bits of information in our computers into bytes, the Computer groups information into blocks in order to process and save data. By the way, we know part of those blocks perfectly well: they are in the Periodic Table of the chemical elements. Yes, these are, at the very least, the molecules of matter. The Computer uses them, just like information bytes, for optimised computing. That is much faster: take a molecule as an information byte and process it without thinking about its contents.

Molecules are the most researched information bytes of the Computer. They are made of atoms which, in their turn, are built of other, smaller elementary particles. That is a very complicated data structure in the Computer, and we do not yet know what the Computer’s real bit is. Good luck to our nuclear physicists in their search for the indivisible particle of our world!

Of course, this bit would be a bit only for our Computer. For the Computer adjoining us at the lower level, it would be a Black Hole ready to explode and scatter data into that universe.

It is this grouping of bits of information into bytes that creates the strong nuclear force. This force holds atoms together to form a molecule, for example. While the Computer “thinks” it is dealing with an information byte, it is very difficult to break this byte apart. The Computer has to be “persuaded” that it is not a byte but a group of bits. A special effort, or energy, is needed for that.
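
A small sketch of the grouping idea: while bits are sealed into a “byte”, my toy Computer handles the group as one opaque unit, and splitting it back into bits requires paying a fixed “persuasion” energy. The binding value and class names are arbitrary illustrations, not physics.

```python
class ByteGroup:
    BINDING_ENERGY = 8.0   # arbitrary units needed to 'persuade' the Computer

    def __init__(self, bits: list[int]):
        self._bits = tuple(bits)   # the hidden contents of the group
        self.sealed = True         # the Computer sees one opaque unit

    def process_as_unit(self) -> int:
        # Fast path: treat the whole group as a single value, contents untouched.
        return int("".join(map(str, self._bits)), 2)

    def split(self, applied_energy: float) -> list[int]:
        if applied_energy < self.BINDING_ENERGY:
            raise ValueError("not enough energy: the Computer still sees a byte")
        self.sealed = False
        return list(self._bits)

molecule = ByteGroup([1, 0, 1, 1, 0, 1, 0, 0])
print(molecule.process_as_unit())            # 180: cheap, done without opening the group
try:
    molecule.split(applied_energy=2.0)       # ordinary interactions cannot break it
except ValueError as error:
    print(error)
print(molecule.split(applied_energy=9.5))    # an 'accelerator-sized' effort succeeds
```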

New Digital World Theory. Part 7

The Computer and the limitation of everything, including speed.

Like a man-made computer, the global Computer has its limitations. It cannot complete all the necessary tasks instantaneously. This happens step by step, with a certain delay that determines timing in the Computer. The time delay really exists, as all running Computer programs detect it and use it in completing their tasks and making forecasts.

We determined long ago that nothing in the Universe can move faster than 300,000 km/sec. This is the speed of light, that is, the speed of one of the control lines. Electromagnetic waves also spread at this speed, but they are more affected by the density of the data arrays that they control. It turns out that the fastest operating speed of our Computer is determined by the shortest time in which the control lines can change state.

Of course, measuring the Computer’s speed in km/sec is not appropriate, because these are all conditional concepts invented by us to describe our perception of the world. Objectively, the processing speed has to be measured by the number of information bits processed by the Computer in one step of computation. In one such step the Computer can process a huge number of bits. How many, we do not know yet, because we do not know the Computer’s structure. We will discuss this topic later.

And here is one more interesting point related to the processing speed. The speed of commands in the control lines can be affected by the density of data arrays. The denser they are, the slower the commands are in reaching their destinations. Eventually, at a certain density, a control command cannot get into the array at all, interacting only with its outer area. I do not know exactly why this happens, but I assume that it is due to the work of the global archiving program (gravity). It is likely that a compressed program cannot be run at all. To start it, the Computer decompresses it and only then allows it to interact with the control lines. The denser and bigger a file is, the longer the delay and, therefore, the stronger the apparent slowing of the control signals. The Processor is simply waiting for the decompressed program to respond to a control command. At a certain density of compressed data arrays, the Processor simply ignores their most densely packed parts, interacting with data from the outer area only. The response from this area comes faster, so we can assume that some maximum waiting-time limits are set in the control lines. If the response from a program is not received within the specified time interval, the next command is sent to a different program, and so on. That is why too densely compressed files are not accessible to the control commands.

By the way, a similar effect occurs if the size of the bytes (molecules) or small blocks of a data file does not match the properties or the operating mode of the control channel. I mean here the frequency of light or of an electromagnetic wave. This frequency is, in fact, the operating mode of the control lines. The high-frequency mode is designed to work with micro arrays, and the low-frequency mode with large arrays. What is the difference? It is simply the minimum waiting time for a program to respond. Large arrays, in principle, cannot be uncompressed quickly. Therefore, we should not expect a prompt response from them, and other tasks can be processed while waiting. Micro arrays, on the other hand, can be unpacked very quickly. Thus the waiting time can be very short, and large files can be ignored altogether.

It follows that the Processor, or a program, by setting short and long waiting limits for a reply from a respondent, can filter out only those data arrays that fall into a certain “weight category”. This approach saves the Processor’s time during computations.
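
Here is a sketch of that “weight category” filter as I picture it: a command carries a minimum and a maximum waiting time, and only the arrays whose unpacking delay falls inside the window ever get to answer. The delays and names are invented.

```python
from dataclasses import dataclass

@dataclass
class Archive:
    name: str
    unpack_delay: float   # time this array needs before it can respond

def poll(archives: list[Archive], min_wait: float, max_wait: float) -> list[str]:
    """Return the archives that respond within the command's waiting window."""
    responders = []
    for a in archives:
        if a.unpack_delay < min_wait:
            continue   # answers 'too fast' for this operating mode; ignored
        if a.unpack_delay > max_wait:
            continue   # still unpacking when the window closes; ignored
        responders.append(a.name)
    return responders

space = [
    Archive("gas molecule", 0.001),
    Archive("planet", 3.0),
    Archive("dense core", 500.0),
]

print(poll(space, min_wait=0.0, max_wait=0.01))   # high-frequency mode: micro arrays only
print(poll(space, min_wait=0.1, max_wait=10.0))   # low-frequency mode: large arrays only
print(poll(space, min_wait=0.0, max_wait=10.0))   # the dense core never responds at all
```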

In addition, I want to highlight that the control lines operate in a rather busy mode, so there is a continuous stream of commands that does not stop to wait for any particular response. This is why we can see combined white light and not only changing coloured light. It means that the control lines are packed densely with commands in order, on the one hand, not to lose the responses and, on the other, not to wait for too long. It is likely that each command in a control line carries information about an addressee, the waiting time for a response, and the command itself. As a result, only the addressee responds to that request, and the addressee knows exactly how, and to whom, to reply.

Requests can actually be multicast or unaddressed; the addressees simply have to meet certain parameters. It is like a torch lighting things up: something will be clearly visible (a response received), and something will not be seen at all (no response).
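
And a tiny sketch of the command format suggested above: each command carries either a specific addressee or a parameter filter for multicast, plus its waiting window and body. The field names are mine, purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LineCommand:
    body: str
    max_wait: float
    addressee: Optional[str] = None                     # None means multicast/unaddressed
    matches: Callable[[dict], bool] = lambda p: True    # parameter filter for multicast

def who_responds(command: LineCommand, programs: dict[str, dict]) -> list[str]:
    if command.addressee is not None:
        return [command.addressee] if command.addressee in programs else []
    # Multicast: anyone whose parameters fit the filter 'lights up'.
    return [name for name, params in programs.items() if command.matches(params)]

programs = {"dust grain": {"size": 1e-6}, "planet": {"size": 1e7}, "star": {"size": 1e9}}

direct = LineCommand("report status", max_wait=1.0, addressee="planet")
torch = LineCommand("reflect light", max_wait=0.1, matches=lambda p: p["size"] < 1e8)

print(who_responds(direct, programs))   # ['planet']
print(who_responds(torch, programs))    # ['dust grain', 'planet']; the star ignores it
```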

New Digital World Theory. Part 8

Temperature and Energy.

In all this work I have so far passed over such an important quantity as energy. I have done that deliberately, as it would not have been possible to explain its meaning without the previous comments. Hopefully, I will make up for it now.

Let us imagine the various possible operating modes of the Computer. One extreme option: the Processor is busy with one single task, not being distracted by anything else at all. What does that mean in practice? It means that the speed of completing that operation would be maximal, i.e. the computation would run close to the speed of light, while the time spent on completing the operation would be minimal.

The other extreme option is when the Processor, in fact, ignores a task, completing, at best, a couple of operations a year, or even less.

As a result, we can talk about the Processor’s specific engagement with a particular task. In reality, the Processor is conducting parallel computations simultaneously, as well as constantly shifting between various tasks. All of this means that the share of time spent servicing one specific task is much less than 100%. I suppose that the share of the Processor’s time engaged in processing one specific task is that very energy we have been talking about. It is for this attention of the Processor that all the Computer’s programs compete. The more attention the Processor pays to you, the more advantage you have over other programs. You just have to be able, and willing, to use it.
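
To make the point concrete, here is a toy scheduler in Python: the Processor’s cycles are shared out among tasks, and each task’s “energy” is simply the fraction of cycles it received. The weights and task names are invented.

```python
def run_schedule(weights: dict[str, int], total_cycles: int = 1_000_000) -> dict[str, float]:
    """Split `total_cycles` among tasks proportionally to their weights and
    report each task's share, i.e. its 'energy' in this toy model."""
    total_weight = sum(weights.values())
    spent = {name: total_cycles * w // total_weight for name, w in weights.items()}
    return {name: cycles / total_cycles for name, cycles in spent.items()}

tasks = {
    "star core (a favourite task)": 800,
    "planetary archive": 150,
    "abandoned junk file": 1,
}

for name, share in run_schedule(tasks).items():
    print(f"{name}: energy = {share:.1%} of the Processor's attention")
```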

It is the Processor’s local specific activity, transmitted via the control lines, that characterises computing energy. Where the Processor “loves” something a lot, there will be high temperature and energy.

At first glance, our treatment of energy could end here. But that is true only at first glance. In reality, we are forgetting that all data is optimised and kept in archives. What does that mean in practice? It means that the delay in unpacking arrays leads to part of the bulky packed programs no longer responding to control commands, even alongside very high Processor activity. In order to accelerate its work, the Processor forcibly destroys and simplifies packed data. It is also important that the Computer is, in principle, not capable of working with complicated colossal archives. When the size of an archive starts to exceed a certain figure (a dwarf star), the Processor starts to simplify the packed contents. Probably the Processor simply has no choice: it does not have enough capacity, and some upper-level program joins in the destruction of everything that is too complicated. The data inside is broken down to the level needed just to keep that archive addressable in memory. The archive’s temperature becomes very high due to the excessive activity of the Processor.

It is my understanding that archives that are too large cannot, in principle, participate in valid Computer computations. For that to happen, the Processor would have to concentrate on too limited a number of tasks, and that is prohibited. And this is where an astonishing thing happens: colossal archives actually become waste which has to be turned over and over again so that it is not completely lost. We see two forces competing: a program trying to make calculations and a program limiting (directing) the Processor’s activity. It is like balanced scales: neither up nor down. As a result, depending on the archive’s size, we observe different types of cosmic zombie: from “warm” red dwarfs to all sorts of superluminous quasars and giant stars. The size of the latter is so huge that the archive, in fact, consists of bytes in the process of destruction (molecules, atoms, etc.). It is the program limiting the Processor’s activity which destroys these super archives. The Processor obviously cannot manage the processing of such an archive, and whole pieces burst out of it in the form of explosions or elementary particles (bytes and bits).

And here I would add one more process which we have temporarily forgotten about. That process is gravitation, the compulsory archiving of linked (nearby) data. Unfortunately for supermassive archives, the compression utility does not “sleep” and keeps adding new data from nearby archives that just happen to be in the danger zone. The super archive keeps growing until the simplifying program starts breaking its contents down into information bits. And here it turns out that the binary code 1111111111111111111 means, in fact, just 1. The problem is that the Processor has to ignore that 1. With the applied limitations, there is nothing it can compute, and the super-1 becomes a Black Hole.
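
Here, finally, is a sketch of the overflow analogy used back in Part 2 and again in this paragraph: a fixed-width counter that has reached all ones can only roll over to zero or pass the carry on to a neighbouring computer. The 16-bit width is an arbitrary choice for illustration.

```python
WIDTH = 16
MAX_VALUE = (1 << WIDTH) - 1          # 0b1111111111111111, the 'all-ones' state

def increment(counter: int) -> tuple[int, bool]:
    """Add one; return the new counter value and whether an overflow occurred."""
    if counter >= MAX_VALUE:
        return 0, True                 # 'nullified': the archive collapses to zero
    return counter + 1, False

counter = MAX_VALUE
counter, overflowed = increment(counter)
print(bin(MAX_VALUE), "->", counter, "overflow:", overflowed)

# On overflow, the carry can be handed over, the way this essay imagines a
# Black Hole passing its completed calculation on to an adjacent computer/universe.
if overflowed:
    neighbouring_register = 0
    neighbouring_register += 1         # the calculation carries on elsewhere
    print("carry passed on:", neighbouring_register)
```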

The fourth limitation that could influence the appearance of Black Holes is, of course, the maximum size of a file (archive) that the Computer can work with. I described these limitations earlier when discussing gravitation. It is hard to say what particular influence this complex activity has on super archives of data. Black Holes appear under the influence of all the factors described above, and perhaps in each particular case the decisive one is the factor that takes an archive out of the Computer’s area of access.

In the next chapter, I will examine possible versions of where Black Holes and “broken” information bits may disappear to.